Before becoming an architect, all candidates must take and pass the Architect Registration Examination® (ARE®)—a multi-part exam developed with the help of hundreds of volunteer architects, psychometricians, and other professionals.

Interested in learning more about how the exam is put together? NCARB is committed to being transparent about how the ARE is developed and administered, so candidates, licensing board members, and the public can trust the validity of ARE results. In part one of this blog series, we explored the individuals involved in developing the exam. In part two, we’ll dive deeper into the process that exam questions go through before they become scored items. 

How do new items get added to the ARE?

Each new ARE question, or item, goes through a rigorous multi-step review process before being added to the exam as a pretest question. This process is similar to processes used for licensure exams in other professions and aligns with best practices in licensing assessments.

  1. First, each architect item writer completes their item writing assignment independently. In addition to drafting each item, writers must provide rationales for why the right answer is right, why the wrong answers are wrong, and why the item aligns with its assigned objective and cognitive level. Writers must also supply a reference for the item, which could be an industry-standard reference book, a resource in a case study, or the steps of a calculation.
  2. Newly written items are then reviewed by a second, more experienced architect item writer. If needed, the reviewer provides comments back to the first item writer to improve the item, ensure it aligns with the ARE objectives, and confirm it is technically accurate.
  3. Next, the newly written items all undergo initial review by NCARB staff editors for compliance with ARE item writing standards.
  4. The next step occurs at an in-person item review meeting. Each newly written item is displayed for a small group of additional volunteer architects, who review the item and discuss whether it’s appropriate for the ARE. These volunteers come from various regions across the country, work at firms of different sizes on many different types of projects, and have different levels of licensed experience, all of which helps ensure that items fairly reflect architectural practice without unintended biases or assumptions. By the end of the meeting, the group has either approved or rejected each newly written item.
  5. Approved items go through one last round of review by NCARB staff editors to ensure full alignment with the ARE Test Specification and all item writing standards. Finally, they are approved for pretesting and can be placed on a future exam form as a pretest question.

What is a pretest item?

A pretest item is an unscored question placed on the exam so that NCARB and our psychometrician consultants can monitor its performance, ensuring it is a fair and reliable item before it becomes a live, scored item. These items typically spend around one year on the exam as unscored items to confirm they are psychometrically and statistically sound and do not exhibit bias toward one candidate demographic or another. If an item successfully clears the pretesting phase, it becomes a scored exam item.

How many new items are added to the ARE each year? How many new items are included on each exam?

Each year, about 300–500 new items are added to the ARE item bank. The number varies based on how many architect volunteers NCARB works with each year and the specific item bank needs at that time.

The specific number of brand-new questions, or pretest questions, on each exam form ranges from six to nine, depending on the ARE division.

Do pretest items impact the time available for testing? How do they impact the exam score?

The time allotted for each division is based on the total number of items, both operational and pretest. Candidates don’t know which items are unscored, because pretest items are randomly placed throughout an exam. The exact number of pretest items per divisional form is listed in the content section of the ARE Guidelines.

As an example, Project Planning & Design (PPD) contains 91 scored and 9 pretest items. Although candidates will see 100 items when they sit for their exam, their score is calculated based on a maximum total of 91 points, or one point for every correct response on scored items. Pretest items have a scoring value of zero. Even if an individual answered all scored and pretest items correctly on this division, their score would still be calculated as 91 out of 91, or 100%.
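The scoring arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration only: the item counts come from the PPD example above, but the function and its names are assumptions, not NCARB’s actual scoring implementation.

```python
# Illustrative sketch of how pretest items carry zero scoring weight.
# Item counts for PPD come from the example above; everything else is
# an assumption, not NCARB's actual scoring system.

SCORED_ITEMS = 91   # operational items, worth one point each
PRETEST_ITEMS = 9   # unscored pretest items, worth zero points

def raw_score(correct_scored: int, correct_pretest: int) -> tuple[int, int]:
    """Return (points earned, maximum points) for a PPD exam form."""
    points = correct_scored * 1 + correct_pretest * 0  # pretest weight is zero
    return points, SCORED_ITEMS

# A candidate who answers all 100 items correctly still earns
# a maximum of 91 out of 91 points.
points, maximum = raw_score(correct_scored=91, correct_pretest=9)
print(f"{points}/{maximum} = {points / maximum:.0%}")  # → 91/91 = 100%
```

Because the pretest weight is zero, a candidate’s result depends only on their performance on the 91 scored items.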

Evaluation of ARE items doesn’t stop after they are approved for pretesting. NCARB and its psychometricians regularly evaluate the statistical performance, content relevancy, and alignment to standards of all items on the ARE. This includes both pretest and scored items. Items flagged for statistical performance, relevance, bias, or misalignment with standards are reviewed by a group of volunteer architects and are typically either edited and re-pretested for another year, or simply retired from the exam. This ensures statistically problematic or outdated items are continuously culled from the item bank.
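The kind of statistical screening described above can be shown with a toy example. Item difficulty (the proportion of candidates answering correctly) and discrimination (how well the item separates stronger from weaker candidates) are standard item statistics in assessment; the thresholds, field names, and data here are purely illustrative assumptions, not NCARB’s actual flagging criteria.

```python
# Toy illustration of flagging items on basic item statistics.
# Thresholds and data are illustrative assumptions, not NCARB's criteria.

def flag_item(p_value: float, discrimination: float) -> list[str]:
    """Return reasons an item might be flagged for volunteer review."""
    reasons = []
    if p_value < 0.25:          # almost everyone misses it: too hard or flawed
        reasons.append("too difficult")
    if p_value > 0.95:          # almost everyone answers correctly: too easy
        reasons.append("too easy")
    if discrimination < 0.15:   # fails to separate stronger from weaker candidates
        reasons.append("low discrimination")
    return reasons

items = {
    "item_A": (0.62, 0.35),    # healthy statistics: no flags
    "item_B": (0.97, 0.05),    # nearly everyone correct, poor discrimination
}
for name, (p, d) in items.items():
    print(name, flag_item(p, d) or "ok")
```

Flagged items would then go to human reviewers, mirroring the process above in which volunteer architects decide whether a flagged item is edited and re-pretested or retired.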

Why does NCARB pretest exam items?

Pretesting newly authored exam items is a standard process for NCARB that aligns with best practices in the assessment industry. Pretesting an item helps ensure its validity before it is used to assess whether a candidate is competent in a particular skill or knowledge area.

How does NCARB use statistics to analyze item performance?

NCARB uses statistics to assess the quality of items, but statistical performance is not the only factor that determines whether a question survives pretesting and becomes an operational scored item. Statistical analysis informs the peer review process, in which volunteer architect panels evaluate each pretest item and determine whether it’s appropriate to become a scored item.