You’re taking an exam. You’ve been sitting in front of the monitor for two and a half hours, powering through the Construction & Evaluation division of the Architect Registration Examination® (ARE®) 5.0—answering questions about how you should handle owners who can’t make up their minds and contractors who can’t follow the provisions of the contract.
Now, you’re starting the case studies. You stare at the monitor, struck by what seems to be an unusual scenario, and your mind can’t help but wander for a moment. “XYZ Architecture?” How oddly generic. “Is responsible for designing an indoor ultimate frisbee course and dog park?” How oddly specific. And “located in the central northern United States?” Well, that’s rather vague. Before diving into the next set of questions, you allow yourself to wonder one last time, How does NCARB come up with this stuff?
Outlining Content Areas
The development of ARE items, or questions, begins with what’s called a practice analysis: a study in which architects and other industry stakeholders are surveyed and interviewed to determine exactly what a competent, independently practicing architect needs to know. The results of the analysis are then used to create the test specification document, an outline that defines the structure and content of the exam. The current test specification defines the exam’s six divisions, which are subdivided into content areas, which are further broken down into specific objectives.
Drawing From Real-World Examples
The test specification is then given to volunteer architects to start writing ARE items. Most of these items are informed by the architects’ direct experiences while working in the profession, and the case studies are entirely drawn from such experiences. So, when you come across a case study dealing with an indoor ultimate frisbee course and a dog park, know that it was part of an actual project completed by one of your colleagues. Other identifying features, such as firm names and specific locations, are scrubbed from these case studies, which is why you’ll find terms like “central northern United States” and “XYZ Architecture.”
Newly written items are always reviewed by an experienced item-writing mentor who’s put their stamp on more than a few items. The mentor can either send the item back to its writer for revision or submit it to a professional editor. After editing is complete, the item is ready for committee review.
Vetting New Items and Case Studies
A few times each year, nearly 100 architect volunteers gather at Item Development and Case Study Subcommittee meetings to vet each item and case study in workshop settings similar to what many architects might remember—with affection or apprehension—from architecture school. Items and case studies are debated, modified, and polished before being declared “ready for use.” Not all items make the cut, though: committees might deem an item “faulty” for various reasons. On other occasions, an item may need only minor tweaking to make it exam-ready.
Pretesting Items at the Test Center
Items that do make the cut travel back to the editor for one last proofread, and then back to NCARB for a final review. Once NCARB signs off, the item is at last ready for the exam. However, newly developed but still unproven items aren’t simply thrown into your high-stakes exam. They must first make it through a “pretesting” period, during which they’ll appear on the exam but won’t count toward or against your final score.
During pretesting, NCARB tasks a team of psychometricians with gathering data on how each pretest item performs. As part of this process, the psychometricians consider questions like:
- Are too many candidates getting it wrong?
- Is one response chosen significantly more than others?
- Are candidates taking an unusually long time to answer the item?
If the answers to these questions don’t raise any red flags, then it’s safe to say the item is performing well and ready for operational status—at which point it will begin counting toward your exam score.
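To make those checks a little more concrete, here’s a minimal sketch of the kind of statistics a pretest item might generate. Everything in it is hypothetical: the response data, the thresholds, and the function names are invented for illustration and aren’t NCARB’s actual psychometric methodology.

```python
# Illustrative sketch only: hypothetical response data and thresholds,
# not NCARB's actual psychometric model or criteria.

from collections import Counter
from statistics import mean

# Each record: (selected_option, is_correct, seconds_to_answer) for one
# candidate's response to a single pretest item.
responses = [
    ("A", True, 95), ("B", False, 140), ("A", True, 80),
    ("C", False, 210), ("A", True, 70), ("B", False, 160),
    ("A", True, 88), ("D", False, 300), ("A", True, 102),
]

def summarize_item(responses):
    """Return simple performance statistics for one pretest item."""
    p_value = mean(1 if correct else 0 for _, correct, _ in responses)  # proportion correct
    option_counts = Counter(option for option, _, _ in responses)
    top_option, top_count = option_counts.most_common(1)[0]
    top_share = top_count / len(responses)
    avg_time = mean(seconds for _, _, seconds in responses)
    return p_value, top_option, top_share, avg_time

def flag_item(responses, min_p=0.30, max_option_share=0.90, max_avg_seconds=240):
    """Raise red flags of the kind described above (thresholds are made up)."""
    p_value, option, top_share, avg_time = summarize_item(responses)
    flags = []
    if p_value < min_p:
        flags.append(f"too many candidates answer incorrectly (p = {p_value:.2f})")
    if top_share > max_option_share:
        flags.append(f"option {option} dominates ({top_share:.0%} of responses)")
    if avg_time > max_avg_seconds:
        flags.append(f"item takes unusually long (avg {avg_time:.0f} s)")
    return flags

if __name__ == "__main__":
    flags = flag_item(responses)
    print(flags or "No red flags: candidate for operational status.")
```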
Monitoring Performance
By this point, each item has been discussed, tested, and found to be appropriate. That’s not to say appropriateness is permanent, though. Items are continually monitored for their performance, and if an item drifts out of the acceptable performance range, it is retired or removed from the exam.
A day might come when an item no longer serves as an appropriate and relevant assessment. When that day comes, the outdated item is pulled from the exam and never seen again at the test center. As for the items that remain (the operational ones), you can rest assured that they’re good items.