Artificial intelligence (AI) is reshaping the architecture profession in real time, from how architects design and model to how they interpret building codes and engage with clients and communities. As AI systems increasingly inform or generate design outcomes, architects are placed in the position of validating, endorsing, or relying upon the outputs produced by these tools.
NCARB’s Futures Collaborative, a committee of expert volunteers, has been exploring how emerging trends and technology—including AI—will impact architecture practice and regulation. From its research, the collaborative has identified five key factors architects should keep in mind when considering the impact AI will have on the role and regulation of the architect. Read the Future Trends Report to learn more.
- AI is redefining professional competency, but education and regulation have not kept pace. While some architecture firms have rapidly adopted AI—integrating it into platforms for space planning, zoning analysis, code compliance, and more—education and regulatory systems have been much slower to incorporate AI into their frameworks. NCARB is exploring how AI will fit into future licensure assessments, but how else the profession can ensure AI literacy should remain an ongoing conversation in the architecture community.
- The pace of technological change is outstripping the rhythm of regulation. As part of Pathways to Practice, NCARB is exploring how the licensure model of the future could allow for a more flexible, agile approach so that regulators can more easily adapt to new skills, tools, or public safety concerns. Encouraging an adaptable, up-to-date licensure approach is essential to ensuring that regulation of the profession keeps pace with modern practice.
- Architects remain fully accountable, even as AI tools increase opacity. Many users of AI—including architects—do not fully understand or control the internal logic of the AI tools they use. This opacity presents a challenge when it comes to maintaining responsible control of work completed by an AI tool. Even if architects don’t understand how an AI tool produces a result, they must still evaluate and understand what it produces. Regardless of disclaimers issued by software developers or the autonomy of machine-generated outputs, the architect of record is still bound by the standard of care.
- Bias in AI design workflows is an unregulated, present-day concern. Many current AI tools risk reinforcing patterns of inequity embedded in historical data sets, such as those related to land use, zoning, and housing. Without a standard mechanism for bias auditing, architects are left without formal guidance on how to evaluate or mitigate harm, raising the possibility of regulatory blind spots around equity and justice. As AI use becomes increasingly commonplace, architects should consider what tools, standards, and training would help them evaluate whether an AI tool is perpetuating systemic inequities.
- Transparency is a prerequisite for trust—yet explainability is often absent. When evaluating AI tools, be sure to consider how much insight a tool’s developer offers into the underlying systems that drive it. While full explainability might be difficult to achieve given the complexity of AI’s deep learning systems, measures such as documentation requirements, disclosure of AI tool usage in design narratives, or “human-in-the-loop” verification procedures can strengthen architectural oversight.
Want to learn more about trends impacting the future of architecture and regulation? Explore the Future Trends Report.
About the Futures Collaborative
NCARB’s Futures Collaborative is a volunteer-led effort composed of practicing architects and other experts from across the country. Since 2017, the collaborative has been exploring how the practice of architecture is evolving in both the near term and the long term. Through this research, NCARB can help ensure that the regulation of practice adapts proactively to change.