Responsible AI Use in Building Surveys: RICS March 2026 Standards for Ethical Practice and Client Trust

Nearly 40% of property buyers report they would lose confidence in a surveyor who could not explain how their report was produced — yet AI tools are now embedded in building survey workflows across the UK without any consistent ethical framework. That gap closed on 1 March 2026, when the Royal Institution of Chartered Surveyors (RICS) brought its first-ever Professional Standard on responsible AI use into full effect [3].

For building surveyors, this is not a distant regulatory concern. It is an immediate, practice-level obligation. The new standard sets out exactly how AI tools must be assessed, governed, and disclosed — whether a firm is using machine-learning defect detection, automated report drafting, or AI-assisted valuation tools. Understanding these requirements is now central to maintaining RICS membership, protecting clients, and sustaining professional credibility.

Key Takeaways 📋

  • RICS's first AI Professional Standard became mandatory on 1 March 2026, applying to all RICS-regulated firms and members using AI in surveying work.
  • Surveyors must conduct governance assessments before deploying any AI tool and document whether it has a "material impact" on outputs.
  • AI hallucinations, bias, and data governance failures are identified as the primary risks requiring active mitigation in building survey contexts.
  • Transparency with clients about AI use is not optional — it is a professional conduct requirement under the new standard.
  • Firms must develop responsible AI use policies informed by risk registers, with clear accountability chains for every AI-assisted decision.

What the RICS March 2026 AI Standard Actually Requires

The RICS Responsible Use of AI Professional Standard is mandatory for all RICS-regulated firms and individual members [2]. It applies wherever AI tools influence professional outputs — including building surveys, condition reports, defect assessments, and valuations. The standard does not ban AI. Instead, it creates a structured framework for ethical deployment, ongoing oversight, and client transparency.

The Core Obligations at a Glance

| Requirement | What It Means in Practice |
| --- | --- |
| Governance Assessment | Assess and document AI system risks before deployment |
| Material Impact Determination | Decide and record whether AI output materially affects the final report |
| Responsible AI Use Policy | Develop a firm-wide policy informed by a live risk register |
| Professional Judgment Oversight | A qualified surveyor must review and validate all AI-assisted outputs |
| Client Transparency | Disclose AI use to clients in a clear, accessible way |
| Procurement Due Diligence | Assess third-party AI tools before purchase or integration |

The standard draws on internationally recognised AI ethics principles, including fairness, accountability, transparency, and human oversight [1]. For building surveyors, this translates into concrete, day-to-day practice changes rather than abstract policy commitments.


Understanding AI Risks in the Building Survey Context

Before a firm can govern AI responsibly, it must understand what can go wrong. The RICS standard identifies several failure modes that are particularly relevant to surveying work [2].

🚨 AI Hallucinations and Erroneous Outputs

AI language models and generative tools can produce confident-sounding but factually incorrect outputs. In a building survey context, this could mean:

  • A report that describes a defect that does not exist
  • Incorrect material specifications cited for a roof or wall system
  • Fabricated regulatory references or building code citations

"An AI tool that generates a plausible-sounding but incorrect defect description is not just a technical failure — it is a professional liability risk for the surveyor who signs the report."

Surveyors using AI-assisted report drafting must treat every AI-generated statement as a draft requiring expert verification, not a finished professional judgment.

⚖️ Bias in AI Systems

AI tools trained on historical data can embed and reproduce the biases present in that data. For building surveys, this risk includes:

  • Geographic bias: Tools trained predominantly on certain property types or regions may underperform on unfamiliar building stock, such as Victorian terraces or rural farm conversions.
  • Condition bias: If training data over-represents newer properties, the tool may systematically underestimate defect severity in older stock.
  • Reporting bias: AI summarisation tools may weight certain defect categories more heavily based on their frequency in training data, not their actual severity.

Firms must assess these risks as part of their pre-deployment governance review [2].

🗄️ Data Usage and Data Governance Risks

Building surveys involve sensitive client data, property data, and commercially confidential information. The RICS standard requires firms to consider:

  • Where client data goes when entered into a third-party AI tool
  • Whether AI providers use inputted data to retrain their models
  • How long data is retained and who has access
  • Whether data processing agreements comply with UK GDPR

This is especially relevant when using cloud-based AI platforms for defect report generation or drone roof-survey image analysis, where property imagery and client details may be processed on third-party servers.


Governance Protocols: What Surveyors Must Do Before Deploying AI

The RICS March 2026 standard is explicit: governance comes before deployment [2]. Firms cannot simply adopt an AI tool and assess it retrospectively. The required steps form a logical sequence.

Step 1: Conduct a System Governance Assessment

Before any AI tool is used in live surveying work, firms must:

  1. Identify the AI system type — Is it a large language model, a computer vision tool, a predictive analytics engine, or a hybrid?
  2. Understand how it works — What data was it trained on? What are its known limitations?
  3. Map the failure modes — What are the realistic ways this tool could produce wrong or harmful outputs?
  4. Assess the impact — If the tool fails, what is the consequence for the client and the firm?

This assessment must be documented and retained as part of the firm's governance records [2]. It is not a one-time exercise — it should be reviewed when the tool is updated or when its use case changes.

Step 2: Determine and Document Material Impact

A key concept in the RICS standard is material impact — whether an AI tool's output meaningfully influences the final professional judgment delivered to a client [2]. Surveyors must:

  • Make a clear determination about whether each AI tool has material impact
  • Document the reasoning behind that determination
  • Review the determination if the tool's role in the workflow changes

For example, an AI tool used only to format report templates may have low material impact. An AI tool used to identify structural defects from photographs has high material impact and requires correspondingly rigorous oversight.
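That distinction could be recorded alongside a required oversight level. A sketch of the idea follows; the categories and thresholds are our own assumptions, not RICS definitions:

```python
def oversight_level(influences_findings: bool, client_facing: bool) -> str:
    """Map a material-impact determination to a review regime.

    The tiers below are illustrative assumptions, not RICS-defined categories.
    """
    if influences_findings:
        # e.g. AI identifying structural defects from photographs
        return "high: every output verified by a qualified surveyor"
    if client_facing:
        # e.g. AI-drafted summary text that a surveyor then edits
        return "medium: surveyor review and sign-off before release"
    # e.g. AI used only to format report templates
    return "low: periodic spot checks"
```

The point of encoding the determination is that the oversight regime follows mechanically from it, so the reasoning is documented rather than implicit.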

Step 3: Develop a Responsible AI Use Policy

Every RICS-regulated firm must have a written responsible AI use policy [2]. This policy should:

  • Be informed by the firm's AI risk register
  • Assign clear accountability for AI governance (typically a named senior professional)
  • Set out the approval process for adopting new AI tools
  • Define the minimum oversight requirements for AI-assisted outputs
  • Include a process for reporting and learning from AI errors

Smaller firms and sole practitioners are not exempt. The standard scales to firm size, but the obligation to have a policy applies universally [3].


Professional Judgment and Oversight: The Non-Negotiable Core

No matter how sophisticated an AI tool becomes, the RICS standard is unambiguous: professional judgment cannot be delegated to an algorithm [1]. A chartered surveyor's signature on a report represents their personal professional opinion — not the output of a machine.

This has direct implications for how AI is used in report drafting, defect detection, and valuation support.

The practical implication is that surveyors must build review checkpoints into every AI-assisted workflow. AI output goes in; expert verification comes out before anything reaches the client.

Oversight Minimum Standards

  • ✅ Every AI-generated finding must be reviewed by a qualified surveyor
  • ✅ Surveyors must be able to explain and justify any AI-assisted conclusion
  • ✅ AI tools must not be used as a substitute for physical inspection
  • ✅ Errors identified in AI outputs must be logged and reported internally

Transparency and Client Communication: Building Trust Through Disclosure

The RICS standard treats client transparency as a professional conduct requirement, not a courtesy [2]. Clients have a right to know when AI has played a role in producing the report they are relying on to make significant financial decisions.

What Disclosure Looks Like in Practice

Effective client disclosure does not require technical jargon or lengthy disclaimers. It requires:

  • Clear language explaining that AI tools were used in the preparation of the report
  • A brief description of what the AI tool did (e.g., "AI-assisted image analysis was used to identify potential areas of concern, which were then verified by the surveyor")
  • Confirmation that all findings reflect the professional judgment of the named chartered surveyor
  • Contact information for clients who wish to ask questions about the methodology

This disclosure should appear in the report itself, not buried in terms and conditions [4].
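One way to keep the disclosure consistent across reports is to template it, so the same plain-English paragraph appears in every report body. The wording and field names below are our own illustration, not standard RICS text:

```python
# Illustrative disclosure template; wording is an assumption, not RICS-prescribed.
DISCLOSURE_TEMPLATE = (
    "AI tools were used in the preparation of this report: {tool_description}. "
    "All findings reflect the professional judgment of {surveyor_name}. "
    "Questions about the methodology are welcome at {contact_email}."
)

disclosure = DISCLOSURE_TEMPLATE.format(
    tool_description=(
        "AI-assisted image analysis was used to identify potential areas of "
        "concern, which were then verified by the surveyor"
    ),
    surveyor_name="J. Smith MRICS",          # hypothetical surveyor
    contact_email="surveys@example.co.uk",   # hypothetical contact
)
print(disclosure)
```

A single source of truth for the wording also makes it easy to update every template at once if the firm's disclosure language changes.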

"Transparency about AI use is not a weakness — it is a demonstration of professional integrity that strengthens client trust."

For firms offering dilapidation surveys or snagging reports, where clients are often in dispute or under commercial pressure, clear AI disclosure also reduces the risk of findings being challenged on procedural grounds.


AI Procurement and Due Diligence: Choosing Tools Responsibly

The RICS standard extends beyond how AI is used — it also governs how AI tools are selected and procured [2]. Firms cannot assume that a commercially available AI tool is safe to use in professional practice simply because it is widely marketed.

Due Diligence Checklist for AI Tool Procurement 🔍

Before adopting any AI tool for building survey work, firms should verify:

  • What data was the model trained on? Is it relevant to UK building stock and RICS reporting standards?
  • What are the documented limitations? Does the provider disclose known failure modes?
  • How is client data handled? Is there a Data Processing Agreement (DPA) in place?
  • Is the tool's output explainable? Can the surveyor understand and justify what the AI produced?
  • What is the update and versioning policy? Will updates change the tool's behaviour without notice?
  • Does the provider have an AI ethics policy? Is it aligned with RICS principles?
  • What liability does the provider accept? What happens when the tool produces an error?

This due diligence should be documented and retained alongside the firm's governance assessment records [2]. It is also good practice to revisit procurement assessments annually or when a tool undergoes significant updates [4].
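One lightweight way to make that checklist auditable is to record each answer and flag gaps before sign-off. This is a sketch only; the question keys paraphrase the list above and are our own naming:

```python
# Illustrative procurement record; keys paraphrase the checklist above.
DUE_DILIGENCE_QUESTIONS = [
    "training_data_relevance",    # relevant to UK building stock?
    "documented_limitations",     # known failure modes disclosed?
    "data_processing_agreement",  # DPA in place for client data?
    "output_explainability",      # can the surveyor justify the output?
    "update_versioning_policy",   # can behaviour change without notice?
    "provider_ethics_policy",     # aligned with RICS principles?
    "provider_liability",         # who carries the cost of errors?
]

def unanswered(record: dict[str, str]) -> list[str]:
    """Return checklist items still missing an answer, blocking procurement sign-off."""
    return [q for q in DUE_DILIGENCE_QUESTIONS if not record.get(q, "").strip()]

record = {
    "training_data_relevance": "UK residential stock, 2010 onwards",
    "data_processing_agreement": "Signed DPA with UK GDPR terms",
}
print(unanswered(record))  # five items still outstanding
```

Because the record is dated evidence rather than a one-off conversation, it can be retained alongside the governance assessment and re-run at the annual review.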


Practical Implementation: A Roadmap for Building Surveying Firms in 2026

For firms that are still building their AI governance frameworks, the following roadmap provides a structured path to compliance with the RICS March 2026 standard.

Phase 1: Audit (Weeks 1–2)

  • List all AI tools currently in use across the firm
  • Identify which tools influence client-facing outputs
  • Assign a named AI governance lead

Phase 2: Assess (Weeks 3–4)

  • Complete a governance assessment for each tool
  • Determine and document material impact for each
  • Identify data governance gaps

Phase 3: Policy (Weeks 5–6)

  • Draft the firm's responsible AI use policy
  • Build the AI risk register
  • Define oversight and review protocols

Phase 4: Train (Weeks 7–8)

  • Brief all surveyors on the new policy
  • Train staff on identifying AI errors and hallucinations
  • Update client-facing report templates to include AI disclosure language

Phase 5: Review (Ongoing)

  • Schedule quarterly risk register reviews
  • Log and learn from any AI-related errors
  • Reassess procurement decisions annually

Firms unsure which survey types to offer clients should also consider how AI integration affects the scope and methodology of each, and update their client-facing materials accordingly.


Conclusion: Ethical AI Is Now a Professional Standard, Not a Choice

The arrival of the RICS March 2026 AI standard marks a turning point for the surveying profession. It is not a suggestion — it is a mandatory professional obligation that carries real consequences for firms and individuals who ignore it.

The good news is that the standard is built around principles that good surveyors already practise: independent judgment, client transparency, rigorous documentation, and continuous improvement. AI tools, used responsibly, can enhance the speed and consistency of building surveys without compromising the professional integrity that clients depend on.

Actionable Next Steps for Surveyors and Firms in 2026:

  1. Audit your current AI tool usage — know exactly what you are using and why.
  2. Complete governance assessments for every tool that influences client outputs.
  3. Write your responsible AI use policy — even a one-page document is better than none.
  4. Update your report templates to include clear, plain-English AI disclosure language.
  5. Train your team to identify AI errors and understand their professional accountability.
  6. Review your procurement decisions — not every AI tool on the market meets the RICS standard's implied requirements.

The profession's credibility with clients rests on one enduring principle: the surveyor's name on a report means something. AI can support that commitment — but it can never replace it.


References

[1] AI Responsible Use Standard – https://ww3.rics.org/uk/en/journals/construction-journal/ai-responsible-use-standard.html

[2] Responsible Use of AI – https://www.rics.org/profession-standards/rics-standards-and-guidance/conduct-competence/responsible-use-of-ai

[3] RICS First Ever Standard on Responsible AI Use Now in Effect – https://www.rics.org/news-insights/rics-first-ever-standard-on-responsible-ai-use-now-in-effect

[4] RICS AI Standards in Building Surveys 2026: Practical Protocols for Level 3 Assessments and Risk Detection – https://nottinghillsurveyors.com/blog/rics-ai-standards-in-building-surveys-2026-practical-protocols-for-level-3-assessments-and-risk-detection

