Year in Review 2025: AI - Magic or menace?
One of the core topics you were keen on us covering this year was AI: assessing its potential benefits for day-to-day practice, and looking ahead to where it's going next.
Across our Studios, many of you got involved in discussing this topic both online and offline – hundreds of you attended our seminars, took part in our roundtables, engaged with our bi-weekly editions of The Edit, and contributed to over a million social media impressions on our posts covering the divisive topic of AI in depth.
As we rapidly approach the end of the year, and ahead of exploring this topic further over the coming 12 months, we've pulled together the key takeaways from all of the above, highlighting differences and similarities across our Studio regions – Manchester, Glasgow and London – plus some practical tips from the experts we've spoken to.
Across all our discussions, AI was found to be neither magic nor menace – rather a fast-evolving set of tools that amplify capabilities and reshape workflows, while raising real legal, ethical and environmental questions in the process.
Top-line overview — what kept coming up?
- AI is a tool, not a mind: It was repeatedly emphasised that generative AI behaves like predictive text / a “probability remix machine” - novel-looking, but not thinking. Use it for tasks it’s suited to, and not to replace judgement.
- Adoption vs capability gap: The built environment sector hasn't adopted AI at the speed it has developed – practices risk being left behind, or misusing immature workflows.
- Intellectual property & provenance are urgent: Designers are already encountering IP leakage and uncertainty about what data models were trained on and who owns outputs.
- Ethics, regulation and accountability matter: Bias, fairness and long-term societal impacts were flagged as issues that need governance and professional standards.
- Practical gains - automation + standardisation: AI can efficiently handle standardised, data-heavy tasks (e.g., material passports, specs, analytics), freeing human designers to focus on storytelling and user experience.
- Skills & culture are the bottleneck: Upskilling, avoiding “AI-washing”, and telling better stories with data were highlighted as essential organisational change.
- Big-picture risk - emergence and scale: Thought leadership warns that the web/AI ecosystem acts like a living, emergent system – hard for any single actor to control – so resilience and systems thinking are required.

Top points raised in Manchester
- AI is polarising; the debate split between "friend" and "foe" in design. Consensus: the potential is huge, but adoption is slow.
- AI is predictive remix, not human thought: Designers should treat outputs accordingly.
- IP & ownership: Practitioners raised problems with stolen IP, noting that many models were trained on creators' work without permission – and legal frameworks lag behind.
- Beware of shallow applications: Don’t use AI to poorly automate tasks you’re already good at. Instead, reframe where AI brings real value.
Top points raised in Glasgow
- Glasgow’s experts focused on long-term and ethical implications: Bias, public trust, job changes and procurement impacts.
- Automation for standardised tasks (material passports, sustainability data, energy performance) is seen as a clear near-term benefit. The more standardised the input, the better the automation results.
- Practices are at different stages of adoption currently, ranging from cautious experimentation to active integration.
Top points raised in London
- AI is nowhere near full capacity: "AI is overhyped, but a paradigm shift is coming."
- AI-washing is a real risk: Organisations claiming AI capability without strategic integration or measurable outcomes.
- Industry fragmentation: Many small firms compete and therefore lack incentives for coordinated upskilling; a structural issue for sector-wide transformation.
- Emphasis on the art of storytelling: Designers must convert AI-derived data into compelling narratives to secure investment and client buy-in.
Similarities between regions
- All regions stressed: AI is a tool (not sentient), IP & ethics are major concerns, and the sector must bridge a skills gap.
- Value in automation for standardised, repeatable tasks was a common thread.
- Caution about hype/poor use: Thoughtful application was urged.

Differences between regions
- Manchester leaned more on the philosophical/practical definition of AI and IP anecdotes from practitioners.
- Glasgow focused on ethical implications and day-to-day benefits for sustainability and public projects (material passports, communications).
- London emphasised digital transformation strategy - the need to embed AI into business models, avoid AI-washing, and invest in storytelling & sector upskilling.
What should designers consider for their projects in 2026?
- Treat AI outputs as drafts, not final designs. Don’t confuse attractive visuals with good UX or compliance — validate AI-generated imagery/space plans with real user testing and technical checks.
- Explicit IP and contractual clauses for AI use. Define who owns prompts, derivatives, and model-trained content; log sources and preserve evidence of original authorship.
- Record provenance & data lineage for any AI-generated spec or performance prediction. Material passports, sustainability claims and performance models must carry provenance so they can be audited. Automate where inputs are standardised.
- Avoid AI-washing. Run measurable pilots, not marketing exercises: start small and track concrete outcomes (time saved, errors reduced, faster RIBA stage deliverables).
- Design human-in-the-loop processes. Keep human oversight for creative intent, ethical decisions, user-centred design and final sign-off – the necessity of human judgement was a recurring theme.
- Prioritise upskilling & storytelling skills. Invest in training staff to interrogate outputs, craft narratives from data, and sell AI-driven value to clients/boards.
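To make the provenance point above concrete, here's a minimal sketch of what a provenance record for an AI-assisted deliverable might look like. The field names, example values and tool name are all illustrative assumptions, not a prescribed standard – the point is simply that each output carries its sources, the model used, and a human sign-off so it can be audited later.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal audit trail for one AI-assisted deliverable (illustrative)."""
    deliverable: str      # e.g. a material passport or performance prediction
    model: str            # the tool/model that generated the draft
    prompt_summary: str   # what was asked of the model
    sources: list = field(default_factory=list)  # data the output relies on
    reviewed_by: str = "" # human-in-the-loop sign-off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry
record = ProvenanceRecord(
    deliverable="material passport - external wall build-up",
    model="(hypothetical) in-house generative assistant",
    prompt_summary="Draft passport from supplier EPD data",
    sources=["supplier_epd_2025.pdf"],
    reviewed_by="J. Smith, sign-off 2025-11-12",
)
print(asdict(record))
```

Even a lightweight record like this makes sustainability claims and specs defensible: anyone querying a figure can trace it back to its inputs and the person who approved it.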

In summary…
The overarching sentiment was clear: use AI deliberately and defensibly. It's powerful for productivity and standardised work, but the focus must be on provenance, human oversight, upskilling and measured pilots rather than hype. And above all, think systemically – the long-term behaviour of AI at scale will be emergent and socio-technical, so resilience and professional standards will matter just as much as the tools themselves. In short, humans will very much continue to be necessary.