AI: Sustainability champion, or insupportable?
For our latest seminar in London, we brought together two seemingly opposing themes: AI and sustainability. The notion of one supporting the other can seem, on the surface, unfathomable. Technology’s questionable credentials on energy and water usage are no secret. But on closer inspection there’s more to the conversation than we might initially have believed.
To truly understand the gargantuan nature of the topic that emerges when these two themes collide, our panellist Dr Phil Tetlow, a technologist and socio-technical thinker whose work spans IT architecture and data ecosystems, suggested that we must first ask ourselves: what does sustainability mean in the age of AI?
In Phil’s view, we must broaden our perception of sustainability to include the information ecosystem: Large Language Models have absorbed huge amounts of online content and now – facing a scarcity of fresh training material – production of synthetic data has surged. This, in short, is a problem, he said.
AI is a tool, Phil argues, though it’s a “second-order” tool within a larger system where humans themselves are the primary “tool” driving unsustainability.
Fellow panellist Will Arnold – Head of Sustainable Materials at Useful Simple Trust and Technical Author of the UK Net Zero Carbon Buildings Standard – concurred. AI is fundamentally neutral and should be treated as a tool – useful, but not decisive. He emphasised that the climate crisis is driven primarily by human choices and structural incentives (e.g., demolishing and rebuilding rather than reusing), and that AI does not change the impact of those core decisions.
The age-old saying “a worker is only as good as their tools” sprang to mind. So if AI is simply a tool, without influence of its own, what can we – the humans using it – do to change and sustain our future?

AI: Positive or negative?
To begin, the session’s host, David Smalley, Director at Material Source Studio, asked, “AI: Net positive or net negative for the climate challenges that we face?”
Will responded, “I’m going to start by totally disagreeing with the question in the first place. I think of it as totally neutral. AI is a tool. To my mind, it’s in the same family as the calculator, the typewriter, the coffee machine. And we can use that for good, or we can use it for other things. By itself, it won’t solve anything.”
Addressing the other half of the session’s theme, sustainability, Will added, soberly, “We’re living in the middle of an existential crisis. There was news out this week that, at two degrees of warming, the Gulf Stream is 50/50 likely to slow down to the point where this country would get the same sort of weather patterns that Alaska has at the moment.
“This gives an idea of the magnitude that we’re dealing with when it comes to sustainability. This is really scary stuff. And to my mind, AI by itself is not going to solve it."
“AI is a tool we can use along the way to do lots of clever things, but it won’t undo all negativity by itself.” - Will Arnold
Giving an example specific to the built environment, Will commented, “If I choose to knock down a building and build another in its place that does the exact same function, I’m doing all sorts of ecological damage by doing that. But it’s my choice. Whether or not I chose to use ChatGPT to write a bit of text to convince the planning authorities that it was the right thing to do… that won’t change the fundamental decision that I made to knock something down and start again. That’s what I mean when I say AI is a tool.”
An emphasis on the evident need to tackle sustainability was echoed by Phil, who said, “Sustainability is really interesting, really important, because it is one of the topics of our age. If we get it wrong, we really get it wrong, and there’s a full stop at the end of that sentence.
“To counterbalance that, AI’s a big deal as well”, he added. “AI is a tipping point at planet level.”
Returning to Will’s point about AI being a tool, Phil stated to the audience, “If you think of AI as being anything other than a tool, you’re making a mistake.”
The real power of that tool, he shared, “is understanding where, when, and why you use it.
“But then there’s a hell of a lot of naivety out there, even amongst the highest-level professionals. We really, really need to address that,” he stressed.
Also mirroring Will’s sentiment that AI, as a tool, is at the mercy of humans as decision makers, Phil suggested it is “a second-order tool”.
“What do I mean by that?” he said, “Well, sustainability is the thing that we’re really talking about, I think, and we need to address. The tool that’s actually causing the problem of sustainability on the planet today is me, you, and every other human on the planet. So we are the tool at planet level that is actually causing that crisis.
“There is a tool beyond that tool: the second-order tool, which is AI; essentially a second-order existential opportunity or failure within the grander system that is the ecosystem of this planet.”
(“If that doesn’t take your mind to a point where you want to explode, then I’ve done it wrong,” he added, to laughter from the audience.)
There’s a question within a question to be answered as part of this discussion, Phil believes, and that’s “what does sustainability mean in the age of AI?”
While the average person on the street may not have a firm grasp of what ‘sustainability’ means, it is, of course, ingrained in the psyche of built environment professionals, woven into the fabric of our conversations for at least the last 10 years. For our community, Phil said, “If you talk to a professional, you’ll get the angle that Will’s played, which is: it’s about concrete, it’s about steel, it’s about buildings, it’s about sucking oxygen out of the air, carbon depletion, all of that type of thing.”
This isn’t, in fact, the whole picture when it comes to sustainability anymore, Phil argues.
“The angle that I play on top of that is this: sustainability is exactly that, but it’s not only about natural resources. It plays out in the synthetic world as well.
“We’ve now reached the point where the LLMs (the ChatGPTs of this world) have drunk the entire Internet. And the problem is that they are still hungry for training material. So there are companies on this planet today who are making billions upon billions of dollars per annum synthesising data to feed into these AIs, simply because they don’t have enough natural resources – in other words, information that has come from the web – to keep them happy.
“Now, what’s interesting about that is that’s essentially pushing the arc of what we would consider to be human to a point where I would argue - and argue forcefully - that there is a valid definition of sustainability in there as well. Can we, should we, must we sustain that artificial growth of the intelligence that we’re increasingly relying on as a tool to help the human world cope with the crisis that it’s in? I would strongly suggest that we’re in a very, very dangerous place.”

Not a technological problem
Will shared that, as a tool, AI does have the answers to help solve the climate crisis – but that alone is not enough. “If I say to Claude, or ChatGPT, or Co-Pilot, ‘What does humanity need to do in order to limit global warming to two degrees, in order to stop these tipping points such as the Gulf Stream stopping completely?’ it will give me a pretty accurate list of all the interventions we need to make. Because it scrapes all that data from the experts, who know very well how to get us off fossil fuels, how to decarbonise cement-making, how to make sure we never have to make steel using coal ever again. All of that stuff exists. All the technological solutions exist.”
The problem, he said, is not a technological one.
“It’s a societal problem, a political problem, a capitalist problem.” - Will Arnold
What LLMs lack, Will commented, is context. They’ll find the most logical web of solutions to a problem such as the Gulf Stream, but in reality the weight of societal, economic, political and historical factors at play makes their suggestions, however viable in theory, unworkable in practice.
“All of those bigger, knottier, more complex human problems, those are the bits that humans have to solve. And this comes back to the idea that AI is just a tool.”
If the context were somehow removed, David asked, “Could AI solve our climate challenges?”
Phil compared the potential of AI to that of Twitter – “that was just a tool, but if it hadn’t been for Twitter, we wouldn’t have had the Arab Spring. That singular tool, when it was put in the right hands at the right time for the right reason, managed to almost overthrow multiple regimes across the planet."
“Do I think that we might get a similar episode occurring with sustainability? I think the right answer has to be: please God, yes.” - Phil Tetlow
Here, Will called into question the notion that AI is truly, as first thought, a neutral tool. “It’s a tool, agreed, but when asked a question it does seem to tend towards the affirmative, in that if I want to get it to back up my thinking behind something, it seems to be much better at doing that than at giving critical pushback?” he asked Phil.
Will’s assumption is real, Phil confirmed. “If you go into the settings of ChatGPT, you will actually find that there are switches in there that direct the answers towards the affirmative to deliberately please the user. You can switch all of those off and get an independent opinion. That’s one of the first things you should do when you engage with any AI: essentially neutralise it."
This, Phil added, is “a marketing ploy for the companies” – the marketeers at LLM firms want us to believe that AI is genuinely fond of us.
A genuine supporter?
Moving onto solutions, David asked, “Where is AI genuinely helping on the subject of sustainability?”
For Will, LLMs help with communications – “they’re good at making sense of things, and at executive summaries and reports. Nine times out of 10 they’ll be right.”
Phil suggests there is a great deal of untapped potential for built environment professionals, largely using AI for tasks such as the above. “AIs can do things that humans naturally cannot. I’ll give you some examples...”
These range from handling huge volumes of data – “AIs can handle millions upon millions – billions – of head-loads of data in one go, and we need that for sustainability” – to assessing micro-currents or micro-climates to build into a meta-model that something or somebody needs to absorb: “then the world’s AIs will be able to do that,” shared Phil.
“The other thing that AIs are good at is seeing the non-obvious. You can apply very forceful guardrails over AI to basically say, ‘This is what we see as the world’s best experts.’ Here’s the source data, here are the source opinions, here’s what we get out of the AIs. Then there are specific fields of mathematics - one of them is called homology - and homology is the field where, if you put a wire frame around a donut, mathematical homology will tell you where the hole is in the middle. You can apply homology over AI and it will mathematically tell you what you’ve missed.
“That’s important in areas like sustainability because the chances are that the real answer is going to be somewhere that humans have not seen yet.”
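At a toy scale, Phil’s donut point can be made concrete. For a plain graph, the simplest homological invariant – the first Betti number, b1 = E - V + C – counts independent cycles, the one-dimensional ‘holes’. A minimal, stdlib-only sketch (the graphs below are illustrative, not from the talk):

```python
# First Betti number of a graph: b1 = E - V + C, where C is the
# number of connected components. b1 counts independent cycles -
# the one-dimensional "holes" that homology detects (the hole in
# Phil's donut, one dimension down).

def betti_1(vertices, edges):
    # Count connected components with a simple union-find.
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)

    components = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + components

# A square (4-cycle) encloses one hole...
square = betti_1([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
# ...while a tree encloses none.
tree = betti_1([1, 2, 3, 4], [(1, 2), (1, 3), (1, 4)])
```

Real topological-data-analysis toolkits generalise this to higher dimensions and point clouds, but the principle is the same: the algebra flags structure the eye misses.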
"We should ask AI 'What don’t I know?' then?", continued David.
Phil responded, “The ways to use AI are: the monotonous and the mundane – ask it to do the tasks that are time-consuming or repetitive; ask it the obvious; and – where the real magic is – bend it out of shape.
“For example, I never, ever go to a Large Language Model and prompt it directly. What I’ll do is I’ll go: ‘Here’s an idea. I’d like you to write me the most efficient prompt to give me the best answer you possibly can, based on the trace of the idea.’”
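Phil’s indirection can be sketched in a few lines. Everything here is hypothetical: `call_llm` stands in for whichever model API you use, and the meta-prompt wording is illustrative rather than Phil’s exact phrasing:

```python
# A sketch of Phil's two-step indirection: instead of prompting
# the model with the idea directly, first ask it to write the best
# possible prompt for that idea, then run the prompt it wrote.

def meta_prompt(idea: str) -> str:
    # Step 1: ask the model for a prompt, not an answer.
    return (
        "Here's an idea. Write me the most efficient prompt "
        "to give me the best answer you possibly can, based "
        f"on the trace of this idea: {idea}"
    )

def refine_and_ask(idea: str, call_llm) -> str:
    better_prompt = call_llm(meta_prompt(idea))  # step 1: get a prompt
    return call_llm(better_prompt)               # step 2: run the prompt

# call_llm would wrap a real model; a stub shows the flow.
echo = lambda prompt: f"[model reply to: {prompt[:40]}...]"
result = refine_and_ask("low-carbon retrofit options", echo)
```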

Ethics & education
A question from audience member Diego Correa, Creative Director, Diego Correa Interior Design, raised the dual subjects of ethics and education. “Firstly, I’m surprised no one’s mentioned the ethics of AI use yet. And secondly, there are millions of humans on this planet, but only a very small percentage of those people know how to get the most out of AI. How can we ever expect the masses to catch up? It sounds like an impossibility.”
Tackling both elements, Phil responded, “I can give you a very precise answer to your first point. There are a lot of good people and a lot of good organisations around the world that care a lot about ethics in AI.
“Most of the standards bodies are included in this – some I’ve been fortunate enough to work with. The world’s brightest minds are working on it.
“The problem that we have got is that it typically takes standards bodies such as the BSI 18 months just to get their head around a problem, then another 18 months to eventually set standards in place. Now, AIs are evolving at a rate of 18 minutes rather than 18 months. That disparity is wrong.”
“The good news is, there are a lot of good people looking at it. Unfortunately, the human mechanics are just way too slow to keep up.” - Phil Tetlow
Addressing a second point from Diego – “Where does ethics fit in the equation with regard to sustainability?” – Phil replied, “That’s an almost impossible question to answer.”
Answering the question with a question, Phil continued, “Just because something is ethical for you and I, is it ethical for the planet?
“You can translate that into: is it alright for 15 million people to die if six and a half billion people survive? That is a question above governments.”
Taking a wider view of sustainability as part of ESG strategy, Blair Boyle, Associate Workplace Designer, Savills, asked from the audience, “In terms of governance, you mentioned AI being a tool. Tools can be used for great things, for terrible things, and also for very frivolous things. This week there have been reports about tech companies who incentivise or encourage staff to use however many tokens per month because it’s going to be part of their rewards package. But there have also been reports of companies who regret firing workers and replacing aspects of that role with AI because it’s now more cost-effective to actually have junior staff do those things.
“Do you think there’s going to be any changes in the way that AI is governed in terms of how efficiently it should be used, so it’s not frivolously thrown around?”
Will shared that at the Trust, an AI policy was brought in “about a year or so ago to try and start to get people to at least think about when they do and don’t use it, and what they do with it. In very simple terms: if there’s something you could do without it, you should do it without it. And that has been quite a good guiding principle, we’ve found.
“What’s interesting”, Will continued, “to your point about what happens to staff, is that even in a world without AI, as a more senior engineer in my firm, it would usually be quicker for me to do something for my client than it would be for me to train up a graduate so they can do it. But if we all worked on that principle, we’d hollow out our companies really, really quickly. Our entire profession would die, because those of us who know how to do it well right now would do it until the age when we retire, and then nobody would design any buildings.”
AI, Will believes, is doing a similar thing – taking away tasks that would previously have fallen to a budding engineer. “To me, AI is kind of the same thing. If we get too used to using it – whether for the mundane stuff or even the creative stuff – what we miss out on is training ourselves to do all of that. And we become totally reliant on it. This goes beyond even technical skills and creative skills. It goes to our culture – what it means to be designers and professionals and humans.”
“We want to have a culture where it’s ok to be wrong, so that people feel okay to say the silly thing. And you lose all of that if you think you’ve got this God on your phone that will tell you the right answer all the time.” - Will Arnold
Another question came from the audience, from Timna Rose, Founder & Creative Director, Studio ATARA: “Approaching this from a more simplistic – maybe more broadly recognised – view of sustainability, I have real ‘user guilt’ about AI’s potential to negatively impact the environment. How do you manage your user guilt?”
“I don’t have any guilt using it, actually, when it comes to environmental impact, because the research I’ve done tells me that every time I do a search or give it something to do, I’m responsible for emissions in the realm of a gram to five grams of carbon dioxide. It’s that order of magnitude,” responded Will.
“I’m working on projects where the carbon emissions of building that project are 100,000 tonnes. So you’re talking a billion times as much carbon. So actually, if in the course of that project I use AI a million times to get it to be better, those emissions are still completely outweighed,” he added.
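Will’s orders of magnitude can be checked with back-of-envelope arithmetic, using only the figures quoted in the session (published per-query estimates vary widely):

```python
# Back-of-envelope check on Will's figures: roughly 1-5 g of CO2
# per AI query versus ~100,000 tonnes for a large building project.

GRAMS_PER_TONNE = 1_000_000

query_g = 5                             # upper end of 1-5 g per query
project_g = 100_000 * GRAMS_PER_TONNE   # 100,000 tonnes, in grams

# One project emits tens of billions of times one query's worth.
ratio = project_g / query_g

# Even a million queries over that project barely register.
million_queries_t = 1_000_000 * query_g / GRAMS_PER_TONNE  # = 5 tonnes
share = million_queries_t / 100_000     # fraction of project emissions
```

At the 5 g upper bound, a million queries come to about 5 tonnes, or 0.005% of the project’s footprint.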

The case for Small Language Models
Continuing on the topic of LLMs, audience member Steven Gale, Portal Architects, shared a view that perhaps using AI tools of this nature for mundane tasks on a mass scale misses the point. “I wondered if there’s more power in the everyday use of AI for solving complex problems that you can’t write a mathematical expression to solve. In other words: indeterminate structures – think, if you’re a civil engineer – multiple multivariate occupancy data that you can then use to design buildings a bit better without guessing. It’s like Small Language Models. We limit the horizon of the data, so the results are specific to the problem.
“Do these things exist?”, he asked the panel.
Phil agreed with Steven’s suggestion – “AI is a very broad field. And we’re dealing with something that isn’t even 5% of the overall field. But when we’re talking about Large Language Models and neural networks, they have specific ways of trying to answer specific problems.”
He offered some advice: “One of the things that I’ve done, for example, is ask a Large Language Model to help me enter a specific niche area, but then control its responses using specifically smaller models or algorithms, because as an expert you approximately know the target you’re aiming for.”
“Large Language Models are good at scale. They’re good at precision. They’re good at broad and deep thinking. But they’re not good at direction. They’re not good at planning. They’re terrible at things like compassion, empathy, and so on and so forth. If you get that blend right, then it literally is magic.” - Phil Tetlow
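One way to picture Phil’s blend is a broad model proposing answers while small, domain-specific checks (standing in for the ‘smaller models or algorithms’) accept only those inside the expert’s target. The span-checking rule below is an invented placeholder, not anything from the session:

```python
import re

# Sketch of Phil's blend: a broad model proposes answers; small
# domain checks accept only those inside the expert's known target.

def within_guardrails(answer: str, checks) -> bool:
    # Every domain check must pass for the answer to survive.
    return all(check(answer) for check in checks)

def constrained_query(prompt: str, call_llm, checks, tries=3):
    # Re-ask the broad model until an answer clears the checks.
    for _ in range(tries):
        answer = call_llm(prompt)
        if within_guardrails(answer, checks):
            return answer
    return None  # nothing acceptable: escalate to a human

# Hypothetical expert check: any quoted span must be 12 m or less.
def span_is_plausible(answer: str) -> bool:
    spans = [float(m) for m in re.findall(r"(\d+(?:\.\d+)?)\s*m\b", answer)]
    return bool(spans) and max(spans) <= 12.0

# Stubbed model: first suggestion fails the check, second passes.
stub = iter(["span of 30 m", "span of 9 m"])
result = constrained_query("suggest a floor span", lambda p: next(stub),
                           [span_is_plausible])
```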
“For those that want to use AI responsibly, what guiding principles should they follow?” asked David.
“I think one of the guiding principles is to treat it as a tool,” said Will. “It doesn’t replace the need to think about why we’re building, how we use less materials to do so, how we reuse what we’ve already got, how we treat materials that are currently incredibly impactful in so many damaging ways, how we treat those as really scarce so that we try and use them more sensibly, how we do that for good, and how we try and do it in a way that encourages others to do the same thing, to create that industry change.”
Phil believes the answer can be distilled down into a series of soundbites.
“Blair in the audience spoke about governance. Here, I might suggest that the word ‘provenance’ fits in. When you’re talking about provenance, most people don’t understand what provenance is. So there’s an education piece that needs to be done there.
“The real problem that we have got is traction at the professional practice level. Which means every one of you going out on your daily routine and prophesying and educating and sitting people down and going, ‘This is a big deal, and this is the way you have got to behave.’ Because that’s what professionals are supposed to do.
“We’re currently in this ‘opiate phase’ – everybody’s like headless chickens at the moment. It’s going to take time for that to settle out. But the mechanics of the human machine are too slow to allow that settling to take place; AI will run away with it before then. So we’ve got some challenges.”
As the session drew to a close, Will made the poignant comment: “Technology will not save us.”
He referenced the fact that, though technology has moved on, a concrete office building designed today will be three times as heavy as the equivalent built in the 1960s, with three times as much environmental damage – despite the engineers having access to all the same technology.
“We use three times as much material because we’re now in a world where material’s got cheap and labour’s got expensive, and we want to build it quicker, and it has to be more eye-catching so we can put it on Instagram. And therefore, we build an office building where the columns are 12 metres apart from each other, the whole thing waves out over the side of the A5, and it’s covered in plastic. It’s nuts.
“And yet we could have made it more efficient the same way they did with cars, but we haven’t done.
“Coming right back to that earlier point: all of this stuff is there. It’s a tool. The problem is not the tool; the problem is the people using the tool are not pushing us in the right direction.”
Much ground was covered in just an hour. So before we headed into the Studio for more conversation over food and drinks, David asked two audience members for their key takeaways from the session:
James Coop, Make Architects, said: “AI is naturally affirmative. Introduce anomalies to overhaul the data that it’s going through. Learn how to bend AI into a horseshoe.”
Paul Dare, Dare Design Studio, added: “As professionals, we need to hang onto our professionalism and not let these tools take over. We’ve spent a long time learning what we do, AI is a great tool that can help us speed things up.”
A huge thanks to our expert panel for sharing their insight, to you, our audience, for joining us and asking thought-provoking questions. And thank you to our supporters for this event, Agua Fabrics, CUPA PIZARRAS - both Partners at Material Source Studio.
The conversation now continues at a dedicated roundtable on the topic of AI & experience this Thursday at Material Source Studio London. And at our upcoming seminar in Glasgow: AI: Destroyer or creator? Get your free ticket here and join the discussion.
Top takeaways at a glance
- AI is part of the sustainability equation, like it or not.
- Think carefully about what 'sustainability' actually means. It's more far-reaching than you might expect.
- Use AI to find the hidden secrets to help attain sustainability. There's lots of room for improvement.
- AI is an imperfect tool - critique it, check it, be cautious.
- AI cannot undo a poor human starting point.