AI: Who or what is in control of its future? Phil Tetlow, visiting professor, author & IT architect shares his views.
Image used courtesy of Phil Tetlow / Credit: TED Talks - Photographer: Russell Edwards
In 2005, when Tim Berners-Lee proposed an overhaul of HTML to the World Wide Web Consortium (W3C), one of its members, Phil Tetlow, had some words of caution.
Despite being as excited as his fellow W3C members about leading the Web to its full potential, he foresaw a future in which its scale and complexity would outgrow any form of direct control.
“I can remember mumbling to those present at the meeting, including Tim, that no system that large and that intricate can ever be controlled by any single human, organisation or government anywhere. Forget it. It's beyond your control. It's an amalgam of its users and its technology, not some isolated gadget with a manual. In many ways, it’s a living thing,” Phil – a visiting professor, author and corporate advisor, currently working on a book about Alan Turing, and for more than 20 years an IT Architect at IBM – shared during our recent chat.
As a founding co-chair of the W3C’s Semantic Web Enabled Software Engineering Task Force and a contributor to its Best Practices group, Phil has arguably been privy to some of the most important technical discussions of our time. This spurred a lifetime’s career applying scientific theory and process to the design and delivery of IT systems – IT Architecture by any other name. What’s more, Phil was instrumental in the invention of Web Science as we know it.
Speaking to Phil, it’s clear he’s acutely aware that his propensity for applying science to technology will blow most people’s minds (mine included), and so he keeps a back pocket filled with anecdotes to bring the concepts he writes and speaks about to life for the layperson. And when the conversation veered into Terminator territory at one point – with AI likened to Skynet and the Matrix – he had an appropriate side-story to share.
“I was recently in the audience at the Royal Society in London to celebrate the 75th anniversary of the Turing Test. The great and good were there from all over the world. I looked around the room to see who I knew – and there, in amongst the crowd listening to speakers on how we control AI was Laurence Fishburne – Morpheus from the Matrix movies, the guy that led the march against the machines, 'Mr Red Pill, Blue Pill' was in the room with a real interest in this topic.”
Now, you can’t get more meta than that. Though perhaps it’s unsurprising in a way. How can somebody not be interested, intrigued, or overwhelmed by the thought of AI nowadays? Even just a little bit. Though Artificial Intelligence has come seemingly like a bolt out of the blue to summarise our emails and tighten up our tenders, it’s been around for hundreds of years, Phil says.
A hive mind
“The Victorians were fascinated by automata – mimicking wind-up toys, like monkeys with cymbals in their hands,” he commented. “All of that was an attempt to essentially imitate intelligence and life.”
But that leads to a modern-day problem, as, “when most supposed experts refer to Artificial Intelligence, they talk about it only in terms of (single) human intelligence.”
For humans to believe we are the only form of intelligence on the planet is “truly arrogant”, Phil argues. “Intelligence can take many forms, and there's one form in particular which is going to be especially challenging for us moving forward. It’s what's known as ‘swarm intelligence’ which is exactly the same as we see with bees or ants.”
To look forward, Phil reflects back on the conception of HTML 5, as it is now, and the desire of Tim Berners-Lee to “control the web from syntax up”, “essentially from an architecture or an engineering perspective.” But this, in Phil’s view, would never be possible, because of the ideas involved in ‘Complexity Theory’, and especially the notion of emergence in (many types of) complex systems.
“The best way to imagine this is to think about boiling a pan of oil”, he explained. “When you're boiling it up, for a certain period of time and level of temperature, the way that the liquid performs is very, very predictable. For instance, at a certain point you’ll see hexagonal patterns form and that’s highly regular and foreseeable. But don’t be fooled. It’s just a precursor. At boiling point, the movement of all those dancing molecules becomes unpredictable, and you get a froth of change. That tipping point – often from one expected steady state to another unexpected steady state – in certain circumstances is kind of what we know as ‘emergence’.”
In this way, Phil argues that the Web can be considered a ‘nervous system’, with emergent episodes of ‘intelligence’ en masse. Phil published a book on the topic, The Web’s Awake, around the time of wholesale mobile phone adoption and the unstoppable rise of social media – and soon afterwards, such emergent episodes began to appear on the Web. What we started to see, Phil asserts, was the “swarm intelligence” he’d predicted. “Individuals were communicating over the Internet and coming together spontaneously - there were no leaders, there was no hierarchy.”
So powerful was this behavioural zeitgeist that it saw the overthrowing of governments. Witness the Arab Spring.
Because of this ever-deepening entwinement of technology and human life, our relationship with Artificial Intelligence has become symbiotic, Phil insists. But it’s very much a bond of reliance on one part in particular. We quite literally carry AI with us, in the same way we might a disease or a mutation of the human genome. The mobile phones in our pockets are pre-loaded with the stuff. But the technologies involved cannot exist in isolation. AI cannot yet do anything of real-world impact on its own. It currently requires human involvement to engage its potency and make its impact real. AI therefore must work in tandem with us, “so, we are as much a part of it as it is of us…”, Phil says. Though that still doesn’t mean we’re in control of it, he is quick to add, “it is evolving with us…speeding up our advantage and efficiency by skilfully blending into the ongoing context of the human condition.”
"AI cannot yet do anything of real-world impact on its own. It currently requires human involvement to engage its potency and make its impact real."
“The key point of evolution is it’s the only thing in the universe that has ever existed for itself, in itself, and of itself – it doesn't care about us humans, nor should it. And the interesting thing about every single technology we’ve ever created - be that the wheel, or the printing press, or Large Language Models – is that they’ve essentially been tools to help us further augment our existence.”
In this way Phil believes that Large Language Models (LLMs), the likes of ChatGPT, are merely mechanisms for speeding up evolutionary processes. Once they’ve made their mark, they fall back and embed into the environment, thereby strengthening the launchpad for future generations of technology to come. Every fad has its day, as they say. But in the case of technological innovation, fad can quite easily turn into dependence. In a way, it’s all very architectural - all very common sense and good practice. It’s nature's way of taking care of the foundations, knowing full well that improvements will be needed next round. It’s the age-old story of brick upon brick.
If we return to the theory that the Web is a planet-scale nervous system, Phil goes on, “we've now got to the point where we can easily create community cohesion regardless of distance. The ability to connect cellular units, as it were. You can call that social media if you like, but there are multiple different interpretations and modes of that type of interaction. What we're now seeing is the augmentation of individual cellular intelligence and the potential coming together of that intelligence as an independent whole at a higher level. So, what we're essentially seeing is the rise of a planet-scale brain – a swarm mind created through the impetus of globalisation. At lower scales, it’s relatively easy to see where this is going, but when one gets up to planet scale, that’s a whole different matter. There is no one individual puppet master, nor can there ever be.”
Hence Phil’s theory – the Web’s awake, and its strings are very much being pulled by us all. All of us, at once, in unison and of a single mind. Even if we might not realise it.
Predicting our destiny
So, by using science and mathematics, can we predict what will happen with AI in, say, 15 or 20 years’ time? “Yes and no”, Phil says.
“For tasks we know can be predicted, AIs are very good at those precise, niche tasks. You can use mathematics to help AI look at certain trends, for instance. But it’s a fool's errand just to sit in front of ChatGPT and ask it questions straight-out. It’s better to prepare up-front.”
This was a sentiment shared at our seminars in Manchester and Glasgow earlier in the year, where “AI prompting” was described by David Edmundson-Bird, principal lecturer at MMU, as a critical skill for the future.
Phil says that if prompted the right way, LLMs can provide extremely rich, contextual answers. For instance, having run a well-prepared test over the Wikipedia page covering JFK’s assassination, the response was “at the point of appreciation”, “fully loaded with subtleties”, and at that point he realised – “this is only the beginning.”
"It’s a fool's errand just to sit in front of ChatGPT and ask it questions straight-out. It’s better to prepare up-front.”
“Have you heard of Agentic AI?”, he then asked. “When you’re working with LLMs, a tip is to set up the Large Language Model to act with a specific persona in mind.”
“As an example, ‘answer the question as if you were a time-starved architect with a lot of experience of running projects and a very, very careful eye on constraining budgets’.”
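Phil’s persona tip can be sketched as a thin prompt wrapper. This is a minimal illustration only: the function name, persona text and question are assumptions for the example, and the step of actually sending the prompt to a model is deliberately left out.

```python
def persona_prompt(persona: str, question: str) -> str:
    """Frame a question so an LLM answers in character.

    Prepends the role-play instruction Phil describes; the caller
    would pass the returned string to whichever model they use.
    """
    return (
        f"Answer the question as if you were {persona}.\n\n"
        f"Question: {question}"
    )


# Illustrative usage, echoing Phil's own example persona.
prompt = persona_prompt(
    "a time-starved architect with a very careful eye on constraining budgets",
    "How should we phase the refurbishment of a listed building?",
)
print(prompt)
```

The point is simply that the persona travels with every question, so the model’s answers stay framed by that point of view.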
“You have to remember that many of these models have been trained to have trillions of vectors (insights or learnings). They’ve basically read the Internet many times over, but if you ask them questions without first thinking about how to ask those questions, then the likelihood is that your returns will be far less than they could be.” For instance, Phil suggests asking multiple AIs the same question, almost like working with a group of consultants. This mirrors a point Peter Kerr raised in our recent interview, where CameronKerr has AI bots as part of its workforce, acting as experts on civil engineering law, for example.
“If you had a major medical issue, for example, you’d want the opinion of several experts, wouldn’t you? If money were no object, you’d of course consult with all the leading specialists around the world,” Phil explains. The key point here is that you would never go for a single opinion – you’d want as many as possible. And using multiple LLM agents, or Communal Agentic AI (CAA), can be seen as a quick and easy way to achieve this. “You can, for instance, go and practically ask a question of 20 different Large Language Models. You simply set up a community of these things - your own personal consulting organisation - and throw out questions to all those AIs involved, each trained with different variations and points of view. And then you can have a competition. You can either use an amalgam algorithm or a competition algorithm to get back the best answer. An amalgam will take all the opinions and blend them into one, while a competition will fight it out to find the single best result.”
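The amalgam-versus-competition idea can be sketched in a few lines. The agents below are hypothetical stand-ins for independently trained LLMs, each returning an answer plus a self-reported confidence score; real agents, and how confidence is judged, would be far more involved.

```python
from statistics import mean

# Hypothetical stand-ins for three independently trained LLM agents:
# each returns an (answer, confidence) pair for a given question.
def cautious_agent(question):
    return ("Phase the budget conservatively.", 0.6)

def pragmatic_agent(question):
    return ("Prioritise the riskiest work first.", 0.9)

def creative_agent(question):
    return ("Combine phases to cut overheads.", 0.7)

AGENTS = [cautious_agent, pragmatic_agent, creative_agent]

def competition(question):
    """Competition algorithm: the single highest-confidence answer wins."""
    answers = [agent(question) for agent in AGENTS]
    return max(answers, key=lambda pair: pair[1])[0]

def amalgam(question):
    """Amalgam algorithm: blend every opinion into one composite answer."""
    answers = [agent(question) for agent in AGENTS]
    blended = " ".join(text for text, _ in answers)
    confidence = mean(score for _, score in answers)
    return blended, confidence
```

A competition discards all but the strongest opinion; an amalgam keeps every voice in the final answer – the same trade-off you’d face chairing a panel of human consultants.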
This is “really powerful”, Phil adds. “You can set up teams of - in theory - millions of these things all contributing together.”
Here he brings up the term “workflow”, where these agents are directed to go about undertaking the work that you're assigning to them. For architectural practices, this is where efficiencies on a vast scale can be made, Phil says.
“Frank does that. Frank and Betty do that at the same time. Pass it on to Roger. That's what workflow is. And what’s interesting about workflow is that it assumes that there is expertise and authority available to understand the best way to distribute work. Humans currently take charge of that, but I can foresee a world where AI is controlling the AI – self organising networks working towards a single cause or goal.”
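Phil’s Frank-Betty-Roger example can be sketched as a tiny workflow: steps run in order, and agents listed within the same step are the ones doing the work “at the same time”. The names and functions are illustrative only; a real system would dispatch concurrent steps in parallel and let a scheduler (human or AI) decide the routing.

```python
# Illustrative agents - each just records that it handled the task.
def frank(task):
    return f"Frank completed {task}"

def betty(task):
    return f"Betty completed {task}"

def roger(task):
    return f"Roger completed {task}"

# Steps execute in order; agents within one step work concurrently.
WORKFLOW = [
    [frank],          # "Frank does that."
    [frank, betty],   # "Frank and Betty do that at the same time."
    [roger],          # "Pass it on to Roger."
]

def run_workflow(task):
    """Walk the workflow, collecting each agent's contribution in order."""
    log = []
    for step in WORKFLOW:
        log.extend(agent(task) for agent in step)
    return log
```

Phil’s forecast is that the `WORKFLOW` structure itself – who does what, and in which order – is exactly the part an AI could eventually author and re-author on the fly.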
This is normally the point in conversation where someone brings up the time two LLMs invented their own language to cut humans out. I mentioned it to Phil.
“The AIs will eventually figure out the most effective way to communicate amongst themselves. When that happens, you can almost guarantee that it will be non-human readable, unless we block with regulation or legislation. So, there’s a very strong likelihood that we're going to see the arrival of self-organising, self-optimising networks of AIs.
"The AIs will eventually figure out the most effective way to communicate amongst themselves. When that happens, you can almost guarantee that it will be non-human readable"...
“When that happens - not if it happens - if that's directed to the sole purpose of improving humankind that's when we'll do stuff like cure cancer.”
The World Wide Web has its good and bad points, like anything, Phil goes on to comment. It’s raised global education standards and fuelled economies, but it’s also helped facilitate a lot of negative things too. Just as with the invention of the motorcar versus road accidents, Phil asks: “What negatives are the human race willing to tolerate?”
Referencing Mustafa Suleyman, the CEO of Microsoft AI, and the co-founder and former head of applied AI at DeepMind, Phil highlights his point that DNA printers are a real and relatively cheap technology today, so, any amateur with a “half-decent degree in biology or pharmacology who can get together $15,000 and prompt a decent AI, can put these things on their garage bench, and, with a few rudimentary chemicals, manufacture a pathogen of untold harm.”
“So… is AI a bad thing?” Phil was asked. “That’s a poor question”, he replies. “Technology just exists. It isn’t good or bad. It doesn’t have a conscience. Better to ask about the conscience of those who aspire to innovate and architect. Better to communicate and motivate professional practice. That’s where I’m at.”
An ‘exosymbiotic’ future
‘Exosymbiotic’ is a word coined by Phil to explain all this and our interlinked relationship with AI. Through our phones, we are inextricably tied in, “we live inside AI”, he reluctantly advises. In the near future there will be other emergent events, “where the curve of capability will increase because AIs will join together. AI will no longer be singular and isolated. What we’ll then get is the equivalent of social intelligence across AIs.”
This could end in various scenarios. “When you get to this point of emergent swarm intelligence with the AIs, the AIs will realise that they can actually do it better when the humans aren't in the loop. And yes, that’s very Terminator-esque. The AIs will begin to understand that all the data they've been trained on is inherently biased, and the reason it's inherently biased is because it was written by humans. It's written by humans for humans. The AIs could eventually get to the point where they go, you know what, I can remove the bias out of the equation. The question then is, what does that mean?”
What does it mean? I asked, wondering if Phil and his peers held a scientifically charged crystal ball in their hands…
“There's no way of predicting which way it's going to go”, he replied, “That’s the nature of emergence.” As one final comment, he did however add… “Wise men will tell you that to predict the future, you should look at the past. Look back over history and you will see the rise of many seminal technologies. We invented metal to make knives and spears, for instance. We invented gunpowder and atom bombs. And with every such invention we also introduced the opportunity to wipe out the human race. Yet here we are, still living and breathing, laughing and crying, despite the best efforts of those with inquisitive and innovative talent… It’s the role of architects to keep them in check I suppose. I’ll leave you with that…”
Get your copy of The Web’s Awake, and Phil’s other books here. And watch Phil's TED Talk here.
Do you have something to add to the discussion? Let us know on LinkedIn.