How to Think About AI
‘How to Think About AI: A Guide for the Perplexed’ by Richard Susskind (March 2025)
What is the right way to think about AI, when the world’s smartest thinkers can’t agree on what it means, how to manage it, or whether it leads to global salvation, annihilation or somewhere in between? This is the paradox with which Professor Susskind frames his new book, and it is a tough one. He started wrestling with ‘good old-fashioned’ AI as a student at Glasgow University in 1981, then as a researcher, legal expert, consultant, and representative of organisations such as the British Computer Society. He also has the unique distinction of close collaboration with his two sons in technology research: Daniel Susskind (author of ‘A World Without Work’) and Jamie Susskind (author of ‘The Digital Republic’). So he brings a lifetime or more of experience to this small book. Can it reduce our perplexity?
Who is perplexed?
Step into any café or bar in a city near you and all sorts of people are chewing over AI. They have many questions: What is it really? What does it mean for my work? How will it impact future generations? Are we doomed? Once the preserve of academics, policy wonks or science geeks, these questions are now mainstream, urgent and polarising. And they grow in intensity with the spread and surging capability of intelligent systems.
But it is not just the layperson who is perplexed; the propellerheads are also struggling. Susskind observes that even on the matter of basic definitions, the experts can’t agree. ‘When 400 AI experts were asked to select from a range of proposed definitions, the most popular was only accepted by 56% of them.’ That was before things got complicated with the most recent and potent wave of AI change, driven by Large Language Models and Generative AI. ‘We already don’t fully understand how our most advanced AI systems work. And it is likely that this state of incomprehension will grow…we do not have general, systematic, scientific ways of explaining how ChatGPT and similar systems do what they do’. This sense of AI ‘voodoo’ is often called out by those at the leading edge of AI research, for example in the mysteries of LLMs identified by Christopher Summerfield in ‘Strange New Minds’ (AIBR #2). It seems that anyone can be challenged by AI.
In our search for guidance, Susskind warns against over-reliance on one particular source - technology leaders. We should approach the predictions of the likes of Altman, Musk and Amodei with respect but caution. The risk is that such leaders ‘pronounce too dogmatically on issues like ethics, social impact, international relations and regulation’, often with a glaring lack of expertise or humility. They are ‘often too vested … to speak reliably or objectively about [AI’s] future effects.’ In this Susskind could find common cause with the concerns of The AI Con, our previous review (AIBR #3). ‘Technology is too important to be left to the technologists.’
Start at the beginning
So where do we go from here? This book sets out to guide the general reader and assumes no prior technical knowledge. Its strategy is to set out the big moving parts of the AI debate, such as the risks, the scenarios, and the destination. The tone is open-minded but questioning. Critically, rather than giving us the answers, it offers at its core a series of frameworks, or ways of looking at AI.
But first, history. Perhaps every introductory AI book starts with a potted history: a procession of seminal (or clichéd) milestones, usually comprising keywords such as Turing / Dartmouth College / symbolic AI / AI Winter / Deep Blue v Kasparov / Watson v Jeopardy / AlphaGo v Lee Sedol / Attention Is All You Need / ChatGPT / Singularity. This book covers similar ground, albeit brought to life as it connects with Susskind’s own career winding through the different eras of AI. But Susskind’s historical tour is extremely rapid. It quickly becomes clear that he is not particularly concerned here with the inner workings of AI - there are more important questions.
Process versus Outcome
The most useful idea in the whole book is ‘process versus outcome’ thinking. Susskind sees these fundamentally different viewpoints lurking behind many of the disagreements and confusions in the AI debate. A process view focuses on how work is done; an outcome view focuses on the result. So, for example, is a patient more interested in the medicine and the doctor (the process view), or in better health (the outcome view)? In AI, this manifests itself in contrasting preoccupations - such as whether AI can truly reason or be conscious (process view) versus whether it can generate outputs that are faster, cheaper or better than the human equivalent (outcome view).
Susskind is an outcome-thinker. ‘What these systems do is of paramount concern, rather than how they do it.’ Obsessing about whether an AI system can ‘really think’ (the focus, for example, of a controversial recent paper, ‘The Illusion of Thinking’) is process thinking at work. Saying AI does not think like a human is a bit like claiming that a submarine does not swim. It misses the point. Susskind has little time for po-faced social scientists who dismiss AI from a process perspective while remaining oblivious to the outcome view - the relentless doublings of AI speed and impact. ‘Whatever exaggeration there might well be about AI… we are already seeing AI secure significant efficiency and productivity gains.’ AI may be different, but it is useful.
Even if you disagree with Susskind, it is profoundly useful to see these two rival perspectives at play. If these are new to you, you will start to notice them immediately.
Working with uncertainty
Alongside the process versus outcome approach, Susskind uses a range of other tools to work with the unfolding uncertainty around AI. First, to clear the way, he fires off a salvo of foundational observations: AI is increasingly capable; AI is accelerating exponentially; and there is no apparent finish line for AI (what we see today is not at or near an end state). Not everyone would agree, but for Susskind these tenets are essential. ‘We are living in an era of greater technological change than humanity has ever witnessed.’
Clear thinking is impaired by a range of common but crippling human biases, such as ‘irrational rejectionism’, ‘technological myopia’ or ‘not us’ thinking (the tendency of people to think that other people’s jobs will be at risk from AI - but not their own).
Then there is scenario thinking. The book usefully lays out a convincing range of alternative futures that AI could be taking us towards, from Hype, through GenAI+, AGI and Superintelligence, to Singularity. AGI, or Artificial General Intelligence (where AI ‘roughly equates to having a computer that has the full range of intellectual capabilities’), emerges as Susskind’s focus. ‘We should plan for AGI arriving between 2030 and 2035.’ Some would deny the very possibility of AGI; others would argue that it is already here. The key for beginners is partly to understand the terms, but also to recognise that they sit on a spectrum of possibilities.
In the same spirit Susskind sets out: seven different types of AI risk (tagged as existential, catastrophic, political, socio-economic, unreliable performance, over-reliance and inaction); five or maybe six ages of human progress, from spoken words to digital technology, transhumanism and beyond; and an important three-way distinction on the impact of AI - automation, elimination and innovation. Susskind notes that most AI usage today is automation: taking existing work and doing it with machines, often faster and better. Then there is elimination, for example a new type of medical diagnosis that completely bypasses the need for surgery. Thirdly there is innovation, which solves problems that were completely unknown or unaddressed before AI. Susskind’s advice for those trying to create an impact with AI is to look beyond automation. Automation opportunities are the most visible, but they will prove to be a small subset of what AI can achieve.
Finally there is a recognition that, for all its variations and fuzziness, AI does not exist in a vacuum. We should look out for the parallel technologies that magnify AI’s impact, in particular Virtual Reality (VR) and Brain-Computer Interface (BCI). I would add humanoid robots to this list, but the point is well made that AI is taking flight with other technologies. ‘The gap between humans and 1s and 0s has now almost been closed. It will shut completely with the advent of virtual reality and brain-computer interfaces.’
Susskind is alert to the confusions that arise from language. ‘We don’t have the words’ for all the new possibilities created by AI (for Susskind even ‘AI’ itself is a problematic label; he prefers ‘massively capable systems’). But he wants to avoid heading too far down the rabbit hole, and one escape route is ‘as if’ thinking. Take the big unresolved questions about whether AI can really judge, create or be intelligent. While those remain open, we can make better decisions about how we work with AI if we behave ‘as if’ these systems embody such qualities. Welcome to ‘quasi-creative’, ‘quasi-judgmental’ and ‘quasi-intelligent’ AI. This might seem academic, but Susskind is trying to keep us moving past the philosophical quicksand.
Where next?
So Susskind provides a number of frames and tools for looking at AI that we can use to form our own opinions. But he also sets out his own predictions, focused in particular on the world of institutions and policy. He is clear that we are not ready for AI.
Given his career and current responsibilities, he is most expansive when looking at the legal system, observing how current ideas and practices - legal personality, intellectual property, the court system - are unfit for purpose in an AI-powered future. His thinking extends more broadly into healthcare, education and beyond. The whole policy and regulatory landscape is narrow and reactive: too focused on GenAI, and lacking ambition and vision.
His strongest recommendation is that the scenario of pervasive AGI be taken seriously - now. ‘I call loudly for what-if-AGI thinking - in government, business, education and beyond… failing to look squarely at AGI as a probable future would be unforgivable.’ AI of at least AGI sophistication is imminent, and we need to build the policies, skills and strategies for dealing with it.
How to Think About AI is firmly grounded. However, it does allow itself to look into the longer term, and its logic takes us to some amazing places. Susskind explores the long-term implications for consciousness, the cosmos, the end of evolution and sharing the planet with machines. Unlike some of the more visionary thinkers on AI, such as Ray Kurzweil (author of ‘The Singularity Is Near’ and ‘The Singularity Is Nearer’), Susskind doesn’t conclude that these more mind-blowing scenarios are likely. What is striking is that they find their way into this sober analysis as relevant possibilities.
Ultimately this small book covers a lot of ground, yet it can’t cover everything. Some topics demand further exploration, particularly the impact of AI on distinct groups such as businesses, consumers, families and citizens - and we will look into each of these in more specialised books. But How to Think About AI is a strong foundation, providing a clear framing of many of the questions that matter.
It also has a refreshing openness, visible in its acknowledgement of alternative futures, and the recognition that the author’s opinions have changed over time. Susskind interestingly confesses that he had once rather overlooked the risks of AI. ‘[I have] become more sharply aware of the range and scale of the threats that AI poses… I now believe that balancing the benefits and threats of artificial intelligence - saving humanity with and from AI - is the defining challenge of our age.’ Game on.
Next review - ‘AI First: The Playbook for a Future-Proof Business’, by Adam Brotman and Andy Sack (June 2025)
AI Book Review is 100% Human-written!
Richard Susskind - thanks for your note, you are very welcome; it has helped me think through a number of key concepts. I particularly like the ‘process v outcome’ distinction - I have used it already! From your perspective, was there anything about the overall process of writing and producing this book that has taken you by surprise?