Artificial & Unintelligent

29.03.26

Being Intelligent about Artificial Intelligence

The two talks at GSA landed well because they didn’t feel like “AI is the future” hype, but they weren’t doom and gloom either. They were basically about how we keep the human part of life intact while these tools get more powerful and more normal.

Alastair Macdonald framed it as “avoiding the zombie apocalypse,” meaning people becoming so absorbed in phones and tech that they end up present-but-not-present. He opened the AI section with a quote from Mustafa Suleyman that sets the tone for the whole topic: “If you’re not a little afraid of AI, you’re not paying attention.”

What I liked is that his talk wasn’t “technology bad.” It was more like: do we actually think things through carefully enough, and how can we control and use technology in ways that benefit our shared humanity? That’s such a design question, because the zombie apocalypse isn’t some sudden event; it’s loads of small design decisions stacked over time.


Attention as a design material

One thread Alastair kept pulling on was attention. Not just as something we “spend,” but as something that can be taken, shaped, and monetised. He also raised the uncomfortable question of whose interests are being served when the design of a technology encourages certain behaviours.

That links to a lot of work outside the course as well. Tristan Harris has called the knock-on effects of persuasive tech “human downgrading,” basically the idea that these systems gradually erode attention, relationships, and wellbeing because the business model rewards keeping people hooked.

That’s where the “zombie” feeling comes from for me. It isn’t that people are weak, it’s that the systems are genuinely designed to be sticky.

Invisible Costs

One of the more surprising points in Alastair’s later slides was how physical AI actually is. Not just electricity, but water. He included a line along the lines of: every time you ask an AI chatbot a question, you might be using more water than your morning coffee, because of data centre cooling. This is something I touched on in my blog on sustainability, where I wrote about how designers must consider the less obvious environmental impacts of the software tools we use.

There’s research that puts numbers behind the broader point. A widely cited paper on AI’s water footprint estimates that training GPT-3 in Microsoft’s data centres could directly evaporate around 700,000 litres of freshwater, and it argues that water use is an overlooked part of AI’s environmental cost.

That’s the kind of thing that should probably be part of “responsible AI” conversations more often, especially for designers, because we’re the ones normalising these tools into everyday workflows.

AI & XR for Good, Bad and Ugly

Paul Chapman’s talk felt like the counterbalance. He’s deep into emerging technologies, and he spoke about the ways AI and VR/AR can be used for genuine benefit, with the important caveat that there are contexts where they just aren’t appropriate.

The part that grabbed me most was the idea of human models and virtual environments in lab scenarios, where multiple people can work on the same thing together. That’s where VR stops being escapism and becomes a shared tool. Training, simulation, collaboration, safety, accessibility: these are all areas where “virtual” can support real life rather than replace it.

I also think design has to get better at drawing that line, because the same technology can either be used to help people learn, work, and communicate, or it can be used to keep people inside a loop.

When a tool starts changing the person using it

A lot of the fear around AI gets dismissed as people being “anti-technology,” but I do not think that is what is going on. Most people are not scared of robots taking over in a movie way; they are scared of the quieter stuff. The slow shift where you do not fully know what is true anymore, you do not know who made something, you do not know what is nudging your decisions, and you do not know who is actually in control of the systems you rely on every day. When AI becomes baked into search, education, workplaces, and communication, it starts to sit in the position of “default truth,” and that is a lot of power for a small number of companies to hold.

I also think a massive part of the fear is around losing agency. Not just jobs and industries changing, but people feeling like they have to adapt to the tools rather than the tools adapting to human life. That is where the whole “zombie apocalypse” idea starts to feel less like a joke and more like a warning.

And then there is the part I worry about most, the impact on younger people. When I was growing up, parents and teachers always pushed the importance of analogue working. Sketching, making, prototyping, getting hands-on, learning how materials behave, and not letting software do all the thinking. Even when we started using CAD and cameras more, there was still this line of, “do not lose the fundamentals.” Now we are in a place where kids are being told to stick with CAD and cameras rather than jumping straight to AI, and it almost feels like history repeating, but faster. Traditional analogue techniques fall further into the background, and suddenly the argument of efficiency versus authenticity looms over everything.

The difference this time is that younger people are being exposed to AI from the start. They are not choosing it later, it is just there, built into how they learn, communicate, and create. I genuinely feel my generation might be the last to have experienced a big chunk of life without AI in the background. In a weird way that gives us something valuable. We can actually call on memories of doing things the slower way, the satisfaction of learning a skill properly, and the pride that comes from the time and effort it takes to make something great. That does not mean rejecting new tools, but it does mean we have a responsibility to protect the parts of making and thinking that AI cannot replace, and to notice when convenience starts quietly taking something human away.

The part I keep thinking about: escape, replacement, and real people

The thing I find most interesting, and also the most unsettling, is how easy it is for people to slide from “tool” to “escape.” Not just VR worlds, but AI as a replacement for conversation, or comfort, or social friction.

Alastair ends his deck with big questions about whether we want to abdicate how we think and how we interact directly with one another, and about how we can reduce our complicity in tech that harms wellbeing.

That’s probably where I land too. AI and virtual realities are not automatically good or bad. The outcome depends on what we optimise for. If we optimise for human flourishing, collaboration, and real-world benefit, it’s exciting. If we optimise for engagement, control, and extraction, we drift closer to the zombie apocalypse without even noticing.