Why AI Is a Disaster for the Climate

Amid all the hysteria about ChatGPT and co, one thing is being missed: how energy-intensive the technology is

What to do when surrounded by people who are losing their minds about the Newest New Thing? Answer: reach for the Gartner Hype Cycle, an ingenious diagram that maps the progress of an emerging technology through five phases: the “technology trigger”, which is followed by a rapid rise to the “peak of inflated expectations”; this is succeeded by a rapid decline into the “trough of disillusionment”, after which begins a gentle climb up the “slope of enlightenment” – before eventually (often years or decades later) reaching the “plateau of productivity”.

Given the current hysteria about AI, I thought I’d check to see where it is on the chart. It shows that generative AI (the polite term for ChatGPT and co) has just reached the peak of inflated expectations. That squares with the fevered predictions of the tech industry (not to mention governments) that AI will be transformative and will soon be ubiquitous. This hype has given rise to much anguished fretting about its impact on employment, misinformation, politics etc, and also to a good deal of anxious extrapolation about the existential risk it might pose to humanity.

All of this serves the useful function – for the tech industry, at least – of diverting attention from the downsides of the technology that we are already experiencing: bias, inscrutability, unaccountability and its tendency to “hallucinate”, to name just four. And, in particular, the current moral panic also means that a really important question is missing from public discourse: what would a world suffused with this technology do to the planet? Which is worrying because its environmental impact will, at best, be significant and, at worst, could be really problematic.

How come? Basically, because AI requires staggering amounts of computing power. And since computers require electricity, and the necessary GPUs (graphics processing units) run very hot (and therefore need cooling), the technology consumes electricity at a colossal rate. Which, in turn, means CO2 emissions on a large scale – about which the industry is extraordinarily coy, while simultaneously boasting about using offsets and other wheezes to mimic carbon neutrality.

The implication is stark: the realisation of the industry’s dream of “AI everywhere” (as Google’s boss once put it) would bring about a world dependent on a technology that is not only flaky but also has a formidable – and growing – environmental footprint. Shouldn’t we be paying more attention to this?

Fortunately, some people are, and have been for a while. A study in 2019, for example, estimated the carbon footprint of training a single early large language model (LLM) such as GPT-2 at about 300,000kg of CO2 emissions – the equivalent of 125 round-trip flights between New York and Beijing. Since then, models have grown by orders of magnitude, and their training footprints have grown correspondingly.
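That equivalence, incidentally, is straightforward arithmetic. Here is a quick sanity check in Python, assuming the roughly 2,400kg of CO2 per passenger round trip that the study’s comparison implies (the per-flight figure is inferred from the equivalence itself, not independently sourced):

```python
# Sanity check of the flights equivalence for the training footprint.
# The per-flight figure is inferred from the study's own comparison,
# not an independently sourced number.

TRAINING_CO2_KG = 300_000        # estimated emissions for training one early LLM
CO2_PER_ROUND_TRIP_KG = 2_400    # implied per-passenger figure, New York-Beijing return

flights = TRAINING_CO2_KG / CO2_PER_ROUND_TRIP_KG
print(f"Training footprint ~= {flights:.0f} round-trip flights")  # 125
```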

But training is only one phase in the life cycle of generative AI. In a sense, you could regard those emissions as a one-time environmental cost. What happens, though, when the AI goes into service, enabling millions or perhaps billions of users to interact with it? In industry parlance, this is the “inference” phase – the moment when you ask Stable Diffusion to “create an image of Rishi Sunak fawning on Elon Musk while Musk is tweeting poop emojis on his phone”. That request immediately triggers a burst of computing in some distant server farm. What’s the carbon footprint of that? And of millions of such interactions every minute – which is what a world of ubiquitous AI will generate?

The first systematic attempt at estimating the footprint of the inference phase was published last month and goes some way to answering that question. The researchers compared the ongoing inference cost of various categories of machine-learning systems (88 in all), covering both task-specific models (ie those fine-tuned to carry out a single task) and general-purpose ones (ie those – such as ChatGPT, Claude, Llama etc – trained for multiple tasks).

The findings are illuminating. Generative tasks (text generation, summarising, image generation and captioning) are, predictably, more energy- and carbon-intensive than discriminative tasks. Tasks involving images emit more carbon than ones involving text alone. Surprisingly (at least to this columnist), training AI models remains much, much more carbon-intensive than using them for inference. The researchers tried to estimate how many inferences would be needed before their cumulative carbon cost equalled that of training. In the case of one of the larger models, it would take 204.5m inference interactions, at which point the model’s total carbon footprint would have doubled.
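The break-even logic behind that figure is simple division: one-off training emissions over per-query inference emissions. Here is a minimal sketch in Python; the input numbers are illustrative placeholders chosen only to reproduce the reported 204.5m figure, not values taken from the paper.

```python
# Break-even point: how many inference requests it takes for cumulative
# inference emissions to match the one-off training emissions.
# Inputs below are illustrative placeholders, not figures from the study.

def break_even_inferences(training_co2_kg: float, co2_per_inference_kg: float) -> float:
    """Inferences needed for cumulative emissions to equal training emissions."""
    return training_co2_kg / co2_per_inference_kg

# Hypothetical inputs: a 25,000kg training footprint and roughly
# 0.12g of CO2 per query happen to land near the reported figure.
n = break_even_inferences(training_co2_kg=25_000, co2_per_inference_kg=0.00012225)
print(f"Break-even after ~{n / 1e6:.1f} million inferences")  # ~204.5 million
```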

This sounds like a lot but, on an internet scale, it isn’t. After all, ChatGPT gained 1 million users in its first week after launch and currently has about 100 million active users – enough, as the rough sum below suggests, to cross that break-even threshold in about a day.
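Here is that sum, sketched in Python. The user count comes from the paragraph above; the queries-per-user rate is purely an assumption for illustration.

```python
# How quickly 204.5m inferences accrue at internet scale.
# The user count is from the article; queries per user per day is
# an assumed, illustrative figure, not a measured one.

BREAK_EVEN_INFERENCES = 204_500_000
ACTIVE_USERS = 100_000_000
QUERIES_PER_USER_PER_DAY = 2  # assumption for illustration

daily_inferences = ACTIVE_USERS * QUERIES_PER_USER_PER_DAY
days = BREAK_EVEN_INFERENCES / daily_inferences
print(f"Break-even crossed in ~{days:.1f} days")  # ~1.0 days
```

So maybe the best hope for the planet would be for generative AI to topple down the slippery slope into Gartner’s “trough of disillusionment”, enabling the rest of us to get on with life.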
