Some thoughts on AI and its costs
1
I watched this video by Simon Clark called ‘Should I feel guilty about using AI?’ and found the opening pretty interesting:
Asking a large language model like ChatGPT a question uses approximately 3.6 joules of energy, enough to run an LED bulb for one second.
Using an AI tool to convert speech to text uses approximately 79 joules, while generating an image from a text prompt uses approximately 1700 joules, enough to power a laptop for about 30 seconds. And right now, ChatGPT alone processes over 1 billion queries each day. AI tools consume a lot of energy and our use of these tools is exploding…
There is legitimate fear that our use of tools like ChatGPT and Midjourney will cause serious environmental damage.
I was like: what? So, querying ChatGPT 60 times is like running an LED bulb for a minute, or generating 60 images is like having my laptop run for 30 minutes—long enough to watch this video? That doesn’t seem super energy-intensive, tbh. We don’t usually talk about lights and laptops as having horrendous environmental impacts.
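(Taking the video's figures at face value, the arithmetic does check out. Here's a back-of-the-envelope sketch in Python; the bulb and laptop wattages aren't stated in the video, they're just what its own comparisons imply.)

```python
# Back-of-the-envelope check of the video's energy comparisons.
# Assumed wattages are implied by the video's figures, not stated:
# an LED bulb drawing 3.6 W, a laptop drawing ~57 W.

CHAT_QUERY_J = 3.6       # one ChatGPT-style query, in joules
IMAGE_GEN_J = 1700       # one text-to-image generation, in joules
LED_BULB_W = 3.6         # since 3.6 J "runs an LED bulb for one second"
LAPTOP_W = 1700 / 30     # since 1700 J "powers a laptop for about 30 s"

# 60 queries -> seconds of LED light
led_seconds = 60 * CHAT_QUERY_J / LED_BULB_W
print(led_seconds)       # 60.0 — a minute of light

# 60 generated images -> minutes of laptop use
laptop_minutes = 60 * IMAGE_GEN_J / LAPTOP_W / 60
print(laptop_minutes)    # 30.0 — half an hour of laptop time
```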
Indeed, this is the sort of argument that Andy Masley, for example, uses to argue that ChatGPT is not bad for the environment. Masley argues that generative AI use by average people is a tiny, tiny fraction of their total carbon footprint, such that abstaining from chatbot use is just not a sensible thing for climate-concerned people to focus on.
And interestingly enough, Simon actually agrees! His answer to his titular question is ‘No, you shouldn’t feel guilty for using AI’, for reasons similar to Andy’s. Throughout, he is scrupulous about putting AI’s environmental impacts in context. For example, he points out that while AI uses a lot of water, it’s still less than 0.1% of what agriculture uses. And yet, Simon’s video caption says ‘AI tools undeniably have a large impact on the environment’, and the overall rhetoric and vibe of the video is ‘AI is a big environmental problem’.
It makes me wonder whether this debate isn’t really about how much energy or water AI uses, but about…how worth it AI is, generally.
Perhaps asking ‘is AI bad for the environment?’ is like asking ‘is £50 expensive?’ That’s a nonsensical question, on its own. Expensive for what? For a month’s rent in London, or a transatlantic flight, £50 is incredibly cheap. For a bag of crisps or a 10-minute bus ticket, £50 is incredibly expensive. Expense is a property of an exchange, not of a resource on its own.
I think people’s opinions on AI’s environmental impact are often correlated with their opinions about its usefulness. If you think that AI is a purveyor of slop and misinformation, then you’re more likely to complain about how bad for the environment it is. If you think that it’s a game-changingly helpful productivity tool, then you’re more likely to reach for Andy’s cheat sheet.
Why is this? Well, it could be the horns effect in the case of the AI critics, or self-serving bias in the case of the AI defenders. But I think it’s also that these two groups disagree profoundly on what the environmental costs of AI actually buy. If you find that GPT often gives you helpful advice, or if you really don’t want to have to write that essay/email, or if you love creating weird and wonderful AI art, then it seems reasonable to pay 3-seconds-of-lighting’s-worth of energy to use it each time (like paying £50 for rent). If you see it as a brain-sucking peddler of lies that’s nonetheless constantly foisted upon us against our will, then the fact that it also uses a bunch of energy is just all the more reason to be annoyed at it (like someone taking a shit on your doorstep and then demanding £50 for the privilege).
Andy is aware that some people who worry about AI’s environmental impact just don’t like it in general:
If you think there are reasons why ChatGPT is not just useless but actively harmful (copyright, hallucinations, job loss, risks from advanced AI, etc.) make the case directly without adding incorrect climate statistics… If you try to smuggle in a lot of unconvincing additional reasons why something’s bad, it undermines your otherwise strong case. Environmental objections to ChatGPT often dilute other serious criticisms of the technology.
I have a lot of time for this perspective. But to defend the AI critics a little: if you think that something is very bad generally, then that makes any cost—even a ‘fair’ cost—a genuine, if peripheral, additional reason why it’s bad. We might call this a ‘to add insult to injury’ cost.
Consider a couple who share a home and finances, Adam and Bea. One day, Adam is horrified to discover that Bea wants to purchase a ginormous sculpture that is as hideously ugly as it is politically questionable. In the course of their argument over whether to buy the sculpture, he exclaims, ‘You really want to waste £100 on that piece of junk?!’
Is this legit? Bea doesn’t think so—she knows that Adam would think nothing of spending £100 on a pair of shoes, or a couple of concert tickets, or a hotel room. £100 isn’t expensive for a hand-crafted statue.
But from Adam’s point of view, the fact that the statue costs £100 is a genuine bad thing about it, even if it’s not his central objection to buying it. If Bea buys the statue, not only will they have an atrocity in their dining room and be scorned by their artsy friends, but, to add insult to injury, they’ll be £100 down—£100 that, as Bea rightly points out, could be spent on a nice new pair of shoes, or a concert, or a holiday—you know, things that are actually good.
And this ‘to add insult to injury’ argument obviously bites harder the less money you have. It’s one thing if Adam and Bea are comfortably-off; it’s quite another if the alternative use of their £100 is not a hotel room, but an energy bill or some groceries. In that case, Adam’s indignation seems all the more reasonable, even if the main reason that he hates the statue is still because it’s hideous.
So if you’re already very worried about our reckless energy consumption and its effect on the climate, and you also generally hate AI, then its environmental impacts kind of are relevant: emotionally it just seems crazy to be spending scarce resources on this dumb thing, even if the impacts are not huge in the grand scheme of things.
2
So, where do I come down on all this? I find myself pretty convinced by Andy’s arguments that the environmental costs of personal AI use are not worth worrying about. But I also find weirdly dissonant the position so often espoused by my EA friends that’s well-illustrated by this 80,000 Hours article. Peter Hartree starts by saying ‘To truly understand what AI can do — and what is coming soon — you should make regular use of the latest AI services’ and the article consists of tips on how to use AI to make you more productive. And then the last sentence is:
Have fun! Then… continue reading our resources on why future AI systems may ruin the world — and what you can do to prevent that.
Now, I don’t actually think that this position—that AI is a wonderful productivity tool and also might kill us all—is incoherent logically or rationally. But I find it incoherent in terms of vibes. There’s something that feels unaesthetic about avidly using a technology that also fills you with abject terror for the future of the universe. Doesn’t it feel…off? Dissonant?
Meanwhile, I personally don’t use AI much, but that’s neither because I think that doing so bolsters plagiarism software nor because I think it contributes to outsized environmental harms. Nor is it exactly because of fear of AI doom… not that I exactly disbelieve in AI doom, but it feels like an outcome that is psychologically prudent to keep at arm’s length. It’s more just that I don’t want to, for sort of nebulous and unclear reasons. But I’m a great believer in not doing things that you don’t want to do, even if your reasons are not legible to others or yourself.
Part of it is that one of the main things that LLMs are good at is writing, and I find writing meaningful, so I don’t really want to outsource it. When I write things, I’m not producing an object but expressing a truth, and LLMs are not going to be able to express truth for me. This isn’t a comment on their writing quality; once, I put a rough draft of a blog into Claude and asked it to write it up into a finished blog. I thought the result was objectively pretty decent—no egregiously misplaced em-dashes. It sounded quite like my existing blogs. It just felt intensely wrong, in a sort of squirmy, cringey, boundary-violating way.
All of this has made me wonder whether I should explicitly abstain from AI, as part boycott, part aesthetic principle. I have a few thoughts on why to do this, but that’s perhaps for another post.