Does ChatGPT use 10x more energy than a standard Google search?
A journey down the rabbit hole of viral AI energy claims. It's probably true in relative terms, but that's not what matters.
I keep running into this hot take everywhere: 'Don't use ChatGPT! It uses 10x more energy than a Google search! Stop destroying the planet!'
It's time we had a proper conversation about this. Based on what I know, this anti-AI energy argument is missing the forest for the trees. And as you’ll see, the argument rests on largely outdated data. Part of the reason I’m writing this post is that I'm completely open to not having the best data, and therefore to being wrong. If you've got better data, I would love to learn about it.
This is nevertheless an opinionated piece, so buckle up 😅
Bad advice
First, the advice to not use AI is seriously bad. AI is the defining technology of our age, and it will change the world like no technology before it. Trying to avoid it is like closing your eyes to a high-speed train heading your way. That usually doesn't end well.
If you want to understand the world we're going to live in, you've got to use AI on a regular basis. Find the use cases that work for you. Find its downsides. Criticize it based on your experience. Work with it. Improve it. Not mastering AI will be worse than not mastering digital technology was. And for young people, asking them to not use AI is like asking them to put on a blindfold (Thank goodness they're not listening).
Energy requirements
So, where did this energy data come from in the first place? I see it pop up everywhere, so let's walk through an example. I was recently alerted to the advice not to use ChatGPT via the UN: the UN Environment Programme released a news story about the supposed environmental impact of AI. It contains the following quote:
A request made through ChatGPT, an AI-based virtual assistant, consumes 10 times the electricity of a Google Search, reported the International Energy Agency.
Ok, let's go and check the report of the International Energy Agency. In this 170-page report, we can find the following quote:
When comparing the average electricity demand of a typical Google search (0.3 Wh of electricity) to OpenAI’s ChatGPT (2.9 Wh per request), …
Numbers! We're finally getting somewhere. The linked paper was published in the journal Joule by Alex de Vries. In it, we find the following:
Alphabet’s chairman indicated in February 2023 that interacting with an LLM could “likely cost 10 times more than a standard keyword search.” As a standard Google search reportedly uses 0.3 Wh of electricity, this suggests an electricity consumption of approximately 3 Wh per LLM interaction.
We're now down to the claim of a single person, who prefaces the statement with "likely". But we're talking about Alphabet's chairman, so we should take this seriously. The Reuters article from which the quote comes contains the following passage in full:
In an interview, Alphabet's Chairman John Hennessy told Reuters that having an exchange with AI known as a large language model likely cost 10 times more than a standard keyword search, though fine-tuning will help reduce the expense quickly.
Ok, not much more information...
Let's start with the Google search query. In the quotes above, it was claimed that a Google search uses 0.3 Wh. The linked Google blog post is from 2009, though, and things have certainly improved since then. Newer estimates (though not from Google) from 2024 put this number at 0.04 Wh, almost a 10x improvement. Let's go with that.
On the LLM side, things are murkier. The estimate of 3 Wh above comes from simply taking one person at their word and multiplying an outdated 2009 estimate by ten. To be fair to Alex, though, he also writes, “This figure aligns with SemiAnalysis’ assessment of ChatGPT’s operating costs in early 2023, which estimated that ChatGPT responds to 195 million requests per day, requiring an estimated average electricity consumption of 564 MWh per day, or, at most, 2.9 Wh per request.” Aha - let’s go and look at that SemiAnalysis assessment. It states the following:
”We assume that OpenAI used a GPT-3 dense model architecture with a size of 175 billion parameters, hidden dimension of 16k, sequence length of 4k, average tokens per response of 2k, 15 responses per user, 13 million daily active users, FLOPS utilization rates 2x higher than FasterTransformer at <2000ms latency, int8 quantization, 50% hardware utilization rates due to purely idle time, and $1 cost per GPU hour.” (emphasis mine)
All good and fine, except the “average tokens per response of 2k“. That's an incredibly long response. Play with most models, and responses are generally on the order of 200 tokens, especially in regular, conversational exchanges. Since the energy demand largely scales with the number of output tokens, we can divide their estimate of 2.9 Wh by - drum roll - a factor of 10!
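If you want to check this arithmetic yourself, here's a minimal Python sketch. The request count and daily energy figure are taken from the SemiAnalysis quote above; the linear scaling of energy with output tokens is my simplifying assumption:

```python
# Back-of-envelope check of the SemiAnalysis-derived figure.
# Inputs come from the quote above; the token scaling is an assumption.

requests_per_day = 195e6        # requests/day (SemiAnalysis)
energy_per_day_wh = 564e6       # 564 MWh/day, expressed in Wh (SemiAnalysis)

wh_per_request = energy_per_day_wh / requests_per_day
print(f"Implied energy: {wh_per_request:.1f} Wh/request")  # ~2.9 Wh

# SemiAnalysis assumed ~2,000 output tokens per response. If a typical
# response is closer to ~200 tokens, and energy scales roughly linearly
# with output tokens, the estimate drops by about a factor of 10.
corrected = wh_per_request * 200 / 2000
print(f"Corrected for response length: {corrected:.2f} Wh/request")  # ~0.29 Wh
```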
Can we find more data? Yes - I recently came across a nice paper from October 2023 entitled “From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference”. The authors measured GPU energy usage with NVIDIA's built-in monitoring tools during model runs on different hardware setups. They then calculated the energy per generated token by dividing the total measured energy by the number of output tokens produced, testing this across different configurations to understand typical energy costs.
The overall estimated energy consumption was around 3-4 joules per generated token. Thus, an average LLM request-response interaction using the LLaMA 65B model (with about 200 tokens in total) consumes approximately 0.2 Wh. Of course, a ChatGPT request may use more energy, depending on the model, but things have probably also gotten more efficient since 2023. In any case, this estimate aligns well with the SemiAnalysis estimate mentioned above, once corrected for response length.
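Here's that calculation spelled out, using the midpoint of the paper's 3-4 J/token range and my ~200-token assumption for a typical response:

```python
# Per-request energy from the "Words to Watts" per-token ballpark.
joules_per_token = 3.5       # midpoint of the paper's 3-4 J/token range
tokens_per_response = 200    # my assumption for a typical conversational exchange

joules_per_request = joules_per_token * tokens_per_response  # 700 J
wh_per_request = joules_per_request / 3600                   # 1 Wh = 3,600 J
print(f"{wh_per_request:.2f} Wh per request")                # ~0.19 Wh
```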
What does it mean?
If we take more current estimates, the 10x difference seems to stand: on the one hand, the original Google search estimate from 2009 was outdated by 2024, overestimating today’s energy needs by roughly a factor of 10; on the other hand, the estimate for an average LLM request was also overestimated by a factor of ~10. Since these cancel each other out, the original relative claim stands, but based on much lower absolute numbers. This is important.
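In numbers (using the old vs. updated estimates discussed above; the corrected LLM figure is my rough 0.2 Wh estimate):

```python
# Does the relative "10x" claim survive the updated numbers?
google_2009, google_2024 = 0.3, 0.04   # Wh per search (2009 blog post vs. 2024 estimate)
llm_claimed, llm_corrected = 2.9, 0.2  # Wh per LLM request (claimed vs. corrected)

print(f"Old ratio: {llm_claimed / google_2009:.0f}x")    # ~10x
print(f"New ratio: {llm_corrected / google_2024:.0f}x")  # ~5x - same ballpark
```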
On the face of it, the advice to skip AI and do Google searches instead seems reasonable from an energy point of view. However, there are numerous issues with it.
First, as mentioned above, the use of AI is much more powerful than simple internet search. So we're comparing apples with oranges. You can do much more with an LLM than you can do with a standard Google search.
Second, as you've undoubtedly seen in many places, and increasingly often, a Google search now also renders an AI-assisted result through Gemini. I don't know to what extent these are cached, but to the extent that search results become more and more “LLM-enriched”, the whole search-vs-LLM comparison becomes increasingly moot.
Third, the energy usage of AI models is rapidly improving. There are still many, many efficiency gains to be had because this is such a new technology. On the other hand, we are moving towards a world of more compute at inference time (the “thinking” aspect behind models like o1 and o3), which uses many more tokens than a simple single pass. This would increase energy usage. These effects might balance each other out, though if history is any guide, net efficiency will improve (Moore's Law and economics say hi).
About that energy
Overall, things seem complicated, but let's get the most important aspect straight here. Let's say you're a heavy LLM user making 100 requests every single day. At roughly 0.2 Wh per request, that works out to 20 Wh per day for your AI needs.
How much is 20 Wh of energy? Let's use an everyday object for comparison - a car. An average gasoline car might consume around 8 L of fuel per 100 km. One liter of gasoline contains about 9-10 kWh of energy. That works out to ~0.72 kWh/km. In such a car, 20 Wh advances you about 30 meters. Think about that - your entire day of heavy AI usage equals driving your car the length of a tennis court. A more efficient car might move you 60 m, a very efficient EV perhaps even 100 m. In other words, a full year of ChatGPT use is energetically equivalent to driving to your favorite restaurant and back - once.
If you'd like to stick with electricity, 20 Wh gets you at best 20 minutes of TV - and that's just for the TV; streaming energy is not included. (No, not going to make a joke about Netflix-and-chill energy use.)
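For the skeptics, here's the same back-of-the-envelope math in Python, using the rough figures assumed above (8 L/100 km, ~9 kWh per liter of gasoline, and a ~60 W TV as my assumption behind the "20 minutes"):

```python
# Putting 20 Wh/day of heavy LLM use into perspective.
daily_llm_wh = 100 * 0.2       # 100 requests/day at ~0.2 Wh each = 20 Wh

# Gasoline car: 8 L/100 km at ~9 kWh per liter -> ~0.72 kWh/km.
kwh_per_km = 8 / 100 * 9
car_meters = daily_llm_wh / 1000 / kwh_per_km * 1000
print(f"Car-equivalent distance: {car_meters:.0f} m/day")  # ~28 m

# TV: a ~60 W set running off 20 Wh.
tv_watts = 60
tv_minutes = daily_llm_wh / tv_watts * 60
print(f"TV-equivalent time: {tv_minutes:.0f} min")         # 20 min
```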
In general, moving around atoms is much more energy-intensive than moving around electrons. Driving less, or heating less, will have an outsized effect on your energy consumption compared to almost anything you do with electricity. In addition, electricity can be produced in an almost entirely CO2-neutral way.
So to summarize: yes, it's plausible that AI requires an order of magnitude more energy than internet search, but it can also do much more, and if your internet search becomes "LLM-enriched", all bets are off. The absolute energy requirements are probably about 10x lower than what is usually claimed. But most importantly, in terms of absolute energy, this is still a drop in the bucket. It makes no sense to focus our energy discussions here - they should be focused on things like transportation, heating, and other activities that involve moving atoms. And before we forget: AI will be able to assist us heavily in improving exactly these areas.
One more thing…
Hey, what about combining LLM requests and search? You know, like Perplexity? Indeed, let’s put the question to Perplexity:
Did I say these numbers are popping up everywhere? 😂 Seriously... I went down the rabbit hole with these links, and yes, they all lead back to the same original sources - an outdated Google blog post, a guesstimate by Alphabet's chairman, and a SemiAnalysis Substack post. What a world we're living in! Next time someone tells you not to use AI because of energy usage, you know where to send them... 🐰🕳️
CODA
This is a newsletter with two subscription types. I highly recommend switching to the paid version. While all content will remain free, I will donate all financial support to the EPFL AI Center.
To stay in touch, here are other ways to find me:
Social: I’m mainly on LinkedIn but also have presences on Mastodon, Bluesky, and X.
Podcasting: I’m hosting an AI podcast at the EPFL AI Center called “Inside AI” (Apple Podcasts, Spotify), where I have the privilege to talk to people who are much smarter than me.
Conferences: I’m an organizer of AMLD, the Applied Machine Learning Days - our next large event, AMLD 2025, takes place on Feb 11-14, 2025, in Lausanne, Switzerland.