Interesting post, and a great reminder that viral statistics should always be viewed with scepticism.
Reading the article, I thought about my own anecdotal experience of LLM energy use. I use Ollama to run AI models locally on a machine with an Nvidia RTX 3090. Roughly speaking, a query causes the graphics card to draw about 350 W (according to system monitoring) for about 30 seconds (a rough estimate; it varies).
Energy (Wh) = Power (W) × Time (h) = 350 W × (30/3600) h = 350 W × 0.00833 h ≈ 2.9 Wh!
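If anyone wants to play with the figures, here's a minimal sketch of that back-of-envelope calculation in Python (the 350 W draw and 30 s duration are my rough observations above, not careful measurements):

```python
# Back-of-envelope energy estimate for one local LLM query.
# Assumed figures (rough observations, not careful measurements):
#   GPU draw while generating: ~350 W
#   Generation time per query: ~30 s

def query_energy_wh(power_watts: float, duration_seconds: float) -> float:
    """Energy in watt-hours = power (W) x time (h)."""
    return power_watts * (duration_seconds / 3600.0)

wh = query_energy_wh(350.0, 30.0)
print(f"~{wh:.1f} Wh per query")              # -> ~2.9 Wh per query
print(f"~{1000.0 / wh:.0f} queries per kWh")  # -> ~343 queries per kWh
```

Swap in your own card's power draw and generation time to see how much the estimate moves.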
Sure, but LLMs are not a replacement for search engines, which are backed by databases. I'm not saying generative AI should be avoided at all costs, but it does consume MORE energy than a database-backed search, whether or not it's ten times as much. Sometimes LLMs are useful; sometimes they're unnecessary, if you know which tool fits which purpose. Not to mention that the "factual" information LLMs provide always needs to be verified against sources anyway, and even as they get more powerful they keep producing confidently incorrect information (they "hallucinate"; look the term up if it's new to you). It's not as black and white as you and the critics on the other side make it out to be.
Also, it's easy to turn off the AI feature in Google if it doesn't serve your purposes. As a wise girl once said, ¿por qué no los dos? ("why not both?")
This article presents a fascinating paradox: 'Statistics never lie, but they often deceive.' While numbers may seem objective, their interpretation is rarely free from bias or omission. The truth is, what we read—especially data-driven claims—should never be taken as absolute. Behind every statistic lies an unseen context, and it’s often that hidden layer that holds the real insight.
The key takeaway? Always question the frame of reference. Cultivate curiosity, seek alternative perspectives, and don’t hesitate to dig deeper yourself. A well-rounded understanding comes not from accepting information at face value, but from exploring what isn’t immediately visible.
Great article—thank you for the thought-provoking work and for sharing it with us!