Discussion about this post

DG · Mar 26 (edited)

Interesting post, and a great reminder that these viral statistics should always be viewed with scepticism.

As I read the article, I considered my anecdotal experience of LLM energy use. I use Ollama to run AI models on a machine equipped with an Nvidia RTX 3090. Roughly speaking, a query causes the graphics card to draw about 350 watts (according to system monitoring) for around 30 seconds (an estimate; it varies).

Energy (Wh) = Power (W) × Time (h) = 350 W × (30/3600) h ≈ 350 W × 0.00833 h ≈ 2.9 Wh
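The arithmetic above can be sketched as a short script (the 350 W draw and 30 s duration are the commenter's rough figures, not measured constants):

```python
# Back-of-the-envelope energy estimate for a single local LLM query.
def query_energy_wh(power_watts: float, duration_seconds: float) -> float:
    """Energy in watt-hours: power (W) multiplied by time (hours)."""
    return power_watts * (duration_seconds / 3600.0)

# Commenter's rough figures: ~350 W for ~30 seconds.
energy = query_energy_wh(350.0, 30.0)
print(f"{energy:.2f} Wh")  # ~2.92 Wh per query
```

The same helper makes it easy to test other scenarios, e.g. a longer 2-minute generation at the same draw: `query_energy_wh(350.0, 120.0)` gives about 11.7 Wh.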

Kira Nerys

Sure, but LLMs are not a replacement for search engines, which query databases. I'm not saying generative AI should be avoided at all costs, but it does consume more energy than a database-backed search, whether or not it's ten times as much. Sometimes LLMs are useful; sometimes they're unnecessary if you're smart enough to know which tool fits which purpose. Not to mention that the "factual" information LLMs provide always needs to be verified against sources anyway, since even powerful models still output incorrect information (they "hallucinate"; I don't have time to explain that, Google it). It's not as black and white as you and the critics on the other side make it out to be.

2 more comments...
