Artificial Intelligence: What to Worry About - 2025 Edition
Misinformation tsunami, mass unemployment, Skynet - when it comes to modern AI, what should we be worried about? I wrote a post about it in 2023; time to revisit it.
First of all, hello new readers, I'm glad you found me! I hope you find these posts useful, and that we can learn from each other.
This is an updated version of a post I wrote in May 2023. I thought now would be a good time to revisit the concerns about AI from back then, given what we know in January 2025. While I am generally very optimistic about AI and its potential to improve our lives, I believe it's important to think about concerns and potential risks.
So without further ado, here is the post, with the updates designated clearly.
Without a doubt, AI is changing the world. The advances in recent months have been extremely impressive, and everyone is left scratching their heads, asking, what does this all mean? What does it mean for my job? What does it mean for the education of my kids? What does it mean for society? What does it mean for humanity?
We all walk around with a model of the future in our heads. This model influences our decisions, and when things change, we update the model. Most people don’t like doing that (we all know that “people don’t like change”). It’s uncomfortable and can be scary because, until you have adopted a new model, the future appears very uncertain.
Even Snoop Dogg is confused.
This is a key reason why there are so many anxieties surrounding the recent developments in AI. The change was significant and instant. The old models about the future don't work anymore. While we're trying to update our models, there are a lot of possible futures on offer in the media and public discourse. Let's explore some of those.
UPDATE JANUARY 2025: Not much has changed. Snoop Dogg's confusion is still the perfect metaphor for the world's state when it comes to AI.
The future of mass unemployment
The problem: AI, some worry, will lead to mass unemployment. Once machines can do anything a human can do, why bother with humans who have demands, get sick, want vacation, and cost a lot of money?
The solution: Embrace the new technology. For one, the adoption of new technology takes longer than we think - or, as the saying goes, we overestimate the effect in the short term and underestimate it in the long run. We also have tried and tested ways of dealing with technology revolutions. And if technology is supposed to lead to mass unemployment, it is catastrophically bad at it: unemployment is at historically low levels.
My view: A job is a set of tasks. Some jobs have a very limited set of clearly defined tasks, while others have many tasks that are much more open-ended. Technology like AI does not take over jobs; it takes over tasks. The more limited a task is, the more likely it will one day be done by technology. However, most of the tasks that are highly automatable have already been automated. Of course, any new technology will automate more tasks, and therefore some tasks and some jobs may be lost. But new tasks emerge, and entirely new jobs emerge alongside them. This always happens. Sure, the speed at which everything is happening here is mind-boggling. But we've all learned in the past 10 years that we need to be fast at adopting technological change.
Worry level: ⭐️ out of ⭐️⭐️⭐️⭐️⭐️
UPDATE JANUARY 2025: Also not a whole lot of change. We're now more than two years past ChatGPT's launch, and the job market is humming along just fine. EU unemployment numbers are even lower than in 2023, and US numbers are only slightly up. Yes, AI will shake up the labor market, but I continue to expect the macro benefits to be positive overall.
The future of the misinformation tsunami
The problem: LLMs can generate massive amounts of very convincing nonsense at scale. Generative AI can create fake images, fake voices, fake videos. It will quickly become difficult to distinguish truth from falsehood using only a single piece of data (an image, a text, or a video). You found the fake image of the pope wearing a puffer jacket intriguing? Just wait until someone produces a highly realistic video of the pope claiming to be an atheist or converting to a different religion. One video might be funny, but imagine someone leaking hundreds of those, with millions of fake personas attesting to their authenticity. Things can get out of control quickly.
The solution: We urgently need ways to verify content sources. When faking papal videos becomes child's play, we need the Vatican to be able to attach an unforgeable, verifiable signature to every document it publishes. Technically, this is already possible. But we need to upgrade our digital infrastructure quickly. We should also make it very expensive to intentionally mislead people at scale. This won't stop the truly bad actors, but it will keep the vast majority at bay if they think the risk of significant punishment is too high. The proposal to legally mandate that bots identify themselves as such is also part of the solution.
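To make this concrete, here is a minimal sketch of what such cryptographic provenance could look like, using the Python cryptography library to sign a statement and verify it later. The key pair, the statement text, and the is_authentic helper are purely illustrative assumptions, not a description of any real system.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (say, the Vatican's press office) generates a key pair once
# and distributes the public key through a trusted channel.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Every official statement is signed before release.
statement = b"Official statement, 2025-01-15: ..."
signature = private_key.sign(statement)

def is_authentic(document: bytes, sig: bytes) -> bool:
    # Anyone holding the public key can check that a document is genuine
    # and has not been altered since it was signed.
    try:
        public_key.verify(sig, document)
        return True
    except InvalidSignature:
        return False

print(is_authentic(statement, signature))                   # True
print(is_authentic(b"Doctored statement ...", signature))   # False
```

Real-world provenance efforts such as C2PA content credentials build on the same basic idea, attaching signed origin information to the media itself.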
My view: We have been here before. Yes, it will be trivial to fake a video. But guess what: videos haven't been around for that long, and we all grew up with video being fake (this is what cinema was invented for - to create illusions). Text has been around for much longer, and faking text at scale began with Gutenberg. We have found ways around it. It could be painful, so we need to act urgently, but we can do this.
Worry level: ⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
UPDATE JANUARY 2025: This is an area where my level of worry has decreased. It's now truly child's play to create a fake image, and only marginally more difficult to create fake voices and videos. But while that's concerning, the sky hasn't fallen. The biggest producers of misinformation are still good old humans writing text and posting it on social media. Very promising technological solutions to the problem are being developed. For the time being, I'm reducing my worry level to ⭐️ out of ⭐️⭐️⭐️⭐️⭐️.
Skynet, a.k.a. losing control
The problem: With the advent of LLMs, we realize we have created models that can code. As these models are made with code themselves, they can, in principle, improve themselves. Smart models can thus make themselves smarter, and before we know it, these models are orders of magnitude more intelligent than we are. In the face of substantially more powerful intelligence, it seems ludicrous to think we can ever control it.
The solution: Frankly, I have a hard time thinking of one. This is the so-called “alignment problem”, which revolves around the question of how you can create a system that is aligned with what you want it to do. We have not taken this scenario very seriously in the past, and now we are realizing we may soon face it without a clear answer to the alignment problem.
My view: We should take this problem very seriously. Either you believe that we can eventually build a machine as intelligent as a human, or you don't. I do. And once this machine exists, I see no reason why it shouldn't be able to improve itself. Therefore, machines that are much, much smarter than humans will exist. There is simply no precedent for this situation, and we should spend an enormous amount of time preparing for it. Unfortunately, right now, most people who take this problem seriously are laughed at - even though this is no laughing matter.
Worry level: ⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
UPDATE JANUARY 2025: I continue to think we should take this problem very seriously. The recent breakthrough of OpenAI's o3 model on the ARC-AGI benchmark shows that progress isn't slowing down, but rather accelerating. The paradigm is shifting: the next frontier appears to be scaling at inference time - in other words, letting models "think for longer." This contrasts with previous scaling efforts, which largely focused on making models bigger.
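To illustrate the idea (and only the idea), here is a toy sketch of one simple form of inference-time scaling, often called self-consistency: instead of accepting the first answer, you spend more compute by sampling several answers and taking a majority vote. The generate_answer function is a hypothetical stand-in for a call to any stochastic model, not a real API.

```python
import random
from collections import Counter

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for a single stochastic model call;
    # in practice this would query an LLM with sampling enabled.
    return random.choice(["42", "42", "42", "41", "43"])

def answer_with_more_thinking(prompt: str, n_samples: int = 16) -> str:
    # Spend more inference-time compute: draw many candidate answers...
    candidates = [generate_answer(prompt) for _ in range(n_samples)]
    # ...and keep the most frequent one (majority vote / self-consistency).
    return Counter(candidates).most_common(1)[0][0]

print(answer_with_more_thinking("What is 6 x 7?"))
```

Frontier reasoning models almost certainly use far more sophisticated strategies than a simple vote, but the trade-off they exploit is the same: more compute per query buys better answers.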
What's missing is a more serious and rigorous effort at making progress on alignment, robustness, contamination, jailbreaking, deepfakes, data poisoning, and similar challenges. Academia would be especially well placed to make these advances. Thankfully, more attention is now being paid to the topic. My own thinking has also evolved quite considerably here, but the details will have to wait for a separate post.
Bias and discrimination
The problem: Modern AI systems are trained and fine-tuned on biased data. Without proper consideration, this bias will lead to discrimination.
The solution: The generally proposed solution is to “show me the training data.” I am not sure any law could ever mandate the disclosure of training data. What private companies do is their “secret sauce”, and I am not aware of any other industry where such a strict rule would apply. More realistic is an assessment of what the models actually do. These assessments can be done in a systematic fashion and can highlight bias quickly - more quickly, even, than we can highlight human bias. Other solutions include transparency about when an AI has made a decision and the right to a second, human opinion. These are particularly important when the government is using these systems.
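As a minimal sketch of what such a systematic assessment could look like, the snippet below compares a model's approval rates across groups on matched cases. The data, group labels, and the 80% threshold are illustrative assumptions; real audits use carefully constructed test sets and more refined fairness metrics.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    # decisions: iterable of (group, approved) pairs produced by the model under test
    totals = defaultdict(lambda: [0, 0])  # group -> [approved, seen]
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {group: approved / seen for group, (approved, seen) in totals.items()}

# Illustrative model decisions on matched cases that differ only in the group label.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = approval_rates_by_group(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# One common rule of thumb: flag the model if the worst-off group's rate
# falls below 80% of the best-off group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact - investigate before deployment.")
```

Note that this kind of test looks only at the model's behavior; it needs no access to the training data or the model's internals.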
My view: We need to assess models based on what they do, not on how they were built. In other potentially risky domains, like drug development, we take a similar approach, and it has served us well. What is important, especially for governmental use of AI, is transparency and accountability. “The AI said it” can never be the basis for state action. I’m seriously worried that we’re going in this direction.
Worry level: ⭐️⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
UPDATE JANUARY 2025: Although I remain worried about the direction, I am downgrading my worry level one notch to ⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️. I've been positively surprised by how many AI implementations are genuinely thoughtful. The explosion of activity in open models provides a real safety net against one-sided systemic bias. The counter-worry is that many frontier models aren't being publicly released anymore - a troubling trend, especially once they're deployed at government scale. Still, on balance, things haven't been as bad as many feared.
Two classes of people
The problem: The productivity increase that AI can provide is unparalleled in its scale and immediacy. However, the people who can benefit from it most are already highly educated and in pole position, which will only widen the gap between those who have and those who don't.
Take programming - good programmers have just become at least twice as productive as they were before November 2022. But to leverage this productivity gain, they already had to be good programmers to begin with. What you can see them do with LLMs is incredible. ChatGPT instantly increased my productivity (when I code) by at least a factor of 3, if not more - and the quality is generally better. It made me a better and more prolific writer. Although I still write everything by hand, I use it as an editor, which gives me a lot of confidence and increases my output. I learn much more, and much faster, especially in science. And so on, and so forth.
The point is that I am able to realize these gains from an already high plateau. I am capable of understanding and interpreting ChatGPT’s code output because I can code; I can use it to help me make my text better because I can write, and I have read a lot; I can learn so much in science with LLMs because I already have a scientific background…
My view: I have no idea how to fix this. In principle, our education system should scramble to get everyone up to speed, especially on technical subjects, so that everyone can leverage these new tools. Instead, I see the discussion heading towards “why should anyone learn to code now?”. This seems insane. We’re going full speed into an even more stratified class system.
Worry level: ⭐️⭐️⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
UPDATE JANUARY 2025: This remains my key worry. It's remarkable how productive people become once they start leveraging AI. We're just at the beginning of this revolution, and the productivity gains that will be realized in the coming years are going to be massive. If they are largely limited to a small class, we're in deep trouble.
Just think about the enormous gains technology has bestowed on the technologically savvy class in the past 10 years. Now put this development on steroids - that is where we're heading. That said, I'm still reducing the worry level a bit, to ⭐️⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️. It just seems that, overall, the idea of a “rising tide lifting all boats” isn't quite as wrong as many people would have you believe.
FINAL NOTE JANUARY 2025
A final reflection: Looking back at my concerns from 2023, you can see that I've often reduced my worry levels as feared outcomes didn't materialize - at least not at scale. But past patterns might not tell us much about what's ahead - AI systems are becoming drastically more capable, and changes could come much faster than before. That said, I remain fundamentally optimistic. Most of the early concerns about AI haven't come to pass, and I've seen its impact be largely positive. I’ll update my priors again in early 2026!