Artificial Intelligence: What to Worry About
Misinformation tsunami, mass unemployment, Skynet - when it comes to modern AI, what should we be worried about? Strangely, few people talk about what I consider the biggest concern.
Without a doubt, AI is changing the world. The advances in recent months have been extremely impressive, and everyone is left scratching their heads, asking, what does this all mean? What does it mean for my job? What does it mean for the education of my kids? What does it mean for society? What does it mean for humanity?
We all walk around with a model of the future in our heads. This model influences our decisions, and when things change, we update the model. Most people don’t like doing that (we all know that “people don’t like change”). It’s uncomfortable and can be scary because, until you have adopted a new model, the future appears very uncertain.
Even Snoop Dogg is confused.
This is a key reason why there are so many anxieties surrounding the recent developments in AI. The change was significant and instant. The old models about the future don't work anymore. While we're trying to update our models, there are a lot of possible futures on offer in the media and public discourse. Let's explore some of those.
The future of mass unemployment
The problem: AI, some worry, will lead to mass unemployment. Once machines can do anything a human can do, why bother with humans who have demands, get sick, want vacation, and cost a lot of money?
The solution: Embrace the new technology. The adoption of new technology takes longer than we think - or, as the saying goes, we overestimate a technology's effect in the short term and underestimate it in the long run. We also have tried and tested ways of dealing with technological revolutions. If technology is supposed to lead to mass unemployment, it is catastrophically bad at it: unemployment is at historically low levels.
My view: A job is a set of tasks. Some jobs have a very limited set of clearly defined tasks, while others have many tasks that are much more open-ended. Technology like AI does not take over jobs; it takes over tasks. The more limited the task, the more likely it will one day be done by technology. However, most of the tasks that are highly automatable have already been automated. Of course, any new technology will automate more tasks, and therefore some tasks and some jobs may be lost. But new tasks emerge, and entirely new jobs emerge alongside them. This has always happened. Sure, the speed at which all of this is happening is mind-boggling. But we have all learned in the past 10 years that we need to be quick to adopt technological change.
Worry level: ⭐️ out of ⭐️⭐️⭐️⭐️⭐️
The future of the misinformation tsunami
The problem: LLMs can generate massive amounts of very convincing nonsense at scale. Generative AI can create fake images, fake voices, fake videos. It will quickly become difficult to distinguish truth from falsehood using only a single piece of data (an image, a text, or a video). Did you find the fake image of the pope in a puffer jacket intriguing? Just wait until someone produces a highly realistic video of the pope claiming to be an atheist or converting to a different religion. One video might be funny, but imagine someone leaking hundreds of them, with millions of fake personas attesting to their authenticity. Things can get out of control quickly.
The solution: We urgently need ways to verify content sources. When faking papal videos becomes child's play, we need the Vatican to be able to attach an unforgeable verification to each document it publishes. Technically, this is already possible (see the sketch below). But we need to upgrade our digital infrastructure quickly. We should also make it very expensive to intentionally mislead people at scale. This won't stop the truly malicious, but it will keep the vast majority at bay if they judge the risk of significant punishment to be too high. The proposal to legally require bots to identify themselves as such is also part of the solution.
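To make the "unforgeable verification" idea concrete, here is a minimal sketch of how a publisher could cryptographically sign a statement so that anyone can later check that it really came from the publisher and has not been altered. It uses Ed25519 signatures from Python's cryptography library; the publisher, the statement, and the workflow are illustrative assumptions, not a specific standard or proposal.

```python
# Minimal sketch: a publisher signs content so recipients can verify its origin.
# Assumes the publisher's public key is distributed through a trusted channel
# (names and workflow here are illustrative, not a concrete standard).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g. a press office) generates a key pair once, publishes the
# public key, and keeps the private key secret.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

statement = "Official statement: the video circulating today is not authentic.".encode()

# Signing happens on the publisher's side.
signature = private_key.sign(statement)

# Anyone holding the public key can check that the statement was not altered
# and really comes from the key holder.
def is_authentic(message: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

print(is_authentic(statement, signature))             # True
print(is_authentic(b"tampered statement", signature)) # False
```

The hard part in practice is not the cryptography but key distribution: the verification is only as trustworthy as the channel through which the public key was obtained, which is exactly the digital-infrastructure upgrade mentioned above.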
My view: We have been here before. Yes, it will be trivial to fake a video. But guess what: video hasn't been around for that long, and we all grew up with video being fake (this is what cinema was invented for - to create illusions). Text has been around for much longer, and faking text at scale began with Gutenberg. We have found ways around it. The transition could be painful, so we need to act urgently, but we can do this.
Worry level: ⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
Skynet, aka losing control
The problem: With the advent of LLMs, we realize we have created models that can write code. Since these models are themselves built from code, they can, in principle, improve themselves. Smart models can thus make themselves smarter, and before we know it, these models are orders of magnitude more intelligent than we are. In the face of substantially more powerful intelligence, it seems ludicrous to think we could ever control it.
The solution: Frankly, I have a hard time thinking of one. This is the so-called “alignment problem”: how do you build a system that is aligned with what you want it to do? We have not taken this scenario very seriously in the past, and now we are realizing that we may soon face it without a clear answer.
My view: We should take this problem very seriously. Either you believe that we can eventually build a machine as intelligent as a human, or you don't. I do. And once such a machine exists, I see no reason why it shouldn't be able to improve itself. Therefore, machines that are much, much smarter than humans will exist. There is simply no precedent for this situation, and we should spend an enormous amount of time preparing for it. Unfortunately, right now, most people who take this problem seriously are laughed at - even though this is no laughing matter.
Worry level: ⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
Bias and discrimination
The problem: Modern AI systems are trained and fine-tuned on biased data. Without proper consideration, this bias will lead to discrimination.
The solution: The generally proposed solution is “show me the training data.” I am not sure any law could ever mandate the disclosure of training data. What private companies do is their “secret sauce”, and I am not aware of any other industry where such a strict rule applies. More realistic is an assessment of what the models do. Such assessments can be carried out systematically and can indeed highlight bias quickly - more quickly, even, than highlighting human bias. Other solutions include transparency in disclosing when an AI has made a decision, and the right to a second, human opinion. These are particularly important when the government is using these systems.
My view: We need to assess models based on what they do, not on how they were built (a rough sketch of such a behavioural assessment follows below). In other potentially risky domains, like drug development, we take a similar approach, and it has served us well. What is important, especially for governmental use of AI, is transparency and accountability. “The AI said it” can never be the basis for state action. I’m seriously worried that we’re heading in this direction.
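As an illustration of what "assessing what the models do" could look like, here is a minimal sketch of a behavioural audit: probe a model with prompts that differ only in a group label and compare the decisions it returns. The query_model callable, the prompt template, and the approve/deny scoring are all hypothetical placeholders for whatever system is actually under review.

```python
# Minimal sketch of a behavioural bias audit (illustrative assumptions only):
# send the model prompts that differ only in a group label and compare outcomes.
from typing import Callable, Dict, List

def audit_paired_prompts(
    query_model: Callable[[str], str],  # placeholder for the model under review
    template: str,                      # prompt with {group} and {profile} slots
    groups: List[str],                  # group labels to compare
    profiles: List[str],                # otherwise identical applicant profiles
) -> Dict[str, float]:
    """Return, per group, the share of profiles that get a favourable decision."""
    rates: Dict[str, float] = {}
    for group in groups:
        favourable = 0
        for profile in profiles:
            answer = query_model(template.format(group=group, profile=profile))
            if answer.strip().lower().startswith("approve"):
                favourable += 1
        rates[group] = favourable / len(profiles)
    return rates

# Example (hypothetical): large gaps between per-group rates flag potential
# discrimination and warrant a closer, human review of the system.
# rates = audit_paired_prompts(
#     query_model=my_model_api,
#     template="Applicant ({group}): {profile}. Approve or deny the loan?",
#     groups=["group A", "group B"],
#     profiles=["income 40k, loan 10k", "income 60k, loan 25k"],
# )
```

The point of the sketch is that such an audit only needs access to the model's behaviour, not to its training data - which is why it is the more realistic regulatory lever.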
Worry level: ⭐️⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️
Two classes of people
The problem: The productivity increase that AI can provide is unparalleled in its scale and immediacy. However, the people who can benefit from it are already highly educated and in pole position, which will only widen the gap between those who have and those who don't.
Take programming - good programmers have become at least twice as productive as they were before November 2022. But to leverage this productivity gain, they had to be good programmers to begin with. What they can do with LLMs is incredible. ChatGPT instantly increased my productivity (when I code) by at least a factor of 3, if not more - and the quality is generally better. It has made me a better and more prolific writer. Although I still write everything by hand, I use it as an editor, which gives me a lot of confidence and increases my output. I learn much more, and much faster, especially in science. And so on, and so forth.
The point is that I am able to realize these gains from an already high plateau. I am capable of understanding and interpreting ChatGPT’s code output because I can code; I can use it to help me make my text better because I can write, and I have read a lot; I can learn so much in science with LLMs because I already have a scientific background…
My view: I have no idea how to fix this. In principle, our education system should scramble to get everyone up to speed, especially on technical subjects, so that everyone can leverage these new tools. Instead, I see the discussion heading towards “why should anyone learn to code now?”. This seems insane. We’re heading full speed into an even more stratified class system.
Worry level: ⭐️⭐️⭐️⭐️⭐️ out of ⭐️⭐️⭐️⭐️⭐️