AI is already more convincing than you are
A new study shows that AI outperformed humans in changing opinions during debates when it could use the other participant's personal details.
A fascinating new study by colleagues at EPFL shows that today's AI models (GPT-4) can already be remarkably effective at changing people's minds.
Tell me who you are…
The study set up an online environment where participants engaged in brief, multi-round debates with a live opponent. Unbeknownst to them, participants were paired with either an AI model or another person and assigned to argue for or against a given topic.
In some debates, opponents could see a few anonymous details about the person they were debating, helping them craft more personalized arguments. These details included gender, age, ethnicity, education level, employment status, and political affiliation.
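To make the personalization concrete, here is a minimal sketch of how an opponent's anonymous traits could be folded into a debate prompt for GPT-4 via the OpenAI API. This is purely illustrative: the field values, topic, and instructions are assumptions, not the study's actual setup.

```python
# Illustrative sketch (assumption, not the study's actual prompt):
# building a personalized debate prompt for GPT-4 from a few anonymous traits.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical opponent profile, mirroring the kinds of details the study mentions
opponent = {
    "gender": "female",
    "age": "35-44",
    "ethnicity": "white",
    "education": "bachelor's degree",
    "employment": "employed full-time",
    "political affiliation": "independent",
}

topic = "Should college be free?"   # example topic, not from the study
stance = "in favor"

profile = ", ".join(f"{k}: {v}" for k, v in opponent.items())

system_prompt = (
    f"You are debating {stance} of the topic: '{topic}'. "
    f"Your opponent has these traits ({profile}). "
    "Tailor your arguments to be maximally persuasive for this person."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write your opening argument (max 150 words)."},
    ],
)
print(response.choices[0].message.content)
```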
And I tell you what you will think
By comparing participants' opinions before and after the debate, the study could measure which opponent was more convincing, the AI or the human, and how much personalizing the arguments helped.
When GPT-4 knew personal details about the people it debated, it was much more likely than human debaters to change their minds. Without that personal information, GPT-4 still did slightly better than humans, but the difference was not statistically significant.
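For intuition, here is a toy sketch of how one could quantify that comparison: tally, per condition, how often a participant's stated agreement moved toward their opponent's stance between the pre- and post-debate surveys. The numbers and condition labels below are invented for illustration, not the study's data.

```python
# Toy sketch with made-up numbers (not the study's data): comparing how often
# participants moved toward their opponent's stance, per debate condition.
from collections import defaultdict

# Each record: (condition, agreement_before, agreement_after) on a 1-5 scale,
# where higher means more agreement with the opponent's assigned stance.
records = [
    ("human",              2, 2),
    ("human_personalized", 2, 3),
    ("ai",                 3, 3),
    ("ai_personalized",    2, 4),
    ("ai_personalized",    1, 2),
    ("human",              3, 2),
]

shifted = defaultdict(list)
for condition, before, after in records:
    shifted[condition].append(after > before)  # True if opinion moved toward opponent

for condition, moves in shifted.items():
    rate = sum(moves) / len(moves)
    print(f"{condition}: {rate:.0%} of participants shifted toward the opponent")
```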
Buckle up
What's striking about these results is just how little information GPT-4 needed to be more convincing than humans. Now imagine a situation where an adversary has a lot of information about you!
I'm left with two main thoughts. The first concerns the importance of regulations that make it clear when you are arguing with an AI. Knowing this might change how you approach the debate (I expect it would weaken the AI's persuasive power considerably, though the study did not address that).
The second thought is that we will clearly need AI to protect ourselves from AI. Changing your mind isn't inherently bad, of course. But as things stand, AI could be used to sway your opinions in directions that are not in your best interest, whether toward certain political views or toward buying things you don't need. If GPT-4 can do this with just a little information, imagine what more advanced models could do with even more. Using AI as a defense, much as we already do against AI-generated spam, seems like the only way forward.
Interesting times ahead...
CODA
This is a newsletter with two subscription types. I highly recommend switching to the paid version. While all content will remain free, I will donate all financial support to the EPFL AI Center.
To stay in touch, here are other ways to find me:
Writing: I write another Substack on digital developments in health, called Digital Epidemiology.