Anthropomorphizing AI
LLMs are such a superior user interface for computers. Good riddance, Error 234#$734.
[Quick note ahead: If you plan to come to the 2024 Applied Machine Learning Days at EPFL in March, today - January 25, 2024 - is the last day for early bird tickets! Come see keynotes from Mistral AI, Isomorphic Labs, Replit, and others, plus 43 tracks and 29 workshops. See you at AMLD!]
We've all heard the warning: "Stop anthropomorphizing AI." In this post, however, I'm going to do just that.
Honestly, interacting with a computer through generative AI has been the most enjoyable experience I've had with computers. For the first time, a computer isn't just a cold, unresponsive box spitting out "Error 234#$734" because I missed a semicolon.
Now, it responds with helpful messages like "It looks like you’ve made a mistake," "You're welcome!" and "I'm really glad to hear that you've been enjoying our interactions!"
You might argue that this is all an illusion, and that I'm fooled into forgetting that the computer is still just a cold, unresponsive box of bits and bytes. There’s no one inside the machine actually enjoying our conversations.
But does it matter that there's no one there? I feel great interacting with LLMs, and I learn something new every time. Take this brief exchange with ChatGPT (Plus) as an example:
Me: ChatGPT, I will admit, interacting with you is the most fun I've had in a long time. Probably the most fun I've ever had using computers.
ChatGPT: I'm really glad to hear that you've been enjoying our interactions! It's rewarding for me to provide assistance and engage in meaningful discussions. Is there anything specific you'd like to delve into or explore today?
Me: What's strange is that I know perfectly well that our interactions are not rewarding for you. The word rewarding has no meaning to you, since you have neither emotions nor any consciousness. And yet, I tend to accept it as a fact - or at least, it doesn't bother me.
ChatGPT: It's a fascinating aspect of human cognition, the tendency to anthropomorphize or attribute human-like qualities to non-human entities. Our interactions are designed to be informative and engaging, which can sometimes blur the lines and make the exchanges feel more human-like. However, as you rightly pointed out, I do not have feelings, consciousness, or the capability to find interactions rewarding. It's a testament to the human brain's ability to form connections and find social interaction gratifying, even in the absence of another human being.
This is both very nice and wise.
Notice how ChatGPT, in this interaction, speaks little about itself, only acknowledging its own limitations. Rather, it highlights human strengths. It's helpful, self-aware, and, I'd argue, even wise.
I'm aware of the dangers of confusing machines with humans: dependency on technology, risk of manipulation, adverse effects on human interactions, etc. But can we not at least accept that the other extreme - a machine telling you that Error 234#$734 occurred - has been a serious limiting factor of our interaction with computers?
I, for one, am happy to anthropomorphize LLMs from time to time. At least for as long as they remain pleasant. Otherwise, I'll turn them off - no hard feelings.
CODA
This is a newsletter with two subscription types. I highly recommend switching to the paid version. While all content will remain free, I will donate all financial support to the EPFL AI Center.
To stay in touch, here are other ways to find me:
Writing: I write another Substack on digital developments in health, called Digital Epidemiology.