Apologizing to Bots
This is the latest in an unintentional series I’m apparently doing, in which I write about the contemporary debate over the use, usefulness, impact, and harms of generative Artificial Intelligence (genAI), inspired by weeks-old podcasts. I write these things about old episodes simply because my listening backlog regularly runs weeks or months long.
This time, I’m writing about an interesting passage from an episode of This American Life entitled My Other Self. The bit I’m discussing starts around 27:45 and runs for about two minutes. You can listen to it directly on the episode page or, as the saying goes, wherever you listen to podcasts.
The part I’m interested in happens during a segment by reporter Evan Ratliff, in which he is experimenting with hooking up ChatGPT to ElevenLabs’ voice synthesizer and a telephone number. He uses this stack to call friends, waste scammers’ and telemarketers’ time, and entertain his father. In the relevant portion of the story, he invites strangers to call the number to participate in a research interview discussing their experiences with AI. Of course, none of the callers know ahead of time that they are speaking to a bot.
A remarkable exchange occurs with a survey respondent named Stephanie. Answering a question from the bot, she says she has probably interacted with AI without even knowing it. Then, half-joking, she exclaims, “Jesus! I’m probably talking to an AI right now!” The bot asks her why she said that, and she explains that the bot is speaking a bit awkwardly. The interaction is brief, and they quickly move on.
The truly remarkable moment happens just afterward. Stephanie calls back after the initial call ends and, without exactly apologizing, expresses clear regret for hurting the interviewer’s feelings by calling them a bot.
To me, this illustrates an underappreciated effect of the proliferation of genAI agents, particularly those foisted on the public without notice or consent. Clearly, Stephanie was uncertain whether she was speaking with a bot. After she got off the phone, she did some kind of moral reckoning: what would it mean if she were in fact talking to a bot? What if she weren’t? What obligation did she feel she had to a possibly human interviewer whom she’d just suggested might be a computer? Conversely, what were her obligations to a computer that she had been led to interact with under false pretenses?
However she happened to arrive at her decision, she concluded it would be worth it to call back and try to smooth things over. And she did that knowing there was a chance she was offering a token of respect to a bot that not only couldn’t appreciate it but, in fact, didn’t deserve it.
I think that the more the public is forced or tricked into using genAI, the less common Stephanie’s reaction will be. I’m not talking specifically about apologizing to ChatGPT; I’m talking about considering the feelings of an interlocutor whose humanity can’t be conclusively established.
Any time you interact with an entity that could be either human or AI, you have to decide whether to treat it like a person or like a computer. As long as your powers of discrimination are imperfect, you run the risk of treating a person like a computer or a computer like a person. In my view, though, these two errors are not morally equivalent. Treating a computer like a person might be a little embarrassing, but treating a person like a computer is potentially harmful. The harms are not necessarily large: failing to apologize to someone you’ve mistakenly accused of deceiving you isn’t likely to do much more than hurt their feelings. The inverse case, in which you apologize to a computer that actually was deceiving you, costs only the effort that goes into the apology.
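If it helps to see the asymmetry spelled out, here is a minimal sketch of the argument as an expected-cost comparison. The probability and costs below are made-up illustrative values, not anything from the episode; the point is only the shape of the comparison.

```python
# A toy expected-cost comparison of the two errors.
# All values are illustrative assumptions, not measurements.

P_HUMAN = 0.5         # your uncertain belief that the other party is a person
APOLOGY_EFFORT = 0.1  # cost of calling back, paid whether or not it's a bot
HURT_FEELINGS = 10.0  # harm of treating a person like a computer

expected_cost_apologize = APOLOGY_EFFORT          # small and certain
expected_cost_withhold = P_HUMAN * HURT_FEELINGS  # large, scaled by your uncertainty

print(f"apologize:       expected cost {expected_cost_apologize:.2f}")
print(f"don't apologize: expected cost {expected_cost_withhold:.2f}")

# Apologizing wins whenever P_HUMAN > APOLOGY_EFFORT / HURT_FEELINGS,
# which with these numbers is any chance above 1% that you're talking to a person.
```

Under these purely assumed numbers, even a small chance that the other party is human makes the polite choice the cheaper one.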
I think a world where people don’t treat each other like people because they might be bots is unlikely to be a good world, much less a happy one.
