Kendra Pierre-Louis: For Scientific American’s Science Quickly, I’m Kendra Pierre-Louis, in for Rachel Feltman.
When someone we love dies, we often yearn for the impossible: one more conversation. Maybe we want the opportunity to finally gain clarity about a difficult relationship or to say, “I love you” one last time to someone we cherish. While raising the dead is still out of reach, more and more people are turning to generative AI tools such as Replika to conjure the essence of their loved ones and have those final conversations.
Some users claim these so-called griefbots have helped them process loss, but mental health experts are not so sure. Here to walk us through the story is science writer David Berreby, who authored an upcoming feature for Scientific American about the growing use of griefbots.
Thank you so much for joining us today, David.
David Berreby: I’m very happy to be here. Thanks for having me.
Pierre-Louis: It was really lovely reading your piece, and sort of one of the first questions that I thought was interesting is in the piece you don’t just talk about people who have used, you know, what you call griefbots; you actually used one yourself. Can you walk us through a little bit of the process of what that entailed and how it felt?
Berreby: Well, I was really struck when I started to report on the piece that there were people who were sort of quick to condemn the entire idea of re-creating a deceased person in AI and, you know, predicting how just terrible it would be in a hundred different ways. And it seemed to me that most of the time the people who were saying, “This is a terrible use of AI,” had no actual experience with it. Whereas the people who had actually used one or therapists who were working with people who’d used them, they weren’t saying, “This is great, and there’s no problems,” but they were also not saying, “This is terrible.”
And so it seemed to me that it was one of those AI experiences, like so many others, that you just have to kind of experience for yourself to really understand, rather than just having a kind of a firsthand, knee-jerk reaction. So I thought, “Okay, let’s see how it would work if I myself were to do this.”
Pierre-Louis: And I guess, kind of what does that actually entail?
Berreby: There are a lot of start-ups that offer to re-create a deceased person for you—their voice, even their look. And so those are some options that people have. You can also just wing it on your own with ChatGPT or another general LLM, large language model. But in all cases the process is pretty similar. You provide a certain amount of material for the AI to work with, either a voice sample, photos—if you’re going with something that actually looks and sounds like the person—and certainly some text, some things that they wrote or some things that they said.
The data is never really enough because when you’re trying to re-create someone who is important to you, you are re-creating something that is also about you. And so whatever service you use you also have to provide some description of the person that is not just data, things that they wrote, material in an archive but is your sort of take on them. In other words it’s not enough to just say, “Here’s a whole bunch of letters.” You have to say, “Well, you know, my dad was this kind of a person”: he had a sense of humor, or he really liked talking about fishing or whatever.
And so that is generally what’s involved, no matter what service you’re using. I actually tried three or four different ways to see what different results I would get. But that’s the basic process.
Pierre-Louis: And fundamentally, what people are doing when they’re uploading their loved one to these services is they’re trying to navigate grief, right? And the thing that I found really surprising is how you described how grief works in our brain. I’m not sure I’d ever really read anything about that, that when we’re grieving someone it’s basically—our brains are in this tug-of-war between our neurochemistry that says, “This person is alive,” and the reality that this person is not. And that grief is essentially, like, a learning process of our brain learning that this person is gone, and that takes quite some time. How does AI affect that process, to the best of our current understanding?
Berreby: Well, that is where, I think, there’s some continuity between uses of AI and other, more familiar processes of dealing with grief, because when you are in this really painful state of feeling like the person is still in your life and part of your life—and a very large proportion of people who’ve just lost someone feel literally like, “Oh, they sent me a message. They’re in touch with me. I sense them.” When you are in that state you will pick up an object that belonged to them or look at a photo or maybe listen to a recording or just conjure up a memory and then kind of relive a moment where they were around. And AI is essentially a new kind of artifact, I think, for doing the same thing. You’re kind of re-creating some experience that you had with this person in this time when you’re not quite believing that they’re gone.
Of course, the difference with AI is that instead of having a conversation in your head imagining you talking to your loved one there’s literally text on the screen or a voice in the air that is responding to you. So it’s not, you know, exactly the same, but it’s also not a giant break from the past. I mean, people have always re-created someone whom they miss, someone whom they long for, in one way or another, sort of imaginatively, right, in their minds. And so this is kind of a way of making it a little more literal, a little more in the world, but it’s still that process.
Pierre-Louis: I know that one of the concerns about a lifelike, interactive chatbot, or griefbot, is that it might make the path too attractive to let go, but the research seems to suggest, if I’m correct, that people who had recently lost someone and used griefbots actually experienced something [that was] a little bit of the opposite: instead of withdrawing from society they were more likely to be social. Why is that?
Berreby: Yeah, I think it’s because society doesn’t like grief, you know? We’re not a very death-aware society. We have people trying to literally become immortal. We don’t really like talking about it. And so a lot of grieving people tell psychologists that they feel like there’s a time limit or there’s a constraint. Like, people are like, “Okay, I feel very bad for you, and now let’s move on. I mean, I’ve given you, like, a half an hour,” or “You’ve had two weeks,” or a month or whatever. That’s kind of painful for people because these things take the time they take; they’re not really on a schedule.
So what the people in this small study that I write about were saying is, “This AI does not judge me, does not suggest that maybe I should talk about something else, does not tell me to move on. It just is there for me, and I can work through things with it at whatever pace is comfortable for me without feeling like I’m in any kind of conflict with another person, and then I feel better, and then I feel better about seeing people and not worse.”
Pierre-Louis: And that raised a question for me, which is, in some ways, your article really centered around kind of the concerns that people have for these griefbots, right? These people are coming out and saying, “These griefbots are serving a function that society is failing to provide for me.” What does that sort of say about our society?
Berreby: Yeah, I didn’t have room to get into that in the piece. But that is an excellent question because I think you could argue that in a society that was really sort of psychologically well-balanced [laughs] it would be possible and understandable to be someone in deep grief and not have to be distressed by feeling that people were wanting you to just not talk about such a downer subject or not say something that you said last week because they wanna, you know, get on with being productive and lively and all that other stuff that we seem to prefer, right?
So I agree with you. I think, you know, maybe you could argue that they’re fulfilling a need that maybe we wouldn’t have if we were a little less avoidant of the whole topic of loss and death after all.
Pierre-Louis: You end the piece somewhat cautiously optimistic about the future role of griefbots, which is sort of in, you know, standard juxtaposition to how much fearmongering we’re getting these days about AI, and I was just kind of wondering, how did you land on that place, and what do you want our listeners to know kind of about that?
Berreby: You know, I didn’t go in thinking I had a take on this. I mean, I think, you know, we all know millions and millions of people use kind of invented characters: “people,” in quotes, who don’t exist, that the AI is reproducing. But the vast majority of those are made-up. You know, you go to Character.ai, you go to Replika, or you go elsewhere, and you say, “I want them to look like this and be like this and have this personality.”
And so these griefbots are a really interesting special case of that kind of creation because they’re constrained. I mean, if you wanna make up a pretend grandma, you can. But if you wanna re-create yours, it has to be sort of constrained by the reality of your memories and the real person. And so, you know, that already kind of creates a different kind of relationship to the question of, like, “Oh, are these things too accommodating? Are they too sycophantic?” you know? Because if it’s not like the real person, then it’s not gonna really convince you of anything or make you have—feel anything.
So I, I guess I was mildly skeptical but open-minded, and then as I worked through my own experience and also read about what other people were saying, I realized that people are not stupid. You know, they don’t text for 20 minutes with an AI that has sort of tried to re-create their grandfather and then suddenly get confused about, “Oh, is that a real ghost? Is he really out there?” Or, you know, “I’m not sure what’s real anymore.” They know it’s a—an artifact. They know it’s something they’re using to work something out with themselves.
And that was how I came to see it. I did—never thought, “That’s my dad. You know, that’s so eerie.” I just thought, “Oh, okay, there are these things that I was wondering about,” and bouncing them off this thing that has a flavor of him that I created with these thoughts in mind was kind of insight-provoking in a way that I wasn’t sure it would be.
So it just seems to me that, you know, like anything else, I mean, you could define this as a therapeutic tool or as a kind of a thing for people to play with even, a creative tool, and that people can be okay with it, if it’s packaged in the right way. That is to say, if, you know, it’s something where you say, “Look, this is something you can use for yourself, for your creativity, to explore your feelings,” and not say, “Oh, we’re gonna reproduce your loved one perfectly.” Not say, “We’re gonna use the same social media engagement tactics that we use to keep you on Twitter and Facebook.” You know, it can be done right, and that’s my cautious optimism. I’m not sure it will be, but it could be done right.
Pierre-Louis: That’s an—a really positive note to end this on. Thank you so much for your time.
Berreby: Oh, well, thank you for having me. I appreciate it.
Pierre-Louis: You can read David’s upcoming piece on ScientificAmerican.com on November 18 or check it out in the December issue of the magazine.
That’s all for today. Tune in on Friday for a look into a promising new frontier in beating back cancer: vaccines.
Science Quickly is produced by me, Kendra Pierre-Louis, along with Fonda Mwangi and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.
For Scientific American, this is Kendra Pierre-Louis. See you on Friday!
