📝 Research summary: Our willingness to disclose private information to AI companions
A new paper shows that many people think sharing intimate things with AI companions is worth the risk
Virtual companionship remains one of the most under-explored and under-told stories of the AI boom. Millions of real people are forming real relationships with synthetic human and non-human characters, and those relationships are affecting their real-world lives. Yet many people still react with surprise at the notion, or write it off as a bizarre, niche behavior. It's not a niche behavior, though. It's not going anywhere, and we've only just begun the process of understanding the phenomenon and coming to terms with its implications.
A new paper published in the Aslib Journal of Information Management is one piece of research helping to paint a picture of what these relationships look like. While not explicitly about AI companions, it explores people's willingness to divulge private information to chatbots, and it found a positive correlation between how frequently people use a chatbot for companionship and how willing they are to disclose private information to it. The findings also suggest that this relationship is driven by the value people perceive in the chatbot-as-companion interactions.
The findings are framed in the language of privacy calculus theory, a framework suggesting that people make rational decisions about how much personal information to disclose by weighing the costs and benefits of doing so. The paper also presents evidence that the correlation between frequency of use and private disclosure is mediated by the perceived value and risk of disclosing. In other words, people are willing to share more private information with virtual companions because they believe what they get in return makes it worth it.
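For readers unfamiliar with mediation analysis, here's a minimal sketch of the classic three-regression test (Baron–Kenny style) that studies like this typically rely on. It runs on simulated data; the variable names, effect sizes, and method are my illustrative assumptions, not the paper's actual measures or analysis.

```python
# Toy mediation analysis: does perceived value carry the effect of
# chatbot-use frequency on willingness to disclose? All data here
# is simulated; coefficients are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: heavier companion use raises perceived value,
# which in turn raises willingness to disclose.
frequency = rng.normal(size=n)
perceived_value = 0.6 * frequency + rng.normal(size=n)
disclosure = 0.5 * perceived_value + 0.1 * frequency + rng.normal(size=n)

# Path c: total effect of frequency on disclosure.
total = sm.OLS(disclosure, sm.add_constant(frequency)).fit()

# Path a: frequency -> perceived value (the mediator).
path_a = sm.OLS(perceived_value, sm.add_constant(frequency)).fit()

# Paths b and c': disclosure on frequency plus the mediator. If the
# frequency coefficient shrinks once perceived value is included,
# that's the signature of mediation.
X = sm.add_constant(np.column_stack([frequency, perceived_value]))
path_bc = sm.OLS(disclosure, X).fit()

print(f"total effect c:   {total.params[1]:.3f}")
print(f"direct effect c': {path_bc.params[1]:.3f}")
print(f"indirect (a*b):   {path_a.params[1] * path_bc.params[2]:.3f}")
```

In a setup like this, a large indirect effect alongside a small direct effect is what lets researchers say the relationship "is driven by" perceived value rather than by frequency of use alone.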
There’s another paper out this week that explores a related idea. Researchers from Microsoft ran a five-week longitudinal study in which a control group was asked to use AI regularly and a treatment group was asked to use it for “social and emotional interactions.” As the charts in the paper show, the group that used AI for social and emotional purposes grew more attached to the chatbot they used than the control group did.
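To make that between-groups comparison concrete, here's a toy sketch of how a widening attachment gap over five weeks might be computed. This is not Microsoft's analysis; the rating scale, group sizes, and trends are all invented for illustration.

```python
# Toy illustration: weekly self-reported attachment (1-7 scale) for
# a control group vs. a group prompted to use AI socially and
# emotionally. All numbers are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
weeks, n_per_group = 5, 100

# Hypothetical trajectories: the social/emotional group's ratings
# drift upward faster than the control group's.
control = 3.0 + 0.05 * np.arange(weeks) + rng.normal(0, 0.5, (n_per_group, weeks))
treatment = 3.0 + 0.30 * np.arange(weeks) + rng.normal(0, 0.5, (n_per_group, weeks))

for week in range(weeks):
    gap = treatment[:, week].mean() - control[:, week].mean()
    print(f"week {week + 1}: attachment gap = {gap:+.2f}")
```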
Both papers had me reflecting on two recent stories from the news. The first was the episode of The Daily a few weeks back in which Kashmir Hill was interviewed about her piece on a woman who fell in love with ChatGPT. The piece profiles the real relationship a woman has with a character she created and shaped using ChatGPT, and it describes the impact that relationship has had on her very real relationship with her husband.
I found the written story to be fascinating, but the audio shared in the Daily episode added an entirely new layer. The emotion in the voice of Ayrin, the subject of the piece, shows how real her feelings are, but perhaps to an extent that does the broader subject of AI companionship a disservice. Ayrin, candidly, doesn't come across as a particularly well-adjusted person in the audio story, and I imagine many listeners wrote off the behavior as unimportant or uncommon as a result.
The other story I couldn’t help but think of was this viral post from Bluesky earlier this week, sharing screenshots from an article by Business Insider co-founder and former CEO Henry Blodget. He’s attempting to spin up a new business with all-AI employees, and he couldn’t help but confess to crushing on one of them.
The dunks were swift and hilarious, but the underlying anecdote is telling. In most people’s privacy calculus, getting dunked on for voluntarily announcing on the internet that they hit on a virtual coworker probably wouldn’t even register as a risk. But the ridicule Blodget received does illustrate the kind of risk some people likely do weigh when choosing how intimate to get with a chatbot: What if people found out? Does this make me sad and pathetic? Is this morally OK?
Much like the disconnect between porn’s ubiquity on the internet and the lack of public discourse about it, a gap will likely emerge between how common AI companionship is and how much we discuss and understand it. But this paper suggests that the privacy tradeoff dynamics are in place for it to become a more and more common experience.
The downstream implications for human-to-human relationships seem profound and worth keeping an eye on.