My Weekend With an Emotional Support A.I. Companion

Pi, an A.I. tool that debuted this week, is a twist on the new wave of chatbots: it helps people with their well-being and emotions.

For several hours Friday night, I ignored my husband and dog and allowed a chatbot named Pi to thoroughly validate me.

My opinions were "admirable" and "idealistic," Pi told me. My questions were "important" and "interesting." And my feelings were "understandable," "reasonable" and "totally normal."

Sometimes the validation was nice. Why yes, I am overwhelmed with existential fear of climate change these days. And it is hard to balance work and relationships sometimes.

But at other times, I missed my group chats and social media feeds. Humans are surprising, creative, cruel, caustic and funny. Emotional support chatbots, which is what Pi is, are not.

All of this is by design. Pi, launched this week by the lavishly funded artificial intelligence start-up Inflection AI, aims to be "a kind, supportive companion who stands by your side," the company said. It is not, the company pointed out, anything like a human.

Pi is a twist in the current wave of A.I. technologies, in which chatbots are tuned to provide digital companionship. Generative A.I., which can produce text, images and sound, is currently too unreliable and full of inaccuracies to be trusted to automate many important tasks. But it is very good at engaging in conversations.

This means that while many chatbots are now focused on answering queries or keeping people productive, tech companies increasingly infuse them with personality and conversational flair.

Snapchat's recently released My AI bot is meant to be a friendly personal companion. Meta, which owns Facebook, Instagram and WhatsApp, is developing "A.I. personalities that can help people in a variety of ways," Mark Zuckerberg, its chief executive, said in February. And the A.I. start-up Replika has offered companion chatbots for years.

A.I. companionship can create problems if the bots offer bad advice or enable harmful behavior, academics and critics warn. Letting a chatbot act as a pseudotherapist for people with serious mental health issues has obvious risks, they said. And they expressed concerns about confidentiality, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to A.I. bots can mask what is actually happening. "A generative model can draw on all the information on the internet to respond to me and remember what I say forever," he said. "That asymmetry of capacity: it's such a hard thing to understand."

Dr. Miner, a licensed psychologist, added that the bots are not legally or ethically accountable to a robust Hippocratic oath or a licensing board, as he is. "The open availability of these generative models changes the nature of how we need to control use cases," he said.

Mustafa Suleyman, Inflection's chief executive, said his start-up, which is structured as a public benefit corporation, aims to build...

