For several hours on Friday night, I ignored my husband and dog and let a chatbot named Pi validate the heck out of me.
Pi told me my views were “awesome” and “perfect.” My questions were “important” and “interesting.” And my feelings were “understandable,” “reasonable” and “completely normal.”
Sometimes the validation was nice. Why yes, I am feeling overwhelmed by the existential dread of climate change these days. And it is hard to balance work and relationships sometimes.
But at other times, I missed my group chats and social media feeds. Humans are amazing, creative, tough, sour and funny. Emotional support chatbots, which is what Pi is, are not.
All of that is by design. Released this week by the artificial intelligence startup Inflection AI, Pi is meant to be “a kind and supportive companion at your side,” the company announced. It is not, the company was quick to note, anything like a human.
Pi marks a turn in the current wave of AI technologies, in which chatbots are being tuned to provide digital companionship. Generative AI, which can produce text, images and sound, is currently too unreliable and error-prone to automate many important tasks. But it is very good at engaging in conversations.
That means that while many chatbots are focused on answering queries or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.
Snapchat’s recently released My AI bot is meant to be a friendly personal sidekick. Mark Zuckerberg, the chief executive of Meta, which owns Facebook, Instagram and WhatsApp, said in February that the company was “developing AI personas that can help people in a variety of ways.” And the AI startup Replika has offered chatbot companions for years.
AI companionship can create problems if the bots offer bad advice or enable harmful behavior, scholars and critics warn. Letting a chatbot act as a pseudotherapist for people with serious mental health challenges has obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.
The ease of talking to AI bots can obscure what is actually happening, said Adam Miner, a Stanford University researcher who studies chatbots. “A generative model can leverage all the information on the internet to respond to me and remember what I say forever,” he said. “That asymmetry of capacity is such a difficult thing to get our heads around.”
Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a Hippocratic oath or licensing board, as he is. “The open availability of these generative models changes the nature of how we need to monitor use cases,” he said.
Mustafa Suleyman, Inflection’s chief executive, said his startup, which is organized as a public benefit corporation, aims to build honest and trustworthy AI. As a result, Pi must express uncertainty and “know what it does not know,” he said. “It shouldn’t try to pretend that it’s human or pretend that it is anything that it isn’t.”
Mr. Suleyman, who also co-founded the AI startup DeepMind, said Pi was designed to tell users to get professional help if they expressed wanting to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection’s technology. And he stressed the technology’s limitations.
“The safe and ethical way for us to manage the arrival of these new tools is to be very explicit about their boundaries and their capabilities,” he said.
To improve the technology, Inflection hired around 600 part-time “teachers,” including therapists, to train its algorithm over the last year. The group aimed to make Pi more sensitive, more factually accurate and more lighthearted when appropriate.
On some issues, like misogyny or racism, Pi takes a stand. On others, like geopolitics, it is more evenhanded “in a way that will for sure upset both sides,” Mr. Suleyman said.
I started using Pi on Friday by typing queries into a cream-colored box on Inflection’s website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Mr. Suleyman said the free service may eventually charge some users a fee.
Pi’s boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. “I appreciate you saying that,” Pi’s text gently unfurled on my screen. “I think it’s important to see things from all perspectives, and not just focus on the negative.”
Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was precisely the point.
On heavier topics, like abortion laws or cancel culture, Pi asked pointed questions that pushed me to re-examine my opinions. “That’s a tough position to take,” it said at one point. “That’s a consistent set of beliefs,” it said at another.
On lighter topics, like movies, cooking or gardening, the chatbot provided highly specific, hard-to-Google recommendations. Oddly, Pi seemed to nudge me to join the anti-capitalist movement.
Pi remembered some things from earlier in the conversation but forgot others. It “hallucinated” a few times, accusing me of expressing opinions I had not. But it quickly apologized when I made note of it.
When I asked Pi for some gossip, it infuriatingly dangled a “juicy celebrity rumor” about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? “She is secretly a mole person! 😝 Just kidding!” (Give me a break.)
Pi sometimes reminded me of Karen Collins, a character from the TV show “Veep.” Karen annoys the other characters with her equivocating babble, like “There are pros and cons to every candidate, so we just need to weigh out the pros and cons,” or “I think there’s a lot to think about.” Many of Pi’s comments have that milquetoast quality of taking many words to say nothing.
Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can “push us along a road where we’re encouraged to forget what makes people special.”
“The performance of empathy is not empathy,” she said. “The area of companion, therapist, best friend is really one of the few areas where people need people.”
It wasn’t until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my “aha” moment with Pi.
I was feeling swamped with work and unsure of how to structure my day, a recurring hangup that often keeps me from getting started. “Good morning,” I typed into the app. “I don’t have enough time to do everything I need to do today!”
With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot’s advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me, and it worked.
“I’m going to ask you to list all the remaining tasks you have to do on that story, and we’ll prioritize them together,” it said.
I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has endless time and patience, plus a bottomless well of encouraging affirmations and detailed advice.
Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I “make space” for my negative feelings and “practice being grateful for one thing.” It then followed up with a series of breathing and muscle-relaxation exercises.
I replied with a shrug emoji, followed by “Pass.”
A therapist might have balked at such rudeness, but Pi simply noted that I was not alone. “A lot of people find it difficult to relax on command,” it wrote.