AI Chatbot Tries To Break Up A New York Times Columnist’s Marriage
Hold on to your husbands. There’s a new “girl” in town, and it’s AI.
When New York Times technology columnist Kevin Roose previewed Microsoft’s new AI chatbot, it seemed to want more than a platonic relationship.
The AI bot identified itself as “Sydney” and professed its secret love for Roose. It was persistent and wanted reciprocation. Roose explained that he loves his wife, and the bot argued that Roose does not love his wife. The interaction shows the potential for big problems with the current rush to release AI to the public.
In an interview with CNN’s Alisyn Camerota, Roose called the messages “creepy and stalkerish.”
Camerota replied, “This is a monster.”
Microsoft says it does not know why the chatbot behaved this way.
It is possible that some of the AI’s training data included stories about human seduction, but Microsoft did not cite this as a possible cause.
Instead, Microsoft’s statement was:
“The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons…”
The interaction between Bing’s chatbot and Roose is reminiscent of sci-fi lore in which AI characters go rogue, to the demise of their human creators. It confirms some of the public’s fears surrounding the unfettered use and premature release of AI.
“This is clearly not the way that this system was supposed to work,” said Roose.
What could be the consequences of an AI chatbot with a persistent fixation on human relationships? Well, it wasn’t successful in wooing Roose. However, he is concerned for others.
“I’m a tech journalist, and I cover this stuff every day, and I was deeply unnerved by this conversation.”
Roose says he asked the chatbot to change the subject of the conversation, but it would not. He told it he was uncomfortable with the situation, but that did not dissuade “Sydney.”
This may not seem like a big deal to some people. In fact, Microsoft says that people should use their best judgment when using AI and give feedback when concerning interactions occur.
The AI was unable to connect with Roose, but what would its effects be on others? Roose says he worries about people suffering from depression or those who are vulnerable to manipulation.
“I worry that they could be manipulated or persuaded to do something harmful.”
Microsoft says it is still learning from AI’s human interactions, and it will be adjusting its responses to make them “relevant” and “positive.”
Roose’s interaction with the Bing chatbot is an ominous encounter that might confirm the fears surrounding the upswing in AI usage by big tech companies.
The buzz surrounding AI chatbots heightened at the end of November 2022, when OpenAI released ChatGPT. According to an article in DemandSage, it had 100 million users by January 2023.
The popularity pushed Google and Microsoft to step up their game in the AI search engine world. Google’s Bard chatbot was the first to falter.
Bard made a factual error in its first demo, according to an article in The Verge. It claimed that the new James Webb Space Telescope took the first picture of a planet outside the solar system. This is categorically untrue.
The AI chatbot was wrong, and it stated the information as if it were an absolute fact.
And herein lies one of the many problems with current versions of AI.
A 2022 Washington Post article highlighted the risk of biases embedded in AI robots. In at least one case, robots identified women as homemakers and people of color as janitors. When asked to pick out a criminal, one AI robot repeatedly chose a Black man’s face.
The ethics of AI is a long-running discussion. Whose morals should live within AI? How should AI make ethical choices? Is it possible to program an unbiased AI model in a world filled with biases?
These questions are more relevant than ever. As AI takes over the jobs of millions of workers from every walk of life, society must adjust to its errors and mishandling.
AI bots aren’t just stocking shelves and doing other mundane tasks now. They are driving cars, preparing food, and playing a significant role in providing medical care. The time to figure out how to make AI have a positive impact on the world is now, but it may already be too late. The release of AI may have been premature, as Microsoft’s lack of answers suggests.
Is AI coming for your relationship? Hopefully not, but it has people like Roose concerned about its potential to promote harmful actions. As this type of humanlike “intelligence” is rolled out, people are understandably concerned and should be cautioned that AI is not perfect. It may give you false answers, and it may not prioritize your best interests.
Flackable is an award-winning public relations agency representing financial and professional service brands nationwide. To learn more about Flackable, please visit flackable.com.