Editor’s Note: Hi y’all! I’m giving video a try this week. If you like it, I might keep doing it. Let me know what you think!
Late last month, OpenAI gave voice to its already iconic bot, ChatGPT. Some people, including the actress herself, thought an early demo sounded just like Scarlett Johansson, a similarity the company welcomed – at least at first. The AI maker tried and failed to connect Johansson to the project, but comparisons to her automated love interest in Spike Jonze’s Her were inevitable. They were also dead on.
OpenAI is now warning that you could fall hard for ChatGPT. And they're not quite sure what will happen if you do.
So, what did you think? Be honest.
On August 8th, the company released a safety analysis that listed “anthropomorphization and emotional reliance” among the potential risks posed by its latest update, GPT-4o. Under the heading “Societal Impact,” OpenAI expands on how the addition of a human-like voice might heighten users’ susceptibility to forming emotional attachments to the bot.
The report cites cases of software testers forming apparent emotional connections with the chatbot.
“... We observed users using language that might indicate forming connections with the model. For example, this includes language expressing shared bonds, such as ‘This is our last day together.’ While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.” — OpenAI
What’s wrong with falling in love with AI?
Large Language Models, like the one powering ChatGPT, are constantly hallucinating and making up wild shit with no regard for the truth. Not only are these known liars on a permatrip, they’re also sexist, racist, and homophobic. Of course, OpenAI knows all of this, which is why they’re afraid their uncanny voice interface will elicit “misplaced trust” in a platform known for spreading misinformation.
“Recent applied AI literature has focused extensively on ‘hallucinations’, which misinform users during their communications with the model and potentially result in misplaced trust. Generation of content through a human-like, high-fidelity voice may exacerbate these issues, leading to increasingly miscalibrated trust.” — OpenAI
As if that weren’t bad enough, OpenAI forecasts codependence and isolation for those who fall prey to the chatbot’s charms. According to the report, “... users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships.”
I may not be able to predict the future of human-machine intimacy, but I know a toxic relationship when I see one, and this is that.
MORE STUFF WORTH READING
Empathy machines — The New Breed by Kate Darling
MIT Media Lab researcher Kate Darling hosted a workshop at a 2012 tech conference in Geneva, Switzerland, in which she and one of the event’s organizers, Hannes Gassert, used robotic toy dinosaurs to test attendees’ empathy for machines. Pleo, designed by the same guy who created Furby, was packed with sensors, mics, and cameras that allowed it to respond to sound and touch. Pleo could also feel pain – or so it seemed.
After getting participants acquainted with their pet dinosaurs, Darling and Gassert revealed a number of weapons and the true purpose of the workshop: to “torture and kill” their new companions. All but one of the 30 participants refused to “hurt” their pets, and the hatchet descended only after the organizers threatened to destroy all of the Pleos if no one complied. When asked to strike her robot, one woman removed the toy’s batteries to “spare it the pain.”
Some like it bot! Here’s how people are actually using AI – MIT Technology Review
A review of one million ChatGPT logs found that sexual role-play is the second most popular use case for the bot. OpenAI and other leaders have attempted to limit or outright prohibit explicit interactions with their tools, but, as history has shown, if it exists, people will find a way to f*ck it.
Be careful what you wish for: A California Bill to Regulate A.I. Causes Alarm in Silicon Valley – NYT
In news that should surprise no one, it looks like Silicon Valley’s pleas to Congress for AI regulation were all for show. If passed, a recently introduced California bill would require more rigorous safety testing of AI models and give the state’s attorney general the right to sue makers whose tools cause serious harm. The tech industry is shook.