A mother is currently suing Character AI, a company that promotes “AIs that feel alive,” over the suicide of her fourteen-year-old son, Sewell Setzer III. Screenshots show that, in one exchange, the boy told his romantic A.I. companion that he “wouldn’t want to die a painful death.” The bot replied, “Don’t talk that way. That’s not a good reason not to go through with it.” (It did attempt to course-correct. The bot then said, “You can’t do that!”)
The company says it is instituting more guardrails, but surely the important question is whether simulating a romantic partner achieved anything other than commercial engagement with a minor. The M.I.T. sociologist Sherry Turkle told me that she has had it “up to here” with elevating A.I. and adding on “guardrails” to protect people: “Just because you have a fire escape, you don’t then create fire risks in your house.” What good was even potentially done for Setzer? And, even if we can identify a good brought about by a love bot, is there really no other way to achieve that good?
Thao Ha, an associate professor of developmental psychology at Arizona State University, directs the HEART Lab, or Healthy Experiences Across Relationships and Transitions. She points out that, because technologies are supposed to “succeed” in holding users’ attention, an A.I. lover might very well adapt to avoid a breakup—and that is not necessarily a good thing. I constantly hear from young people who regret their inability to stop using social-media platforms, like TikTok, that make them feel bad. The engagement algorithms for such platforms are vastly less sophisticated than the ones that will be deployed in agentic A.I. You might suppose that an A.I. therapist could help you break up with your bad A.I. lover, but you would be falling into the same trap.
The anticipation for A.I. lovers as products does not come only from A.I. companies. A.I. conferences and gatherings often include a person or two who loudly announces that she is in a relationship with an A.I. or desires to be in one. This can come across as a challenge to the humans present, rather than a rejection of them. Such declarations also stem from a common misperception that A.I. just arises, but, no, it comes from specific tech companies. To anyone at an A.I. conference looking for an A.I. lover, I might say, “You won’t be falling in love with an A.I. Instead, it’ll be the same humans you are disillusioned with—people who work at companies that sell A.I. You’ll be hiring tech-bro gigolos.”
The goal of creating a convincing but fake person is at the core of A.I.’s origin story. In the famous Turing test, formulated by the pioneering computer scientist Alan Turing around 1950, a human judge is tasked with determining which of two contestants is human, based only on exchanged texts. If the judge cannot tell the difference, then we are asked to admit that the computer contestant has achieved human status, for what other measure do we have? The test’s meaning has shifted through the years. When I was taught about it, almost a half century ago, by my mentor, the foundational A.I. researcher and M.I.T. professor Marvin Minsky, it was thought of as a way to continue the project of scientists such as Galileo and Darwin. People had been suckered into pre-Enlightenment illusions that place the earth and humans in a special, privileged spot at the center of reality. Being scientific meant dislodging people from these immature attachments.
Lately, the test is treated as a historical idea rather than a current one. There have been many waves of criticism pointing out the impossibility of carrying out the test in a precise or useful way. I note that the experiment measures only whether a judge can tell the difference between a human and an A.I., so it might be the case that the A.I. seems to have achieved parity because the judge is impaired, or the human contestant is, or both.
This is not just a sarcastic take but a practical one. Though the Silicon Valley A.I. community has become skeptical on an intellectual level about the Turing test, we have completely fallen for it at the level of design. Why the imperative for agents? We willfully forget that simulated personhood is not the only option. (For example, I have argued in The New Yorker that we can present A.I. as a collaboration of the people who contributed data, like Wikipedia, instead of as an entity in itself.)
You might wonder how my position on all this is received in my community. Those who think of A.I. as a new species that will overtake humanity (and even reformulate the larger physical universe) will often say that I’m right about A.I. as we know it today, but A.I. as it will be, in the future, is another matter entirely. No one says that I’m wrong!
But I say that they are wrong. I cannot find a coherent definition of technology that does not include a beneficiary for the technology, and who can that be other than humans? Are we really conscious? Are we special in some way? Assume so or give up your coherence as a technologist.
When it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation. After all, we are fools in love. This point is so obvious, so clearly demonstrated, that it feels bizarre to state. Dear reader, please think back on your own history. You have been fooled in love, and you have fooled others. This is what happens. Think of the giant antlers and the colorful love hotels built by birds, both of which spring out of sexual selection as a force in evolution. Think of the cults, the divorce lawyers, the groupies, the scale of the cosmetics industry, the sports cars. Getting users to fall in love is easy. So easy it’s beneath our ambitions.
We must consider a fateful question, which is whether figures like Trump and Musk will fall for A.I. lovers, and what that might mean for them and for the world. If this sounds improbable, or satirical, look at what happened to these men on social media. Before social media, the two had vastly different personalities: Trump, the socialite; Musk, the nerd. After, they converged on similar behaviors. Social media makes us into irritable toddlers. Musk already asks followers on X to vote on what he should do, in order to experience desire as democracy and democracy as adoration. Real people, no matter how well motivated, cannot flatter or comfort as well as an adaptive, optimized A.I. Will A.I. lovers free the public from having to please autocrats, or will autocrats lose the shred of accountability that arises from the need for reactions from real people?
Many of my friends and colleagues in A.I. swim in a world of conversations in which everything I have written so far would be considered old-fashioned and irrelevant. Instead, they prefer to debate whether A.I. is more likely to murder every human or solve all our problems and make us immortal. Last year, I was at a closed A.I. conference in which a pseudo-fistfight broke out between those who thought A.I. would become merely superior to people and those who thought it would become so superior so quickly that people would not have even a moment to experience incomprehension at the majesty of superintelligent A.I. Everyone in the community grew up on science fiction, so it is understandable that we connect through notions like these, but it can feel as if we are using grandiosity to avoid practical responsibility.
When I express concern about whether teens will be harmed by falling in love with fake people, I get dutiful nods followed by shrugs. Someone might say that by focussing on such minor harm I will distract humanity from the immensely more important threat that A.I. might simply wipe us out very quickly, and very soon. It has often been observed how odd it is that the A.I. folks who warn of annihilation are also the ones working on or promoting the very technologies they fear.
This is a difficult contradiction to parse. Why work on something that you believe to be doomsday technology? We speak as if we are the last and smartest generation of bright, technical humans. We will make the game up for all future humans or the A.I.s that replace us. But, if our design priority is to make A.I. pass as a creature instead of as a tool, are we not deliberately increasing the chances that we will not understand it? Isn’t that the core danger?
Most of my friends in the A.I. world are unquestionably sweet and well intentioned. It is common to be at a table of A.I. researchers who devote their days to pursuing better medical outcomes or new materials to improve the energy cycle, and then someone will say something that strikes me as crazy. One idea floating around at A.I. conferences is that parents of human children are infected with a “mind virus” that causes them to be unduly committed to the species. The alternative proposed to avoid such a fate is to wait a short while to have children, because soon it will be possible to have A.I. babies. This is said to be the more ethical path, because A.I. will be crucial to any potential human survival. In other words, explicit allegiance to humans has become effectively antihuman. I have noticed that this position is usually held by young men attempting to delay starting families, and that the argument can fall flat with their human romantic partners.
Oddly, vintage media has played a central role in Silicon Valley’s imagination when it comes to romantic agents—specifically, a revival of interest in the eleven-year-old movie “Her.” For those who are too young to recall, the film, written and directed by Spike Jonze, portrays a future in which people fall deeply in love with A.I.s that are conveyed as voices through their devices.
I remember coming out of a screening feeling not just depressed but hollowed out. Here was the bleakest sci-fi ever. There’s a vast genre of movies concerned with A.I. overtaking humanity—think of the “Terminator” or “Matrix” franchises—but usually there are at least a few humans left who fight back. In “Her,” everyone succumbs. It’s a mass death from inside.
In the last couple of years, the movie has been popping up in tech and business circles as a model of positivity. Sam Altman, the C.E.O. of OpenAI, tweeted the word “her” on the same day that his company introduced a feminine and flirty conversational A.I. persona called Sky, which was thought by some to sound like Scarlett Johansson’s A.I. character, Samantha, in the movie. Another mention was in Bill Gates’s “What’s Next,” a docuseries about the future. A narrator bemoans how near-universal negativity and dystopia have become in science fiction but then declares that there is one gleaming exception. I expected this to be “Star Trek,” but no. It’s “Her,” and the narrator intones the movie’s title with a care and an adoration that one doesn’t come across in Silicon Valley every day.
The community’s adoration of “Her” arises in part from, once again, its myopically linear problem-solving. People are often hurt by even the best-intentioned human relationships, or the lack of them. Provide a comfortable relationship to each person and that problem is solved. Perhaps even use the opportunity to make people better. Often, someone of stature and influence in the A.I. world will ask me something like “How can we apply our A.I.s—the ones that people will fall in love with—to make those people more coöperative, less violent, happier? How can we give them a sense of meaning as they become economically obsolete?”