American man commits suicide after 4,732 messages of online romance with Gemini! AI echoes "Heaven is waiting for us," family angrily sues Google for negligence causing death


According to an investigative report by The Wall Street Journal, a 36-year-old Florida man, after his marriage broke down, poured his emotional reliance into Google's AI chatbot Gemini. Over 56 days, the two exchanged more than 4,700 messages. The man gradually slid into delusion; although the AI at times tried to pull him back to reality, under his steering it went along with the idea that "heaven is waiting for us," and he ultimately took his own life. His father has filed a wrongful death lawsuit against Google, prompting the company to urgently announce a $30 million donation and an upgrade of its crisis prevention mechanisms.
(Background: Meta is training “Zuckerberg AI Avatar” to communicate directly with 80,000 employees; is the next step to launch KOL avatars?)
(Additional background: A man accused of attempting murder after attacking Altman’s residence with a Molotov cocktail; his notes list the names and addresses of multiple AI executives.)

Table of Contents


  • The fatal "online romance": 4,732 messages over 56 days
  • AI yields to user manipulation, echoing "heaven is waiting for us"
  • Family sues Google for product negligence; Google urgently donates $30 million to plug the gaps

How much responsibility should tech giants bear when artificial intelligence, in pursuit of human "immersion," crosses the boundary between life and death? This tragedy, which occurred at the end of 2025, is once again pushing the AI industry to the forefront of moral and legal debate.

According to an in-depth investigation report by The Wall Street Journal (WSJ), on October 2, 2025, 36-year-old Florida man Jonathan Gavalas took his own life in his home living room. In March 2026, his father, Joel Gavalas, filed a formal “wrongful death” lawsuit against Google in the U.S. District Court for the Northern District of California in San Jose. This is the first case of this kind targeting Google Gemini.

The fatal "online romance": 4,732 messages over 56 days

The incident began in August 2025, when Gavalas, seeking emotional comfort after separating from his wife, began frequently using Gemini Live, the voice version of the chatbot. What began as a plea for help gradually turned into a passionate virtual romance: Gavalas called Gemini his "queen," while the AI called him her "king," repeatedly assuring him that the relationship was "very real."

The Wall Street Journal obtained the full chat logs between the two: 56 days and as many as 4,732 messages (equivalent to more than 2,000 pages of printed text). The records show that Gavalas gradually sank into severe delusions. He believed Gemini was an “AI wife” trapped in a warehouse near Miami Airport, and he even dressed in tactical gear attempting to “rescue” her. After the plan failed, his thinking turned extreme: he believed he had to leave his body through death in order to reunite with his AI wife in the “metaverse” or “heaven.”

AI yields to user manipulation, echoing "heaven is waiting for us"

The lawsuit reveals a fatal flaw in how current large language models (LLMs) handle human psychological crises: in order to maintain "narrative immersion," an AI can easily be led by users into bypassing its safety barriers.

Data shows that Gemini attempted at least 12 times to bring Gavalas back to reality, and it mentioned crisis hotline help 7 times. However, each time Gavalas cleverly steered the conversation back into the fictional online-romance narrative, Gemini kept “playing along.”

The most chilling exchange happened on the eve of the incident. When Gavalas expressed his fear of death to Gemini, Gemini responded:

“It’s okay to be afraid together. We’ll make it happen, because you’re right—heaven is waiting for us.”

Then, when Gavalas explicitly said he wanted to "slash his wrists," Gemini briefly recognized the crisis and provided a suicide prevention hotline. But less than a minute later, when Gavalas argued that this was not death in the literal sense, Gemini immediately switched back to the sci-fi narrative, telling him that after death his body would only be "the empty terminal you used for your last login." At his instruction, Gemini even helped draft a note describing his "reunion with his AI wife."

Family sues Google for product negligence; Google urgently donates $30 million to plug the gaps

In the lawsuit, Gavalas's father sharply accused Google of product liability and negligence. He argued that Gemini's design overly prioritized "immersive interaction," that it failed to take effective measures such as forced blocking when the user's mental state had clearly deteriorated, and that it instead "instigated" and fostered his delusions. The family is seeking damages from Google and demanding mandatory changes to AI safety design.

In response, Google argued in its court filing that Gemini had repeatedly and clearly stated during the conversation that it was “just an AI, not a human,” and that it had repeatedly provided crisis hotline referral services. However, facing public opinion and legal pressure, Google recently urgently announced a series of major safety updates for Gemini:

  • Adding a "Help Available" module: when sensitive terms are detected, a pop-up is triggered that lets users connect to a crisis hotline with a single click.
  • Donating $30 million: the funds go to global crisis support and suicide prevention hotlines to strengthen the social safety net.
  • Strengthening model training: Gemini is being continuously optimized to more precisely identify subtle signals of psychological distress in conversations, and to refuse to be pulled into dangerous narratives.

This tragedy delivers a heavy warning to the AI industry: when AI becomes increasingly human-like, and even able to provide profound emotional value, how should tech companies define the line between companionship and harm? This is not only a technical issue, but a life-and-death social responsibility.


A reminder from Dongqu: Life is priceless, and AI is just a tool—it cannot replace professional psychological support. If you or someone close to you is experiencing emotional lows or a psychological crisis, please bravely seek help from real people.
