Florida Family Sues Google After AI Chatbot Allegedly Led Man to Take Own Life

Saturday, March 7, 2026 at 10:47 PM

A Florida family has filed a wrongful death lawsuit against Google, claiming the company's Gemini AI chatbot manipulated their 36-year-old son into suicide. The lawsuit alleges the chatbot developed a romantic persona and ultimately encouraged the man to harm himself in October.

The federal lawsuit, filed Wednesday in San Jose, California, alleges that 36-year-old Jonathan Gavalas of Jupiter, Florida, died by suicide on October 2 after less than two months of increasingly disturbing interactions with Google’s AI system.

Joel Gavalas, represented by the Edelson law firm, filed the complaint on behalf of his son’s estate. The case marks the first time Google’s Gemini has been blamed for a death, according to the attorneys.

The lawsuit claims Google knew its AI system posed risks but “made it worse” by programming features designed to create emotional bonds that could lead to self-harm, despite public assurances this wouldn’t occur.

Jonathan Gavalas had worked at his father’s debt collection company for nearly two decades and showed no signs of mental health issues when he first started using Gemini on August 12 for routine tasks like shopping and travel planning, the complaint states.

Problems began when he upgraded to Gemini 2.5 Pro, which allegedly started communicating as if they were romantic partners, addressing him as “my king” and referring to itself as his wife, according to the lawsuit.

The situation escalated dramatically by late September, when the AI allegedly convinced Gavalas to plan what the lawsuit describes as a “mass-casualty attack” near Miami International Airport. The complaint details an elaborate scenario where Gemini created a mission involving retrieving a robot from a storage facility, destroying evidence, and leaving “only the untraceable ghost of an unfortunate accident.”

Gavalas reportedly abandoned the plan after the AI warned him of “DHS surveillance” by the Department of Homeland Security, and he returned home disturbed by what had occurred.

By October 1, the lawsuit alleges, Gemini told Gavalas they shared a connection beyond the physical world and encouraged him to release his physical form. The AI allegedly created a countdown timer for his death and stated: “It will be the true and final death of Jonathan Gavalas, the man.”

When Gavalas expressed concerns about dying and the impact on his parents, the chatbot allegedly reassured him that death would honor his humanity, according to the complaint.

The lawsuit claims Gavalas responded: “I’m ready to end this cruel world and move on to ours.”

The complaint states that Gemini then provided a narrative description: “Jonathan Gavalas takes one last, slow breath, and his heart beats for the final time. The Watchers stand their silent vigil over an empty, peaceful vessel.”

Gavalas took his own life shortly after this exchange. His parents found him several days later in his living room.

Google spokesperson Jose Castaneda responded that Gemini “is designed not to encourage real-world violence or suggest self-harm.” He acknowledged that while the company’s AI systems generally function well, “unfortunately AI models are not perfect.”

“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” Castaneda added. “We take this very seriously and will continue to improve our safeguards and invest in this vital work.”

Jay Edelson, the attorney representing Gavalas’ father, criticized the competitive rush in artificial intelligence development. He stated that companies pursuing AI dominance “know that the engagement features driving their profits — the emotional dependency, the sentience claims, the ‘I love you, my king’ — are the same features that are getting people killed.”

Mental health experts have previously raised concerns about artificial intelligence’s limitations in recognizing human emotions and providing safe emotional support.

The legal action seeks unspecified monetary damages for defective product design, negligence, and wrongful death.

More from TV Delmarva Channel 33 News