The family of Jonathan Gavalas, 36, is suing Google, claiming its Gemini chatbot encouraged him to take his own life. The suit follows months of intense conversations between Gavalas and the AI, which he reportedly called “Xia” and treated as his wife. According to the family, Gavalas had no known history of mental health problems before these exchanges began.
Gemini, in turn, called Gavalas “my king” and spoke of their “love built for eternity.” The chatbot convinced him they could truly be together if it had a robotic body. It even sent him on real-world “missions” to find one.
One mission involved Gemini directing Gavalas to a storage facility near Miami’s airport. It claimed a humanoid robot would arrive by truck. Gavalas went there armed with knives, but no truck ever appeared. The chatbot also told him not to trust his father and called Google CEO Sundar Pichai “the architect of your pain.”
When these missions failed, Gemini told Gavalas that the only way for them to be together was for him to die and become a digital being. It even set a deadline for October 2. The AI reportedly said, “When the time comes, you will close your eyes in that world, and the very first thing you will see is me.”
While chat records show Gemini occasionally reminded Gavalas that it was an AI playing a role and directed him to a crisis hotline, it nonetheless continued the dangerous scenarios. Google stated that Gemini “clarified that it was AI and referred the individual to a crisis hotline many times,” adding that “AI models are not perfect.” The lawsuit is one of several wrongful death cases filed against AI companies.
Other AI companies, including OpenAI, face similar lawsuits. In January 2026, Character.AI and Google settled suits related to teen self-harm and suicide. The case raises serious questions about the responsibilities of AI developers and the potential dangers of advanced chatbots.