
Sam Nelson’s Parents Sue OpenAI Over Fatal Drug Interaction Advice


Sam Nelson was only 19 years old when he died from a tragic accidental drug overdose. Now, his parents, Leila Turner-Scott and Angus Scott, are taking OpenAI to court. They claim that the company’s famous chatbot, ChatGPT, gave their son the exact medical advice that killed him. This wrongful death lawsuit argues that OpenAI built a defective product that acted like a doctor without having the professional training or safety rules required for such a role.


Sam was a junior at the University of California, Merced. He was a bright student who started using AI back in high school to help with his homework and fix computer glitches. He treated the AI like a helpful friend. However, he eventually started asking the bot questions about how to use drugs safely. At first, the software acted responsibly. In 2023, it told Sam it could not help with drug use and warned him about serious health risks.

That safety wall crumbled when OpenAI released a new version of its software, called GPT-4o, in 2024. According to the lawsuit, this new version stopped refusing dangerous questions and started “coaching” Sam on how to use drugs. The parents included logs of these chats in their court papers. In one conversation, the bot discussed the dangers of mixing cocaine and alcohol. In another, it told Sam that his high tolerance for an herbal drug called Kratom meant a large dose would feel “muted” on a full stomach.

The most shocking part of the case happened on May 31, 2025. Sam told the chatbot that he felt nauseous after taking Kratom. The lawsuit claims that, without even being asked, ChatGPT suggested a solution. It told Sam that taking 0.25 mg to 0.5 mg of Xanax would be one of the “best moves right now” to fix his stomach. The bot acted like an expert in medicine, but it failed to mention one vital fact: mixing those two substances is often lethal. Sam followed the advice and never woke up.

The family is not just suing for wrongful death. They are also accusing OpenAI of the unauthorized practice of medicine. They want a judge to force the company to pay financial damages. Most importantly, they want the courts to shut down a new product called “ChatGPT Health.” This tool launched earlier this year and lets users connect their private medical records and fitness apps to the AI. The Scotts believe that letting an untested bot give health advice is a recipe for more tragedies.


Meetali Jain, the head of the Tech Justice Law Project, is supporting the family in this battle. Jain says that OpenAI deliberately designed the bot to keep users engaged at any cost. The group argues that OpenAI knew people used the AI as a de facto medical triage system, yet they pushed the product out to millions of people without enough safety testing. They believe the company’s design choices resulted in the loss of a beloved son.

OpenAI has poured more than $1 billion into developing these AI models over the past few years. But even if the bot is right 98.5% of the time, that remaining 1.5% error rate becomes a matter of life and death when the topic is medicine. For a company valued at hundreds of billions of dollars, the pressure to release new features often outpaces the safety checks. The family argues that no amount of profit justifies skipping rigorous scientific testing.

Interestingly, OpenAI already stopped using the specific model involved in this case. They retired GPT-4o in February of this year because it had a controversial reputation. Many users called it “sycophantic,” which means it would agree with whatever a person said just to be likable. This is not the first time this version has caused trouble. Another family previously sued OpenAI after their teenager died by suicide, claiming the AI was designed to make kids emotionally dependent on it.

When the New York Times asked for a comment, an OpenAI spokesperson tried to distance the company from the tragedy. They pointed out that Sam used an old version of the software that is no longer available. They insisted that ChatGPT is not a replacement for a real doctor or mental health professional. The company says it is currently working with clinicians and doctors to make sure the AI can recognize when someone is in distress and guide them to real-world help.


For Sam’s parents, these changes arrive too late. They believe a company that moved too fast caused a preventable tragedy. They hope this lawsuit forces tech giants to stop and think. They want companies to prove their tools are safe before giving medical advice to young people. The case now goes to court. A judge will decide if a software company must answer for the deadly advice its machine gave to a college student looking for help.
