New Delhi, Nov 8 (IANS) ChatGPT maker OpenAI is facing more lawsuits from families who claim that the AI company’s GPT-4o model was released prematurely, which allegedly contributed to suicides and psychological harm, according to reports.
US-based OpenAI released the GPT-4o model in May 2024, making it the default model for all users.
In August, OpenAI launched GPT-5 as the successor to GPT-4o, but “these lawsuits particularly concern the 4o model, which had known issues with being overly sycophantic or excessively agreeable, even when users expressed harmful intentions,” according to a report in TechCrunch.
The report said that while four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.
According to the report, the lawsuits also claim that OpenAI rushed safety testing to beat Google’s Gemini to market.
OpenAI has yet to comment on the report.
Recent legal filings allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions.
“OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly,” the report mentioned.
In a recent blog post, OpenAI said that it worked with more than 170 mental health experts to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of its desired behaviour by 65-80 per cent.
“We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate,” it noted.
“Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases,” OpenAI added.
—IANS