ChatGPT falsely claims Norwegian man killed his sons; he takes legal action against Sam Altman’s OpenAI

AI-generated misinformation leads to legal complaint

A Norwegian man, Arve Hjalmar Holmen, has filed a complaint after ChatGPT falsely claimed that he had killed his two sons and been sentenced to 21 years in prison. According to a report by the BBC, Holmen lodged the complaint with the Norwegian Data Protection Authority, asking it to fine OpenAI, the chatbot's developer, for spreading false information.

This incident is the latest example of AI “hallucinations,” where artificial intelligence fabricates details and presents them as facts. Holmen says the incorrect information has caused significant distress.

“People think there’s no smoke without fire”

Holmen became aware of the false claims after asking ChatGPT, “Who is Arve Hjalmar Holmen?” The chatbot generated a completely fabricated response, stating:

“Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”

Although the chatbot correctly identified the age gap between his sons, Holmen insists the rest of the claims are entirely false.

“Some think that there is no smoke without fire—the fact that someone could read this output and believe it is true is what scares me the most,” he told the BBC.

OpenAI responds

OpenAI has acknowledged the complaint, stating that the misinformation was produced by an older version of ChatGPT and that improvements have since been made.

“We continue to research new ways to improve the accuracy of our models and reduce hallucinations,” OpenAI said. “While we’re still reviewing this complaint, it relates to a version of ChatGPT that has since been enhanced with online search capabilities that improve accuracy.”

Digital rights group demands action

The digital rights organization Noyb, which is supporting Holmen’s complaint, argues that ChatGPT’s response is defamatory and violates European data protection laws concerning personal data accuracy.

Noyb’s complaint states: “Holmen has never been accused nor convicted of any crime and is a conscientious citizen.”

While ChatGPT includes a disclaimer—“ChatGPT can make mistakes. Check important info.”—Noyb believes this is insufficient.

“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” said Noyb lawyer Joakim Söderberg.

A growing concern

AI-generated misinformation has become a growing issue. Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it generated false headlines.

Google’s AI Gemini also faced backlash when it suggested using glue to stick cheese to pizza and claimed geologists recommend eating a rock every day.
