OpenAI's chatbot ChatGPT wrongly told a Norwegian dad he'd killed his own kids. Now he's fighting back.
Arve Holmen didn't expect much when he typed his name into the popular AI chatbot. What came back shocked him: a fabricated story claiming he had murdered two of his sons in 2020 and been sentenced to 21 years in prison.
None of it happened. Holmen has now filed a complaint with Datatilsynet, Norway's data protection authority, asking it to fine OpenAI, the American company behind ChatGPT.
Image: noyb
What bothers Holmen most is that people might actually believe the fabricated claims, especially since the chatbot got the rough age gap between his real, very much alive children correct.
This is another example of what experts call an AI "hallucination": completely invented information presented as fact.
European digital rights group Noyb filed the complaint on Holmen's behalf. It argues the false claims violate the GDPR's requirement that personal data be accurate, and its filing stresses that Holmen has never been accused of any crime and is a law-abiding citizen.
ChatGPT displays a small disclaimer warning that it can make mistakes, but Noyb says this does nothing to undo the harm such fabrications cause.
The problem extends beyond ChatGPT. Apple recently suspended its AI news summary feature in the UK after it generated false headlines, and Google's AI Overviews, built on its Gemini models, once suggested sticking cheese to pizza with glue and claimed geologists recommend humans eat one rock per day.
Why do these systems invent information? Even the experts aren't sure. Simone Stumpf of the University of Glasgow says understanding how hallucinations arise remains an active area of research, and even the developers who build these systems often cannot explain a specific output.
OpenAI has updated ChatGPT since Holmen's August search; the chatbot now searches current news articles when answering questions about people.
Noyb told media outlets that Holmen ran several searches that day, including one for his brother's name, and each produced a different but equally false story.
The rights group describes large language models as "black boxes" and says OpenAI does not respond to data access requests, making it impossible to learn what information about a person the system actually holds.
As AI tools become more widespread, cases like Holmen's highlight growing concerns about reputation damage when computers confidently present fiction as fact.