Former FBI operative James Keene, whose memoir was made into Apple TV+ series Black Bird, claims Google’s AI chatbot called him a killer

James Keene went undercover for the FBI to ensnare a suspected serial killer in the late ’90s, and now, in the age of artificial intelligence, he is accusing Google’s AI chatbot Gemini of calling him a murderer.

Keene’s memoir, “In with the Devil,” recounts how, while serving 10 years for a marijuana-related conspiracy, he worked to extract confessions from a fellow prisoner suspected in the disappearances of dozens of women and girls. The book was adapted into the 2022 Apple TV+ miniseries Black Bird, with Keene as executive producer.

Now, in a lawsuit filed last week, Keene is accusing Google of defamation. He claimed that in May of this year, the Mountain View company’s AI chatbot Gemini was reporting that he was serving a life sentence for multiple convictions and describing him as the murderer of three women. The false statements were accessible to tens of millions of internet users daily, said the lawsuit, filed Sept. 22 in federal court in Illinois.

Keene is seeking a minimum of $250 million in damages.

Google did not respond to questions about the lawsuit. On Friday, the judge in the case granted the company’s request for more time, until Oct. 29, to file its response in court.

Keene claimed that after he contacted Google, the company privately apologized for the erroneous information, blaming the AI platform. But Gemini continued to put out false information about him, including that he “was serving a life sentence without parole,” the federal court lawsuit claimed.

He contacted Google again about the “defamatory statements,” and again, Google apologized, the lawsuit said.

Despite the apologies, the false statements about Keene were accessible for at least two months, the lawsuit alleged.

“Google has shown that they have a reckless disregard for the truth or that they are publishing such statements with knowledge of their falsity and actual malice,” the lawsuit alleged.

The propensity of AI platforms to “hallucinate” and fabricate information has led to several public figures being painted in a falsely negative light. Marietje Schaake, a fellow at Stanford University’s Cyber Policy Center, was erroneously dubbed a terrorist in 2022 by an AI chatbot from Facebook parent Meta.

In 2023, a ChatGPT bot falsely asserted that George Washington University law professor Jonathan Turley had been accused of making sexualized comments to a student during a school-sponsored Alaska trip, despite Turley never having taken a trip to Alaska with any students, nor having been accused of such behavior.

Neither incident produced a lawsuit. In 2023, Brian Hood, a mayor in Australia, threatened to sue OpenAI, the San Francisco AI giant behind ChatGPT, after its chatbot falsely said he had been imprisoned for bribery. Hood did not file the lawsuit, reportedly because OpenAI ensured the false result would not recur.

In Keene’s case, legal precedent may not be on his side. In May, a Georgia judge sided with OpenAI in a lawsuit by a radio-show host. The company’s ChatGPT, responding to a series of queries by a journalist, erroneously said host Mark Walters had been accused in a lawsuit of embezzlement. However, Gwinnett County Judge Tracie Cason threw out Walters’ case, finding that OpenAI actively seeks to prevent false chatbot output, that its terms of use note the chance of erroneous results, and that the bot itself advised the journalist of limitations in its information sourcing.

However, unlike in Keene’s case with his purported complaints to Google, Cason noted that Walters “testified that he did not ask OpenAI to correct or retract the false claim that he was accused of embezzling funds.”