A Stanford professor serving as an expert in a federal court case about artificial intelligence-generated fakery filed a sworn declaration containing false information that was probably fabricated by an AI chatbot, according to a court filing.
According to the Nov. 16 filing by the plaintiffs in the case, Jeff Hancock, a professor of communication and founding director of the Stanford Social Media Lab, submitted a declaration citing a study that could not be found. The citation was likely a hallucination produced by ChatGPT or another AI large language model.
Stanford and Hancock did not immediately respond to requests for comment.
A state legislator and a satirical YouTuber filed the lawsuit in federal court in Minnesota, seeking a court order declaring unlawful a state statute that criminalizes election-related deepfake images, videos, and audio generated with artificial intelligence (AI).
According to Saturday's court filing, Minnesota's attorney general, a defendant in the case, retained Hancock as an expert.
The legislator and YouTuber's filing challenged Hancock's credibility as an expert witness and argued that his declaration should be thrown out because it might contain other, undiscovered AI fabrications.
Hancock stated in his 12-page court filing that he researches how social media and artificial intelligence affect trust and disinformation.
Hancock's list of cited sources was submitted with his declaration, according to court documents. Lawyers for state representative Mary Franson and YouTuber Christopher Kohls, who is separately suing California Attorney General Rob Bonta over a law allowing damages lawsuits over election deepfakes, zeroed in on one of those references: a study attributed to authors Huang, Zhang, and Wang.
In his statement to the court about the sophistication of deepfake technology, Hancock cited a study said to have appeared in the Journal of Information Technology &amp; Politics. The journal is real, but attorneys for Franson and Kohls claimed in their filing that the study is not.
According to the filing, the journal volume and pages Hancock cited discuss online debates by presidential candidates about climate change and the influence of social media posts on election outcomes, not deepfakes.
The filing said academic experts have warned their colleagues about exactly this kind of AI hallucination: a citation with a believable title, purportedly published in a legitimate journal.
According to the document, Hancock declared under penalty of perjury that he had identified the scholarly, scientific, and other sources cited in his expert statement.
The filing raised the possibility that the apparent AI fabrication was inserted by the defendant's legal team, but added that Hancock would still have submitted a declaration in which he falsely represented that he had reviewed the cited material.
Last year, lawyers Steven A. Schwartz and Peter LoDuca were each fined $5,000 in federal court in New York for submitting a filing in a personal-injury lawsuit that cited past court cases invented by ChatGPT to back up their arguments.
Schwartz told the judge, “I didn’t understand that ChatGPT could fabricate cases.”