Stanford misinformation expert admits his chatbot use led to misinformation in sworn federal court filing

A Stanford University misinformation specialist has blamed an artificial intelligence chatbot after being called out in a Minnesota federal court case for submitting a sworn declaration that contained false material.

In an apologetic court declaration, professor Jeff Hancock said he never meant to mislead the court or opposing lawyers, but acknowledged that the bot had produced more errors than the ones the plaintiffs in the case had pointed out.

Hancock wrote, “I sincerely apologize for any confusion this may have caused.”

In a court filing last month, attorneys for a YouTuber and a Minnesota state lawmaker challenging a Minnesota statute said Hancock’s expert-witness declaration cited a nonexistent paper attributed to authors Huang, Zhang, and Wang. They asked the court to throw out the declaration, arguing that Hancock had likely used a chatbot to prepare the 12-page document and that it might contain further, undiscovered AI fabrications.

It did: according to his subsequent filing in the Minnesota federal court case, Hancock found two more AI hallucinations in his declaration after the attorneys called him out.

Minnesota’s attorney general had called the professor, who founded the Stanford Social Media Lab, as an expert defense witness in the lawsuit filed by the state legislator and the satirist YouTuber. The politician and the social media influencer are asking the court to declare unlawful a state statute that criminalizes election-related deepfake images, videos, and audio.

Generative AI has taken the world by storm since San Francisco-based OpenAI debuted its ChatGPT bot in November 2022, and Hancock’s legal predicament exemplifies one of the technology’s most prevalent problems: AI chatbots and image generators frequently produce errors known as hallucinations, which can range from false information in text to absurdities such as six-fingered hands in photos.


Hancock, who studies how AI affects misinformation and trust, explained in his apologetic court filing how he used OpenAI’s ChatGPT to prepare his expert submission, and how the errors crept in.

Hancock said that, in addition to the nonexistent paper attributed to Huang, Zhang, and Wang, his declaration listed four incorrect authors for another study and cited a nonexistent 2023 article by De Keersmaecker & Roets.

Seeking to establish his credibility, Hancock noted in the filing that he co-wrote the seminal article on AI-mediated communication and detailed his experience in the field. “I have written a lot about misinformation in particular, including its prevalence, psychological dynamics, and potential remedies and interventions,” he wrote.

Hancock stated in the filing that he used ChatGPT 4.0 to help him find and summarize articles for his submission, but that the errors most likely crept in later, while he was drafting the document. He had added the word “cite” to the text he gave the chatbot as a reminder to himself to back up his points with scholarly references.

“I think the hallucinated citations came from the response from GPT-4o, which was to generate a citation,” Hancock said, adding that he thought the chatbot also created the four false authors.

In their Nov. 16 petition, the YouTuber and the lawmaker noted that Hancock had attested under penalty of perjury that he had identified the scholarly, scientific, and other sources cited in his expert report.


Hancock’s credibility as an expert witness was also called into question in that filing.

In his apology to the court, Hancock said the three errors did not affect any of the scientific evidence or expert opinions he presented.

The judge in the case has scheduled a Dec. 17 hearing to decide whether to throw out Hancock’s expert declaration and whether the Minnesota attorney general’s office may submit a revised version.

Stanford, which can suspend students and require them to perform community service for using a chatbot to complete an assignment or exam without their instructor’s permission, did not immediately answer questions about whether Hancock would face disciplinary action. Hancock also did not immediately respond to similar inquiries.

It’s not the first time AI-generated fabrications have turned up in a court filing. Attorneys Steven A. Schwartz and Peter LoDuca were fined $5,000 apiece in federal court in New York last year for citing fictitious prior court cases created by ChatGPT in a personal injury lawsuit.

Schwartz told the judge, “I didn’t understand that ChatGPT could fabricate cases.”
