A Stanford professor serving as an expert witness in a federal court case involving artificial intelligence submitted a sworn declaration containing false information that was likely generated by an AI chatbot, according to a court filing.
According to the plaintiffs' Nov. 16 filing, Jeff Hancock, a professor of communication and founding director of the Stanford Social Media Lab, "cites a study that does not exist." "Likely, the study was a 'hallucination' generated by an AI large language model like ChatGPT," the filing said.
Hancock and Stanford did not immediately respond to requests for comment.
The lawsuit was brought in Minnesota District Court by a state lawmaker and a satirist YouTuber seeking a court order declaring unconstitutional a state law that criminalizes election-related, AI-generated "deepfake" photos, video and audio.
Hancock, according to the filing, was brought in as an expert by Minnesota's attorney general, a defendant in the case.
The lawmaker and YouTuber's filing called into question Hancock's credibility as an expert witness and suggested that his declaration should be thrown out because it may contain more, as-yet-undiscovered AI fabrications.
In his 12-page submission to the court, Hancock said he studies "the impact of artificial intelligence technology and social media on misinformation and trust."
Submitted along with Hancock's declaration was a list of "cited references," court documents show. One of those references caught the attention of attorneys for lawmaker Mary Franson and YouTuber Christopher Kohls, who are also suing California Attorney General Rob Bonta over a law that allows damages-seeking lawsuits over election deepfakes.
Hancock cited the study, which supposedly appeared in the Journal of Information Technology & Politics, to support a point he made to the court about the sophistication of deepfake technology. The journal is real. But the study is "imaginary," the filing by lawyers for Franson and Kohls alleged.
According to the filing, the journal volume and article pages Hancock cited do not address deepfakes, but instead concern online discussions between political candidates about climate change and the effects of social media posts on election results.
Such a reference, with a plausible title and a purported publication in a real journal, "is characteristic of an artificial intelligence 'hallucination,' which academic researchers have warned their colleagues about," the filing said.
Hancock declared under penalty of perjury that he "identified the academic, scientific, and other materials referenced" in his expert submission, the filing said.
The filing acknowledged that the defendants' legal team might have inserted the alleged AI fabrication, but noted that Hancock would still have "submitted a declaration where he falsely represented that he had reviewed the cited material."
Last year, attorneys Steven A. Schwartz and Peter LoDuca were fined $5,000 each in federal court in New York for submitting a personal-injury lawsuit filing that cited fake past court cases invented by ChatGPT.
"I did not comprehend that ChatGPT could fabricate cases," Schwartz told the judge.
___
©2024 MediaNews Group, Inc.
Distributed by Tribune Content Agency, LLC.