A law school dean urges “comprehensive fact checking” to prevent problems with artificial intelligence.
A self-described “expert” on “misinformation” and artificial intelligence apologized after the AI system he used inserted fake citations into a legal brief. A law professor was among the prominent higher education experts who spoke to The College Fix about what can be done to stop similar problems from happening in the future.
Stanford University Professor Jeff Hancock submitted an expert declaration in a case concerning Minnesota’s law against “deepfakes,” artificial intelligence-generated videos that appear realistic. Such videos can look convincing, making them useful to campaigns hoping to portray an opponent negatively.
Hancock, the author of a book about “misinformation,” played his own part in spreading false information when he submitted evidence that included “hallucinated quotes,” meaning citations the AI fabricated. In his first declaration, he described himself as an “expert” on “technology.”
But Hancock (pictured) later had to amend his filing to “acknowledge three citation errors” after opposing counsel flagged the mistakes. He made the errors by leaving placeholder citations in his draft, which the AI program he used then filled in with fabricated references.
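How that can happen is easy to demonstrate. The sketch below is a hypothetical illustration, not Hancock’s actual workflow: it assumes the OpenAI Python client, an invented two-sentence draft, and the gpt-4o model, and shows how “[cite]” placeholders sent to a language model come back filled with authoritative-looking references that may be entirely fabricated.

# Illustrative sketch only: shows how "[cite]" placeholders in a draft can
# come back filled with plausible-looking references that may be fabricated.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the environment; the draft text and model choice are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = (
    "Deepfakes measurably reduce trust in video evidence [cite]. "
    "Exposure to labeled synthetic media changes viewer attitudes [cite]."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are an academic writing assistant."},
        {"role": "user", "content": "Replace each [cite] with a full citation:\n" + draft},
    ],
)

# The returned citations look authoritative but are NOT verified; each one
# must be checked against a real source before the text is filed or published.
print(response.choices[0].message.content)

Nothing in the model’s reply distinguishes a real reference from an invented one, which is why each returned citation has to be checked by hand.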
The Fix emailed the Stanford professor multiple times in the past few days to request additional information on the situation, but he did not respond.
A law professor at the University of Kansas warned that AI models can stand by their fabricated quotes and insist they are real.
Associate Dean of Graduate and International Law Andrew Torrance told The Fix in a phone interview, “When you press [large language models] on it, sometimes they will double down and defend that this thing indeed is true.”
“So, you have to do comprehensive fact checking. Every word an AI generates should be checked thoroughly,” Professor Torrance told The Fix.
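One mechanical form such fact checking can take, offered here as an illustrative sketch rather than anything drawn from Torrance’s work, is confirming that each DOI in an AI-generated reference list actually resolves in CrossRef’s public index; the DOIs below are placeholders.

# Minimal sketch of mechanical citation checking: confirm that a DOI from
# an AI-generated reference exists in CrossRef's public index. A hit only
# proves the DOI is real; the quoted claim still needs human verification.
# Assumes the requests library; the example DOIs are placeholders.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef knows this DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    for doi in ["10.1000/placeholder-real", "10.1000/placeholder-from-llm"]:
        status = "found" if doi_exists(doi) else "NOT FOUND - possible hallucination"
        print(f"{doi}: {status}")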
He and several other professors made the point that writers should be clear about how AI was used in a 2023 paper, which he shared with The Fix.
In a paper titled “ChatGPT and Works Scholarly,” Torrance and University of California Irvine Professors Bill Tomlinson and Rebecca Black wrote, “Clearly disclose the use of AI-assisted writing tools in your work.”
The group also said writers should disclose what “tools and techniques” they used in their research.
The professors also urged writers to be transparent about the limitations of AI-assisted writing tools, including describing any potential biases or errors that might appear in the text the tools produce.
Use ChatGPT as a ‘sounding board,’ but not for much more, expert says
A higher education group spokesman said the use of AI should be minimized.
“If you don’t have someone else to bounce ideas off of, it can be a valuable tool,” Chance Layton, communications director for the National Association of Scholars, told The Fix via email.
When writing scholarly documents, Layton said, AI should “only be used as a sounding board.” He said the Stanford professor’s use of AI contributes to “the lack of trust in expertise,” which Layton called a “big problem.”
Some people are concerned about cheating because of the rise of artificial intelligence. The Fix spoke with a student in 2022 who had used ChatGPT on two final exams and received As on both.
“I used it for my multiple choice finals, two of them, and got a 95 on one of them and the other one, a 100,” he told The Fix. “A quarter of the kids in my class used it,” the student said.
In some cases, ChatGPT has made up stories about law professors being accused of sexual harassment.
Legal scholar Eugene Volokh discovered in 2023 that ChatGPT would fabricate false information about George Washington University Professor Jonathan Turley, and even cite a non-existent Washington Post article, when asked to write about sexual harassment.
The Fix conducted a test comparable to Volokh’s, and it also found five accusations of sexual harassment against professors, none of which were real.
All cited The New York Times, The Washington Post, or other outlets. But ChatGPT declined to give examples when prompted again afterward.
MORE: ChatGPT is politically biased to the left
IMAGE: Psych of Tech Institute/YouTube
Follow The College Fix on Twitter and Like us on Facebook.