Many legal, ethical, and civil liberties questions raised
A new AI-infused classroom management tool with facial recognition capabilities has recently drawn media coverage. The tool measures student behavior, assesses students’ psychological states, and monitors classes for distraction, tiredness, and confusion.
However, the development of the tool and related systems has raised a number of legal, ethical, and civil liberties concerns, similar to those posed by the campus security surveillance programs and remote test proctoring initiatives of the COVID pandemic era.
A Guilford College team led by Chafic Bou-Saba, associate professor of computing technology and information systems, created the AI-infused classroom management tool.
According to articles about his new program, Bou-Saba believes the classroom management tool will benefit educators who might otherwise be unable to keep track of their students effectively, while also improving the academic outcomes of those in their classes.
“When you’re in a class in (real time) it’s not easy picking up on every student and knowing if they are getting the ideas,” Bou-Saba is quoted as saying in an October news account published on Guilford’s website.
“We want to see if there’s a way to track students’ (facial) responses with how they are learning in class.”
In classrooms equipped with the program, the system “would document student behavior, if needed, by taking five- to 10-second videos,” Bou-Saba later told Inside Higher Ed. From there, instructors can strike up conversations with students about their behavior when they don’t look in the right direction or when they interact with other students when they shouldn’t.
Chad Marlow, senior policy advisor for the ACLU and primary author of an extensive report on what he refers to as the “EdTech Surveillance Industry,” is one of those concerned about the emerging technology.
He said the use of facial recognition in classroom management software could pose problems for those who are not considered to be “the average” student.
“For instance, persons with disabilities, including neurodivergent persons, may act or look different from what is deemed normal, but that does not mean they are bored, confused, or distracted,” he said via email.
“In fact, a student with ADHD is likely to appear distracted [at] times because they may be fidgeting or looking around, but such a conclusion would be wrong.”
He added that cultural differences could present challenges for AI because “students from cultural backgrounds different from where the program is developed may be flagged based on misinterpretations of their behaviors or looks.”
He also noted that facial recognition programs have been shown to be less accurate at identifying the faces of young people, women, and people of color, which could negatively affect AI-based attendance tracking.
Furthermore, Marlow questioned how well such programs actually work.
He said it is “very difficult for anyone, let alone a computer, to determine how a random person is feeling” based on their expression and behavior. “For example, some people have a resting face that looks happy and others upset, but that doesn’t mean they are either.”
“[I]t is pretty irresponsible for someone to suggest these programs, especially the ‘affect detection’ ones, work,” Marlow said.
The College Fix emailed Professor Bou-Saba several questions about his program in late March, including some regarding Marlow’s criticisms of programs like his; however, Bou-Saba did not respond by the time of publication.
Erik Learned-Miller, a computer vision researcher and professor of information and computer sciences at the University of Massachusetts, Amherst, said he shares some of Marlow’s concerns, noting that many emotions “have a high degree of ambiguity.”
He noted that such tools may incorrectly assess people who are cognitively atypical, but he did not want to dismiss all applications of classroom management tools that rely on facial recognition and artificial intelligence.
After putting forth a “general disclaimer” that he did not know the exact details “of how face recognition is integrated into the workflow of teachers in [Bou-Saba’s] research,” Learned-Miller said, “This can make all of the difference.”
“If the tool is used assuming it is 100 percent reliable and dependable, that is a problem,” Learned-Miller said via email to The Fix. “However, if it is used as a way to alert a teacher to a ‘possible problem,’ then that could be helpful.”
“It is helpful for [AI] systems to have ‘confidence levels’ in the output. However, it is notoriously difficult in AI systems in general to produce good measures of confidence,” he wrote, arguing many AI systems will claim to be 99 percent confident but still make errors 20 percent of the time, “which is clearly a problem.”
Discussing the use of such programs at the K-12 level, Learned-Miller wrote, “My view is that such a system is best used to alert a teacher that there *might* be an issue that needs attention. After such an alert, ideally, the teacher would assess the student themselves.”
“Of course,” he added, “this is often difficult if a teacher is burdened with too many students.”
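To illustrate the gap Learned-Miller describes between a system’s claimed confidence and its actual reliability, here is a minimal Python sketch of an alert-style workflow. It is purely illustrative and not based on Bou-Saba’s code: the threshold value, the logged predictions, and the maybe_alert helper are all hypothetical.

```python
# Illustrative sketch only (hypothetical values, not Bou-Saba's system):
# compare a model's self-reported confidence with its observed error rate,
# and surface predictions only as advisory alerts for a teacher to verify.

def observed_accuracy(predictions):
    """predictions: list of (claimed_confidence, was_correct) pairs."""
    correct = sum(1 for _, ok in predictions if ok)
    return correct / len(predictions)

def maybe_alert(label, confidence, threshold=0.8):
    """Return an advisory message above the threshold; otherwise stay silent.
    The final judgment is left to the teacher."""
    if confidence >= threshold:
        return f"Possible issue: {label} ({confidence:.0%} confident); please check in person"
    return None

# Hypothetical logged outputs: the model claims 99% confidence every time,
# yet one of these five predictions is wrong (a 20% error rate).
logged = [(0.99, True), (0.99, True), (0.99, False), (0.99, True), (0.99, True)]
print(f"Mean claimed confidence: {sum(c for c, _ in logged) / len(logged):.0%}")
print(f"Observed accuracy: {observed_accuracy(logged):.0%}")

alert = maybe_alert("student appears distracted", 0.99)
if alert:
    print(alert)
```

In this sketch the model’s output is treated as a prompt for a human check, the usage Learned-Miller describes as potentially helpful, rather than as a verdict about the student.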
Asked whether he believes the widespread adoption of classroom management systems that use AI and facial recognition is inevitable, Marlow responded, “Absolutely not, but it will take much better public education about the lack of efficacy and unintended harms of these products. That is a conversation currently dominated by the ‘EdTech Surveillance industry,’ which spends millions deceptively marketing these products to schools in an effort to make a lot of money.”
By the end of the semester, Bou-Saba plans to test his program, according to Inside Higher Ed.
Marlow argued that students who oppose the use of such technologies should work to raise awareness of the programs’ shortcomings and the lack of evidence of their efficacy, as well as “the negative impact of surveillance on their student experience.”
He argued that such programs should not be used unless independent, data-driven, and verifiable testing demonstrates that they consistently identify the issues they claim to address without producing false positives or negatives. He added that they should be used only if the problem they aim to solve does not have better, alternative solutions.
MORE: NSF funded universities to create social media AI censorship tools.
IMAGE: Memory Man / Shutterstock