“Used by governments and Big Tech to influence public opinion by limiting some viewpoints or promoting others”: report
House members allege that the National Science Foundation is using taxpayer funds to pay universities to develop AI tools that can be used to censor Americans on various social media platforms.
The House Judiciary Committee and the Select Subcommittee on the Weaponization of the Federal Government recently released a report naming the institutions involved.
It describes the foundation’s “funding of AI-powered censorship and propaganda tools, and its repeated efforts to hide its actions and avoid political and media scrutiny.”
The NSF has been awarding multi-million-dollar grants to university and non-profit research teams to create AI-powered tools that can be used by governments and Big Tech to influence public opinion by limiting some viewpoints or promoting others, according to the report released last month.
The funding began with the NSF’s Convergence Accelerator grant program, launched in 2019 to develop interdisciplinary solutions to pressing issues of national and societal importance, such as those involving AI and quantum technology, according to the report. In 2021, however, the NSF introduced “Track F: Trust & Authenticity in Communication Systems.”
The NSF’s 2021 Convergence Accelerator program solicitation stated that the goal of Track F projects was to “create prototypes of novel research platforms forming integrated collection(s) of tools, techniques, and educational materials and programs to promote increased citizen trust in public information of all kinds (health, climate, news, etc.), through more effectively preventing, mitigating, and adapting to critical challenges in our communications systems.”
Specifically, the grant call singled out the dangers posed by hackers and misinformation.
That September, the select subcommittee report notes, the NSF awarded “twelve Track F teams $750,000 each (a total of $9 million) to develop and refine their project concepts and build partnerships.” The following year, according to the report, the NSF selected six of the 12 teams to receive an additional $5 million each for their respective projects.
The select subcommittee highlights projects from the University of Michigan, the University of Wisconsin-Madison, MIT, and Meedan, a nonprofit that specializes in developing software to counter misinformation. Collectively, these four projects received $13 million from the NSF, it states.
The University of Michigan intended to use the money to create its WiseDex tool, which would employ AI technology to assess the veracity of social media content and guide large social media platforms in deciding what content should be removed or otherwise censored, according to the report.
The University of Wisconsin-Madison’s Course Correct, which was featured in an article from The College Fix last year, was “intended to aid reporters, public health organizations, election administration officials, and others to address so-called misinformation on topics such as U.S. elections and COVID-19 vaccine hesitancy.”
MIT’s Search Lit, as described in the select subcommittee’s report, was developed as an intervention to help educate groups of Americans the researchers believed were most vulnerable to misinformation, such as conservatives, minorities, rural Americans, older adults, and military families.
Meedan, according to its website, used its funding to develop “easy-to-use, mobile-friendly tools [that] will allow AAPI [Asian-American and Pacific Islander] community members to forward potentially harmful content to tiplines and discover relevant context explainers, fact-checks, media literacy materials, and other misinformation interventions.”
Once empowered with taxpayer dollars, the select subcommittee’s report states, the researchers “use the resources and prestige that the federal government has given them against any organizations that oppose their censorship projects.”
In some cases, according to the report, disinformation researchers will publish blog posts or formal papers to “generate a communications moment” (i.e., negative press coverage) for a platform in an effort to coerce it into complying with their demands.
The College Fix reached out to senior members of the three university research teams and a Meedan representative for comment.
Paul Resnick, who serves as the WiseDex project director at the University of Michigan, referred The College Fix to the WiseDex website.
“Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content,” the site states. “WiseDex uses crowd-sourced wisdom and AI to help flag posts more effectively than humans can. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation.”
A video on the website describes the tool as a means of helping social media platforms flag posts that violate platform policies, then post warnings on them or remove them. Posts characterizing the approved COVID-19 vaccines as potentially dangerous are used as an example.
The University of Wisconsin-Madison’s Michael Wagner also addressed The Fix, saying, “It’s interesting to be in a report that claims to be about censorship when our project censors exactly no one.”
However, some of the researchers associated with Track F and other projects privately acknowledged that efforts to combat misinformation were inherently political and a form of censorship, according to the select subcommittee report.
After Track F projects drew negative coverage portraying them as politically motivated and their products as government-funded censorship tools, the NSF began discussing media and outreach strategy with grant recipients, according to the report.
Notes from a pair of Track F media strategy planning sessions, included in Appendix B of the select subcommittee’s report, recommended that researchers, when interacting with the media, focus on the “pro-democracy” and “non-ideological” nature of their work, “give examples of both sides,” and “use sports metaphors.”
The select subcommittee report also reveals that there were discussions about creating a media blacklist, though at least one University of Michigan researcher objected, citing the potential optics.
MORE: Feds award professors $5.7 million to create tool to combat “misinformation”