The Department of Defense funded a “large scale social deception” program, according to government spending records. The Federalist has uncovered documents showing how the federal government built a network of phony social media accounts capable of stifling Americans’ online free speech and privacy and, potentially, waging psychological warfare.
The DOD awarded more than $9.1 million to Thomson Reuters Special Services (TRSS) for “ACTIVE SOCIAL ENGINEERING DEFENSE… LARGE SCALE SOCIAL DECEPTION” starting in 2018, according to government funding disclosures. Of the total amount promised, the federal government apparently paid out, or “outlayed,” more than $268,000 for the project.
TRSS is a subsidiary of Thomson Reuters, which also owns the Reuters news agency. According to the company’s website, it offers “scalable solutions to governments and international institutions,” and its leadership “leverages real-world experience in the US Intelligence Community, Department of Defense, law enforcement and the private sector.” The Air Force awarded the contract, and the DOD’s Defense Advanced Research Projects Agency (DARPA), the military’s secretive research arm, funded the work in “large scale social deception.”
A DARPA funding document shows the Air Force’s 711th Human Performance Wing, Human Effectiveness Division, obligated funding for this project in 2018, initially promising just $1 million before increasing the obligated amount to more than $9.1 million. According to its website, the division is “composed of a diverse group of scientists and engineers studying emerging technologies specific to the human element of warfighting.”
Many other contracts were part of the same project, dedicated to building a network of fraudulent online accounts, which the government would purportedly use to defend against “social engineering” attacks. Although it may sound harmless, the system’s mechanics would allow it to be used against Americans.
Funding ‘Social Engineering Defense’
The $9.1 million contract was for “Active Social Engineering Defense,” a DARPA program that is “now complete,” according to its webpage. It aimed to “develop the core technology to enable the ability to automatically identify, disarm, and investigate social engineering attacks.” It claimed to be primarily focused on scam and phishing attempts.
The contract’s “program manager” was DARPA scientist Joshua Elliott, the documents show. He was a DARPA program manager from 2017 to 2023, according to his LinkedIn. Before that, he studied topics like “socio-technical change” and worked in academia for 10 years on subjects like “computational climate economics.” At DARPA, he was able to “program” $600 million in federal research and development funding, according to the Federation of American Scientists. Afterward, Elliott worked for the radical Quadrature Climate Foundation and, more recently, Renaissance Philanthropy, which was started by a former staffer for Presidents Bill Clinton and Barack Obama.
Also in 2018, as part of the “Active Social Engineering Defense” program, the government promised funding for other contracts, also under Elliott, including $2.5 million for HRL Laboratories, $8.5 million for SRI International, $7.1 million for the University of Southern California, $4.2 million for Raytheon BBN, $507.9 million for the MITRE Corporation, and $2.4 million for the Canadian Commercial Corporation (the nation’s government contracting agency).
In 2019, according to the DARPA document, the government promised funding for “active social engineering defense” from additional groups, including nearly $1 million for Carnegie Mellon University, $9.5 million for Northrop Grumman, more than $774,000 for Purdue University, nearly $1.9 million for the State University of New York Research Foundation, and $1.3 million for the infamous University of California, Berkeley.
Although DARPA Program Manager Walter Weiss took over in fiscal year 2019, Elliott was the program manager in the 2018 fiscal year, according to government records. According to a 2021 Georgetown study, the “active social engineering defense” program would “build systems that can detect social engineering attacks and autonomously respond to trick adversaries into revealing identifying information.”
Many of these groups played integral roles in the “active social engineering defense” project, which built a massive network of fake, government-controlled online accounts to ensnare scammers, sometimes collecting information for further investigation. However, nothing appears to stop these networks from being turned against Americans’ privacy and speech.
Networks Of Phony Online Accounts
HRL created “defense systems against social engineering attackers” as part of DARPA’s “Active Social Engineering Defense” program. In a webpage from 2018, the group boasted its system (“CHESS”) would “exploit attackers’ methods by drawing them in with automated responses” to capture their data, operating “across various media, including email, social media, and text messages.”
The website states that “CHESS seeks to activate virtual bots that act on behalf of victims and manage communications with the attacker across all media.” Its system would “gather as much personal information about an attacker as possible, including identifying individual bad actors and any organizations that might be behind them.”
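To make the mechanics concrete, here is a minimal, entirely invented sketch of how an automated scam-baiting responder of this kind might work. The detection patterns, function names, and canned replies below are illustrative assumptions, not details of the actual CHESS system:

```python
import re

# Hypothetical scam indicators a bait bot might scan for (invented examples).
SCAM_PATTERNS = {
    "payment_request": re.compile(r"\b(wire|gift card|bitcoin|western union)\b", re.I),
    "urgency": re.compile(r"\b(urgent|immediately|24 hours|act now)\b", re.I),
}

def classify_message(text: str) -> list[str]:
    """Return the scam indicators found in a message."""
    return [name for name, pat in SCAM_PATTERNS.items() if pat.search(text)]

def bait_reply(indicators: list[str]) -> str:
    """Pick an automated reply designed to keep the attacker talking
    and elicit identifying information."""
    if "payment_request" in indicators:
        return "I can pay, but which account should I send it to?"
    if "urgency" in indicators:
        return "I'm traveling. Can you call me? What's your number?"
    return "Sorry, can you explain that again?"

msg = "Act now! Wire $500 via Western Union within 24 hours."
found = classify_message(msg)
print(found)              # both indicators detected
print(bait_reply(found))  # reply that asks for the attacker's account
```

The point of such a design is that the bot, not a human victim, absorbs the attacker’s attention while each reply draws out another identifying detail.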
Although this may seem benign, it is a shocking admission: the government sponsored a sizable network of fictitious social media accounts that it could use to collect user data.
The Federalist’s CEO, Sean Davis, wrote on X that “this DARPA contract involved the development and deployment of technology to create and manage fake social media accounts at scale.” This is far more treacherous than a straightforward government payment to a news agency.
As part of the program, SRI developed a similar system with a more ominous moniker: “Project NEMESIS.” The system was “capable of integrating multiple dialog generation strategies for… integration into a live defensive service,” according to the Defense Technical Information Center. The group was “directed by DARPA” to develop NEMESIS, which proved “comparable to what can be achieved by human testers, but scalable to much larger populations.”
NEMESIS, to which data firm Jataware contributed, “integrates all elements of our detection, dialog engagement, and attribution services,” according to an Air Force Research Laboratory document. One document states: “Our team has demonstrated the creation and management of a multi-virtual-persona social media interaction system, which provides a strong foundation for our understanding of how to construct the key components of Nemesis’ virtual persona management. Our dialog management system was integrated with a number of services that managed fake social media accounts.”
Again, while baiting foreign adversaries or scammers to get their information may sound appealing, the system appears to let the government target Americans using its phony online network. NEMESIS engages “adversaries” who do things like share so-called “disinformation,” or speech not approved by the regime.
In addition to this program, the University of Southern California developed a system called PIRANHA, according to a document from the Air Force Research Laboratory. “The PIRANHA team focused on methods to augment neural dialogue approaches worked on by other teams on the [A]ctive [S]ocial [E]ngineering [D]efense program.”
Using “clues from the content of a message and… information obtained from search engines and social media,” PIRANHA “gathers information and conducts external vetting” to identify targets deemed threatening. It explores any URLs linked in a message, “examine[s] the style of a message,” and “promotes agenda pushing in automated responses” to help with “gathering more information to feed back into external vetting.”
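A rough, purely illustrative sketch of what “external vetting” of a message could look like in practice. The blocklist, style heuristics, and scoring threshold here are all invented for demonstration and are not taken from PIRANHA:

```python
import re
from urllib.parse import urlparse

# Invented blocklist standing in for "information obtained from search
# engines and social media" about known-bad infrastructure.
KNOWN_BAD_DOMAINS = {"phish.example", "free-prizes.example"}

URL_RE = re.compile(r"https?://\S+")

def vet_message(text: str) -> dict:
    """Score a message using the URLs it links and simple style cues."""
    urls = URL_RE.findall(text)
    bad_urls = [u for u in urls if urlparse(u).hostname in KNOWN_BAD_DOMAINS]
    style_flags = []
    if text.count("!") >= 3:
        style_flags.append("excessive_exclamation")
    if re.search(r"\b[A-Z]{4,}\b", text):
        style_flags.append("shouting")
    # Bad URLs weigh more heavily than style cues in this toy scoring.
    score = 2 * len(bad_urls) + len(style_flags)
    return {"urls": urls, "bad_urls": bad_urls,
            "style_flags": style_flags, "suspicious": score >= 2}

result = vet_message("CLICK NOW!!! http://phish.example/win")
print(result["suspicious"])  # True: bad URL plus style flags
```

The key design idea attributed to the system is the feedback loop: whatever the automated replies elicit gets fed back into this kind of vetting to sharpen the score.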
Raytheon created another similar system, called SIENNA, which deliberately deployed a network of phony online accounts to gather user information.
According to an online project description, SIENNA involved “the creation and deployment of a bot framework driven by conversational technology” that Raytheon personnel originally created for video games. When a conversation is deemed hostile, the system “will deploy a set of bots to engage and investigate the attackers.”
“Each bot has a role, goals, and speaking style (its persona)… to exploit what it knows so far about the nature and goals of each attacker,” the project description reads. “The bots’ true purpose is to engage, build trust, provide fake information, and most importantly to elicit information from the attacker and waste their time and resources.”
Raytheon created two technologies, according to an Air Force Research Laboratory document: “SIENNA-Bot,” a “chatbot designed to converse with an interlocutor,” and “Cervantes,” which engages in “domain-specific dialogue development” and uses “quests, i.e., series of questions of increasing complexity intended to elicit information from the interlocutor.”

Image Credit: The pipeline from a user deemed threatening, to the SIENNA chatbot, to information gathering. Screenshot | Air Force Research Laboratory
According to the DARPA funding document, the government also worked with MITRE on “active social engineering defense,” though the Army Communications-Electronics Command, rather than the Air Force, handled this contract. It, too, fell under DARPA Program Manager Joshua Elliott.
The Naval Postgraduate School published a slide show titled “Social Engineering Impacts On Government Acquisition” in May 2023. It said a “social engineering attack” on a contractor could cause an “adverse effect on future government acquisitions,” and recommended “[u]tilization of AI [artificial intelligence] and ML [machine learning] tools.”
In October 2022, MITRE published a paper on the subject. It recommended strategies like “AI and ML” and “partnership with the government and private industry technology.”
“Leveraging automated, AI- and ML-enabled threat detection, reporting, and mitigation… can take the form of funneling attackers to a hollow Potemkin network, a ‘vulnerable and publicly accessible’ chatbot posing as an acquisition officer,” the paper reads.
The paper’s authors presented it at the May 2023 Acquisition Research Symposium of the Naval Postgraduate School.
MITRE “worked closely” with the Cogsec Collaborative, “which built and connected groups responding to perceived disinformation at no charge,” according to InfluenceWatch. Meanwhile, MITRE allegedly “developed a parallel framework… which employed similar techniques and tactics.”
Public grant disclosures and the DARPA funding document show another contract with the same ID number as the $2.4 million Canadian Commercial Corporation contract, but with a different recipient.
The other contract, also numbered FA865018C7889, went to Uncharted Software, a Toronto, Canada company, and ran from September 2018 to October 2018. It, too, was part of the “Active Social Engineering Defense” project. Uncharted worked with “proven Defense Advanced Research Projects Agency (DARPA) collaborators,” and the project was overseen by the Air Force’s 711th Human Performance Wing.
Uncharted created a system called “ReCourse,” which would “coordinate, monitor and selectively moderate automated, conversational, enterprise-scale bots for defense against social engineering,” according to the contract. The company designed a “human-in-the-loop” system, and ReCourse would “shape bot tactics at the global enterprise level.”
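The “human-in-the-loop” concept can be illustrated with a toy routing gate: the bot acts on its own only when its confidence is clear-cut, and queues borderline conversations for a human operator. The thresholds, names, and queue below are assumptions for illustration, not details of ReCourse:

```python
# Conversations a human operator must review (invented structure).
review_queue: list[tuple[str, float]] = []

def route(conversation_id: str, malice_score: float) -> str:
    """Decide whether an automated bot may engage a conversation on its own,
    given a 0.0-1.0 score of how malicious the account appears."""
    if malice_score >= 0.9:
        return "auto_engage"   # clearly malicious: bot engages unaided
    if malice_score <= 0.1:
        return "ignore"        # clearly benign: take no action
    # Uncertain middle ground: escalate to a human operator, who can
    # "selectively moderate" what the bot does next.
    review_queue.append((conversation_id, malice_score))
    return "human_review"

print(route("conv-1", 0.95))  # auto_engage
print(route("conv-2", 0.05))  # ignore
print(route("conv-3", 0.50))  # human_review
print(review_queue)           # only the borderline case is queued
```

The trade-off this structure expresses is scale versus oversight: the wider the “uncertain” band, the more human moderation, and the narrower it is, the more the bots act autonomously.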
Uncharted even demonstrated how to build fake online profiles, walking through the creation of an example persona called “Gabby.”

Image Credit: Uncharted’s bot account “Gabby.” Screenshot | Air Force Research Laboratory

Image Credit: The steps involved in creating a network of bot accounts. Screenshot | Air Force Research Laboratory

The various social engineering systems, including NEMESIS and the University of Southern California’s aforementioned PIRANHA, were subjected to a series of accuracy tests. The “Friend/Foe” test, shown in the Air Force records, was particularly concerning.

Image Credit: How the different systems performed in identifying “friend” or “foe” online. Screenshot | Air Force Research Laboratory
ReCourse, which had a 35 percent accuracy rate for classifying friendly or malicious accounts, also had a 6 percent “false alarm” rate. The lowest accuracy belonged to NEMESIS, at zero percent, with a 19 percent “false alarm” rate. One system even produced a 49 percent “false alarm” rate.
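For readers unfamiliar with these metrics, here is a short sketch of what accuracy and false-alarm rates mean for a friend/foe classifier. The confusion-matrix numbers below are invented for illustration and are not the Air Force’s test data:

```python
def rates(true_pos: int, false_pos: int, true_neg: int, false_neg: int):
    """Compute overall accuracy and false-alarm rate from a confusion matrix.

    true_pos  = foes correctly flagged    false_pos = friends wrongly flagged
    true_neg  = friends correctly cleared false_neg = foes missed
    """
    total = true_pos + false_pos + true_neg + false_neg
    accuracy = (true_pos + true_neg) / total
    # False-alarm rate: the share of genuinely friendly accounts
    # that the system nonetheless flagged as malicious.
    false_alarm = false_pos / (false_pos + true_neg)
    return accuracy, false_alarm

# Invented example: 30 foes caught, 10 friends wrongly flagged,
# 60 friends cleared, 0 foes missed.
acc, fa = rates(30, 10, 60, 0)
print(round(acc, 2), round(fa, 2))  # 0.9 0.14
```

The two numbers are independent, which is why a system can score low on accuracy while also generating many false alarms: it is both missing foes and flagging friends.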
So even giving these systems the benefit of the doubt, assuming they only ever engaged scammers and malicious foreign actors, they might not even function as intended. Regardless, the government has spent millions creating a phony, regime-controlled online network capable of accessing Americans’ information, stifling their speech, and engaging in psychological warfare.
Logan Washburn is a staff writer covering election integrity. He is a spring 2025 fellow at The College Fix. He graduated from Hillsdale College, served as Christopher Rufo’s editorial assistant, and has bylines in The Wall Street Journal, The Tennessean, and The Daily Caller. Logan grew up in rural Michigan but is originally from Central Oregon.