
Meta is moving the majority of its internal safety and privacy reviews to artificial intelligence, replacing a process that has historically relied heavily on human judgment.
Up to 90% of Meta's risk assessments are expected to be automated, according to internal records obtained by NPR. Previously, specialized teams assessed how changes might affect users' privacy, harm minors, or encourage the spread of misinformation. Responsibility for these evaluations will largely be transferred to AI systems under the new program.
Meta is the parent company of Threads, Instagram, WhatsApp, and Facebook.
AI to make product risk decisions
Under the new structure, product teams will fill out a questionnaire describing their update. An AI system will then deliver a decision immediately, identifying potential risks and setting requirements the project must meet before launch. Human oversight will only be required in a limited set of situations, such as when a project introduces novel risks or when a team specifically requests it. A slide from Meta's internal presentation describes this as a process in which teams can "receive an 'instant decision'" based on the AI's evaluation.
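To illustrate the kind of triage flow described above, here is a minimal, purely hypothetical sketch. None of the names, fields, or rules (`Questionnaire`, `triage`, the escalation conditions) come from Meta's actual system; they are invented to show the shape of "instant decision with human escalation."

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the triage flow described in the article.
# All names and rules here are invented for illustration, not Meta's system.

@dataclass
class Questionnaire:
    """What a product team might submit about an update."""
    description: str
    touches_minors: bool = False
    novel_risk: bool = False           # a risk type the system hasn't seen before
    team_requests_review: bool = False

@dataclass
class Decision:
    approved: bool
    requirements: list[str] = field(default_factory=list)
    needs_human_review: bool = False

def triage(q: Questionnaire) -> Decision:
    # Escalate to human reviewers only in the limited cases the article
    # describes: novel risks, or an explicit request from the team.
    if q.novel_risk or q.team_requests_review:
        return Decision(approved=False, needs_human_review=True)

    # Otherwise, return an "instant decision", attaching any requirements
    # the automated assessment identifies before launch.
    requirements = []
    if q.touches_minors:
        requirements.append("apply youth-safety defaults")
    return Decision(approved=True, requirements=requirements)

if __name__ == "__main__":
    update = Questionnaire(description="new sharing option", touches_minors=True)
    print(triage(update))
```

The point of the sketch is the routing logic: the default path is a fully automated approval with conditions attached, and human review becomes the exception rather than the rule.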
This change allows developers to release features much more quickly. However, experts, including former Meta executives, worry that speed will come at the expense of caution.
A former Meta executive, speaking to NPR on condition of anonymity, warned that if this process means more launches happen faster, with less rigorous scrutiny and opposition, "you're creating higher risks."
In a statement, Meta said the new process is meant to "streamline decision-making" and that "human expertise" will still be used for "novel and complex issues." Although the company said that only "low-risk decisions" are being automated, internal documents obtained by NPR show that more sensitive areas, such as AI safety, youth risk, and content integrity (including violent or false content), are also slated for automation.
Critics warn the change could backfire
Some current and former Meta employees caution that relying heavily on AI for risk evaluations may be premature. Another former Meta employee, who spoke to NPR anonymously, called the approach somewhat self-defeating: every time the company launches a new product there is intense scrutiny, and that scrutiny regularly uncovers issues the company should have taken more seriously.
Katie Harbath, former Facebook public policy director and current Anchor Change CEO, offered a more balanced perspective.
If you want to move quickly and maintain high quality, more AI will be needed, because humans can only do so much in a given amount of time, she told NPR. But she added that "those systems also need to have checks and balances from humans."
Regulatory pressure and exemptions in Europe
Meta has been subject to a Federal Trade Commission (FTC) agreement since 2012 that requires it to conduct privacy reviews for product updates. That oversight followed a settlement over how the company handled user data.
Meta said that, in line with these commitments, it has "invested over $8 billion in our privacy program" and continues to refine its procedures. A company spokesperson told TechCrunch that as risks evolve and the program matures, Meta improves its processes to better identify risks, streamline decision-making, and improve people's experiences.
Notably, users in the European Union may not experience the same level of automation. According to internal communications, Meta's headquarters in Ireland will remain in charge of decisions regarding EU-related products, in part because of the Digital Services Act, which imposes stricter content and data protection rules.
The move toward automation is in line with other recent policy changes at Meta, such as the loosening of its hate speech policies and the phase-out of its third-party fact-checking program.
Meta noted in its Q1 2025 Integrity Report that its AI systems are now outperforming humans in some policy areas. The company wrote that this frees up its reviewers to concentrate on content that is more likely to violate its policies.