Google announced on Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language pledging not to pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights."
A statement explaining the changes was added to the top of the 2018 blog post that originally unveiled the principles. "We've made updates to our AI Principles. Visit AI.google for the latest," the statement reads.
In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the "backdrop" to why the company's principles needed to be overhauled.
Google first published the principles in 2018, in response to internal protests over its decision to work on a US military drone program. It declined to renew the government contract and announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
But in Tuesday's announcement, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google's AI initiatives. Instead, the revised document gives Google more room to pursue potentially sensitive use cases. It states that Google will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." It also says Google will work to "mitigate unintended or harmful outcomes."
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company's esteemed AI research lab. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
They added that Google will continue to focus on AI projects that "align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights."
Got a Tip?
Are you a current or former Google employee? We'd like to hear from you. Using a nonwork phone or computer, contact Paresh Dave on Signal/WhatsApp/Telegram at +1-415-565-1302 or [email protected], or Caroline Haskins on Signal at +1-785-813-1084 or at [email protected].