Google Updates AI Principles Amid Geopolitical Pressures
Google has revised its AI principles, removing commitments against using AI for harmful applications like weapons and surveillance, amid growing competition in the global AI landscape.
Sundar Pichai, CEO of Alphabet Inc., during Stanford's 2024 Business, Government, and Society forum in Stanford, California, April 3, 2024.
Google has removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company's updated "AI Principles."
A prior version of the principles stated that Google would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," or "technologies that gather or use information for surveillance violating internationally accepted norms."
Those objectives are no longer displayed on its AI Principles website.
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," reads a Tuesday blog post co-written by Demis Hassabis, CEO of Google DeepMind. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
The updated principles reflect Google's growing ambitions to offer its AI technology and services to more users and clients, including governments. The change also aligns with increasing rhetoric from Silicon Valley leaders about a winner-take-all AI race between the U.S. and China, with Palantir CTO Shyam Sankar saying Monday that "it's going to be a whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win."
The previous version of the company's AI principles stated Google would "take into account a broad range of social and economic factors." The new AI principles state Google will "proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides."
In its Tuesday blog post, Google said it will "stay consistent with widely accepted principles of international law and human rights -- always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."
The new AI principles were first reported by The Washington Post on Tuesday, ahead of Google's fourth-quarter earnings. The company's results missed Wall Street's revenue expectations and drove shares down as much as 9% in after-hours trading.