Google announced on Wednesday that it has revised its artificial intelligence (AI) principles, removing a pledge that had committed the company to avoiding the use of AI in weapons development. The updated principles, published online, no longer include the section that explicitly prohibited developing AI applications that could cause harm, or that could be used for surveillance in ways that violate internationally accepted norms.
The removed section had stated that Google would not pursue AI applications in the field of weapons or those that involve gathering or using information for surveillance in ways that conflict with global standards. In its place, the updated principles now emphasize “responsible development and deployment” of AI, which includes implementing “appropriate human oversight, due diligence, feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”
In a blog post, Google’s senior vice president, James Manyika, and Demis Hassabis, head of Google’s AI lab, DeepMind, explained that the company decided to update its AI principles, which were first published in 2018, to reflect the rapid evolution of AI technology. They noted that AI has transitioned from a niche research topic to a ubiquitous technology, comparable to mobile phones and the internet, with billions of people now using AI in their daily lives.
“AI has become a general-purpose technology and a platform that countless organizations and individuals use to build applications,” they wrote. “It has moved from the lab to a technology that is becoming as pervasive as mobile phones and the internet itself, offering numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”
The blog post also highlighted the growing international collaboration on establishing common AI principles, which Google supports. However, Manyika and Hassabis acknowledged that the global competition for AI leadership is unfolding within an “increasingly complex geopolitical landscape.” They expressed their belief that democracies should lead AI development, guided by core values such as freedom, equality, and respect for human rights. They also called for collaboration among companies, governments, and organizations that share these values to ensure AI is developed in ways that protect people, promote global growth, and support national security.
The decision to revise Google’s AI principles comes amid ongoing debates among AI experts, governments, regulators, tech firms, and academics about how to monitor and regulate the development of powerful emerging technologies. While previous international summits have resulted in non-binding agreements to develop AI “responsibly,” there is still no binding international law governing AI development and deployment.
Google’s past contracts with the U.S. and Israeli militaries, particularly for cloud services, have sparked internal protests from employees, and the company’s latest move has raised concerns among some experts. James Fisher, chief strategy officer at AI firm Qlik, warned that Google’s decision to alter its AI principles underscores the need for stronger international governance, particularly in countries such as the UK.
“Changing or removing responsible AI policies raises concerns about accountability and the ethical boundaries of AI deployment,” Fisher told the PA news agency. “AI governance must evolve as the technology develops, but adherence to certain standards should be non-negotiable. For businesses, this decision signals a complex AI landscape ahead, where ethical considerations will compete with industry rivalry and geopolitical pressures.”
Fisher emphasized the importance of the UK establishing robust and enforceable AI governance frameworks, given its ambition to lead in AI safety and regulation. “The UK’s ability to balance innovation with ethical safeguards could set a global precedent, but it will require collaboration between government, industry, and international partners to ensure AI remains a force for good,” he said.
As AI continues to advance, the ethical and regulatory challenges surrounding its use are likely to intensify, and Google’s decision underscores the tensions between technological progress, corporate responsibility, and global governance.