Workers developing artificial intelligence systems at Google DeepMind in the United Kingdom have voted to unionize, citing growing ethical concerns over the company’s collaborations with military organizations, including the U.S. Department of Defense and the Israeli military.
The move comes just days after Google announced a new Pentagon agreement allowing its AI systems to be used for classified work, intensifying internal debate over how the technology is being deployed in conflict-related contexts.
Growing Concerns Over Military Partnerships
Google recently signed a deal with the U.S. Department of Defense under which its advanced AI tools will support classified government operations. The company also continues to provide cloud computing and AI services to the Israel Defense Forces (IDF) through a $1bn (£740m) programme known as Project Nimbus.
For many DeepMind staff, these partnerships have raised difficult questions about the role of their work in military and surveillance applications. One worker, speaking to The Guardian, said colleagues had increasingly struggled with what they saw as indirect involvement in harm abroad.
“Our technology helped the IDF. I want AI to benefit humanity, not to facilitate a genocide.”

A Push for Ethical Safeguards
In a letter set to be delivered to management and shared with The Guardian, DeepMind employees formally requested recognition of the Communication Workers Union and Unite the Union as joint representatives of UK-based staff.
The workers say they are organizing to pressure Google into committing not to develop technology “whose primary purpose is to cause harm or injury to people.” They also want the company to establish an independent ethics oversight body and to give workers the right to refuse, on moral grounds, to contribute to specific projects.

A Pattern of Internal Dissent at Google
This is not the first time Google has faced internal resistance over defense-related contracts. In 2024, the company dismissed around 50 employees who had protested against Project Nimbus, its cloud and AI agreement with the Israeli government signed in 2021.
Earlier, in 2018, widespread employee backlash erupted over Project Maven, a Pentagon initiative that used Google’s AI to analyze drone footage. The controversy ultimately led Google not to renew the contract in 2019 and to publish a set of AI principles that originally included a pledge not to develop AI for weapons applications.

The unionization of DeepMind workers signals a renewed wave of internal scrutiny over how artificial intelligence is deployed in military contexts. As AI becomes increasingly central to both civilian and defense systems, these tensions within major tech companies highlight a growing demand from employees for clearer ethical boundaries and a greater say in how their work is ultimately used.