Google recently expanded its contract with the U.S. Department of Defense (DoD) to provide its Gemini AI for classified operations, or for “any lawful purpose.” At the same time, the company backed out of a $100 million Pentagon competition to build autonomous voice-controlled drone swarms.
Internally, Google faces dissatisfaction over its decision to provide Gemini for classified projects; the company has responded by telling staff it is "proud" of the Pentagon AI contract. This raises questions about how Google's ethics and policies are changing, and whether the company is bending them to capture a highly profitable, if ethically fraught, share of government work.
In the eyes of some employees, Google is drifting ever further from its old motto, "Don't Be Evil." This isn't the first time the company has shifted its stance. Google's AI principles once stated that the company would not use its AI tools where they were "likely to cause harm" and would not "design or deploy" AI for surveillance or weapons.
Google officially attributed its withdrawal from the Pentagon drone swarm competition to a lack of resources. Bloomberg, however, reports that the real reason was an internal ethics review, which suggests the company's ethics review process is still functioning and retains some influence.
On the other hand, by making Gemini available on classified networks, Google lets the Pentagon use it for "any lawful purpose." That clause may offer little real protection, because laws change. Before the turn of the century, telecommunications providers were under no general obligation to build wiretapping capabilities into their networks for law enforcement; CALEA and later the Patriot Act changed that. Similarly, the CLOUD Act rewrote rules that had once prevented federal law enforcement from seizing data stored on foreign servers.
This effectively gives the Pentagon a future-proof loophole: if a use that is currently prohibited later becomes legal, the contract already permits it. The "any lawful purpose" clause therefore offers little protection against AI being used for autonomous weapons or mass domestic surveillance, the very uses Anthropic had argued against. It is further weakened by a clause in the Google-DoD contract stating that the company cannot "control or veto lawful government operational decision-making." OpenAI reportedly encountered a similar clause in its Pentagon deal.
Together, these terms give the Pentagon nearly free rein over how it uses Gemini in classified projects. Mass surveillance has existed for decades; AI's role is simply to make it smarter, more targeted, and more efficient.
Government and military contracting is appealing for the large sums of money involved. Shortly after Anthropic stepped back from certain government uses, OpenAI secured an expanded contract to fill that exact role. Microsoft and Amazon, meanwhile, have already won numerous contracts for cloud, AI, and cybersecurity tools, and Google appears to be playing catch-up.
Google's employees have consistently challenged the ethics of working with the government. In 2018, employee protests led Google to walk away from Project Maven, which used Google technology to analyze drone footage. Those protests also produced Google's "do no harm" AI principles, which have since quietly disappeared. The company faced similar dissent when employees opposed its potential involvement in providing technology to Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).
As is now typical, Google’s employees are again forming digital picket lines. Over 600 employees signed a letter to CEO Sundar Pichai, asking him to reject any use of Google’s AI technology for military purposes. In response, Kent Walker, Google’s president of global affairs, wrote in an internal memo on Tuesday, seen by The Information: “We have proudly worked with defense departments since Google’s earliest days, and we continue to believe that it’s important to support national security in a thoughtful and responsible way.”