Google's dangerous U-turn on AI for military and surveillance: Parmy Olson

Key topics

  • Google drops pledge against AI for weapons and surveillance.
  • Automated warfare risks escalating conflicts beyond human control.
  • Calls grow for global regulations on military AI development.


By Parmy Olson

Googleā€™s ā€œDonā€™t Be Evilā€ era is well and trulyĀ dead.Ā  Ā ___STEADY_PAYWALL___

Having replaced that motto in 2018 with the softer "Do the right thing," the leadership at parent company Alphabet Inc. has now rolled back one of the firm's most important ethical stances, on the use of its artificial intelligence by the military.

This week, the company deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018. Its "Responsible AI" principles no longer include the promise, and the company's AI chief, Demis Hassabis, published a blog post explaining the change, framing it as inevitable progress rather than any sort of compromise.

"[AI] is becoming as pervasive as mobile phones," Hassabis wrote. It has "evolved rapidly."

Yet the notion that ethical principles must also "evolve" with the market is wrong. Yes, we're living in an increasingly complex geopolitical landscape, as Hassabis describes it, but abandoning a code of ethics for war could yield consequences that spin out of control.

Bring AI to the battlefield and you could get automated systems responding to one another at machine speed, with no time for diplomacy. Warfare could become more lethal, as conflicts escalate before humans have time to intervene. And the idea of "clean" automated combat could push more military leaders toward action, even though AI systems make plenty of mistakes and could cause civilian casualties too.

Automated decision making is the real problem here. Unlike previous technology that made militaries more efficient or powerful, AI systems can fundamentally change who (or what) makes the decision to take human life. 

It's also troubling that Hassabis, of all people, has his name on Google's carefully worded justification. He sang a vastly different tune back in 2018, when the company established its AI principles and he joined more than 2,400 people in AI in signing a pledge not to work on autonomous weapons.

Less than a decade later, that promise hasn't counted for much. William Fitzgerald, a former member of Google's policy team and co-founder of the Worker Agency, a policy and communications firm, says that Google had been under intense pressure for years to pick up military contracts.

He recalled former US Deputy Defense Secretary Patrick Shanahan visiting the Sunnyvale, California, headquarters of Google's cloud business in 2017, while staff at the unit were building out the infrastructure necessary to work on top-secret military projects with the Pentagon. The hope for contracts was strong.

Fitzgerald helped halt that. He co-organized company protests over Project Maven, a deal Google did with the Department of Defense to develop AI for analyzing drone footage, which Googlers feared could lead to automated targeting. Some 4,000 employees signed a petition that stated, "Google should not be in the business of war," and about a dozen resigned in protest. Google eventually relented and didn't renew the contract.

Looking back, Fitzgerald sees that as a blip. "It was an anomaly in Silicon Valley's trajectory," he said.


Since then, for instance, OpenAI has partnered with defense contractor Anduril Industries Inc. and is pitching its products to the US military. (Just last year, OpenAI had banned anyone from using its models for "weapons development.") Anthropic, which bills itself as a safety-first AI lab, also partnered with Palantir Technologies Inc. in November 2024 to sell its AI service Claude to defense contractors.

Google itself has spent years struggling to create proper oversight for its work. It dissolved a controversial ethics board in 2019, then fired two of its most prominent AI ethics directors a year later. The company has strayed so far from its original objectives it can't see them anymore. So too have its Silicon Valley peers, who never should have been left to regulate themselves.

Still, with any luck, Google's U-turn will put greater pressure on government leaders meeting in Paris next week to create legally binding regulations for military AI development, before race dynamics and political pressure make them more difficult to set up.

The rules can be simple. Make it mandatory to have a human overseeing all AI military systems. Ban fully autonomous weapons that can select targets without human approval. And make sure such AI systems can be audited.

One reasonable policy proposal comes from the Future of Life Institute, a think tank once funded by Elon Musk and currently steered by Massachusetts Institute of Technology physicist Max Tegmark. It is calling for a tiered system whereby national authorities treat military AI systems like nuclear facilities, demanding unambiguous evidence of their safety margins.

Governments convening in Paris should also consider establishing an international body to enforce those safety standards, similar to the International Atomic Energy Agency's oversight of nuclear technology. They should be able to impose sanctions on companies (and countries) that violate those standards.

Google's reversal is a warning. Even the strongest corporate values can crumble under the pressure of an ultra-hot market and an administration that you simply don't say "no" to. The don't-be-evil era of self-regulation is over, but there's still a chance to put binding rules in place to stave off AI's darkest risks. And automated warfare is surely one of them.


© 2025 Bloomberg L.P.
