Key topics
- Google drops pledge against AI for weapons and surveillance.
- Automated warfare risks escalating conflicts beyond human control.
- Calls grow for global regulations on military AI development.
By Parmy Olson
Google's "Don't Be Evil" era is well and truly dead.
Having replaced that motto in 2018 with the softer "Do the right thing," the leadership at parent company Alphabet Inc. has now rolled back one of the firm's most important ethical stances, on the use of its artificial intelligence by the military.
This week, the company deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018. Its "Responsible AI" principles no longer include the promise, and the company's AI chief, Demis Hassabis, published a blog post explaining the change, framing it as inevitable progress rather than any sort of compromise.
"[AI] is becoming as pervasive as mobile phones," Hassabis wrote. It has "evolved rapidly."
Yet the notion that ethical principles must also "evolve" with the market is wrong. Yes, we're living in an increasingly complex geopolitical landscape, as Hassabis describes it, but abandoning a code of ethics for war could yield consequences that spin out of control.
Bring AI to the battlefield and you could get automated systems responding to one another at machine speed, with no time for diplomacy. Warfare could become more lethal, as conflicts escalate before humans have time to intervene. And the idea of "clean" automated combat could compel more military leaders toward action, even though AI systems make plenty of mistakes and could create civilian casualties too.
Automated decision making is the real problem here. Unlike previous technology that made militaries more efficient or powerful, AI systems can fundamentally change who (or what) makes the decision to take human life.
It's also troubling that Hassabis, of all people, has his name on Google's carefully worded justification. He sang a vastly different tune back in 2018, when the company established its AI principles, and joined more than 2,400 people in AI to put their names on a pledge not to work on autonomous weapons.
Less than a decade later, that promise hasn't counted for much. William Fitzgerald, a former member of Google's policy team and co-founder of the Worker Agency, a policy and communications firm, says that Google had been under intense pressure for years to pick up military contracts.
He recalled former US Deputy Defense Secretary Patrick Shanahan visiting the Sunnyvale, California, headquarters of Google's cloud business in 2017, while staff at the unit were building out the infrastructure necessary to work on top-secret military projects with the Pentagon. The hope for contracts was strong.
Fitzgerald helped halt that. He co-organized company protests over Project Maven, a deal Google did with the Department of Defense to develop AI for analyzing drone footage, which Googlers feared could lead to automated targeting. Some 4,000 employees signed a petition that stated, "Google should not be in the business of war," and about a dozen resigned in protest. Google eventually relented and didn't renew the contract.
Looking back, Fitzgerald sees that as a blip. "It was an anomaly in Silicon Valley's trajectory," he said.
Since then, for instance, OpenAI has partnered with defense contractor Anduril Industries Inc. and is pitching its products to the US military. (Just last year, OpenAI had banned anyone from using its models for "weapons development.") Anthropic, which bills itself as a safety-first AI lab, also partnered with Palantir Technologies Inc. in November 2024 to sell its AI service Claude to defense contractors.
Google itself has spent years struggling to create proper oversight for its work. It dissolved a controversial ethics board in 2019, then fired two of its most prominent AI ethics directors a year later. The company has strayed so far from its original objectives it can't see them anymore. So too have its Silicon Valley peers, who never should have been left to regulate themselves.
Still, with any luck, Google's U-turn will put greater pressure on government leaders next week to create legally binding regulations for military AI development, before race dynamics and political pressure make them more difficult to set up.
The rules can be simple. Make it mandatory to have a human overseeing all AI military systems. Ban any fully autonomous weapons that can select targets without human approval first. And make sure such AI systems can be audited.
One reasonable policy proposal comes from the Future of Life Institute, a think tank once funded by Elon Musk and currently steered by Massachusetts Institute of Technology physicist Max Tegmark. It is calling for a tiered system whereby national authorities treat military AI systems like nuclear facilities, demanding unambiguous evidence of their safety margins.
Governments convening in Paris should also consider establishing an international body to enforce those safety standards, similar to the International Atomic Energy Agency's oversight of nuclear technology. It should be able to impose sanctions on companies (and countries) that violate those standards.
Google's reversal is a warning. Even the strongest corporate values can crumble under the pressure of an ultra-hot market and an administration that you simply don't say "no" to. The don't-be-evil era of self-regulation is over, but there's still a chance to put binding rules in place to stave off AI's darkest risks. And automated warfare is surely one of them.
© 2025 Bloomberg L.P.