
The Horrifying Truth Behind Israel’s ‘Assassination’ AI

TERMINATOR

These models are more common in our militaries than you think—and they're creating disastrous results.

An aerial view shows the destruction caused by Israeli strikes in Wadi Gaza, in the central Gaza Strip. Mahmud Hams / Getty

Israel’s military has reportedly deployed artificial intelligence to help it pick targets for air strikes in Gaza.

There are reasons to worry. The biggest problem with military AI, experts told The Daily Beast, is that it’s created and trained by human beings. And human beings make mistakes. An AI might just make the same mistakes, but faster and on a greater scale.

So if a human specialist, scouring drone or satellite imagery for evidence of a military target in a city teeming with civilians, can screw up and pick the wrong building as the target for an incoming air raid, a software algorithm can make the same error, over and over.


The result, in the worst case, is a lot of dead civilians. More civilians than might die if actual human beings were doing all the analysis ahead of an air strike. “There are serious risks,” Carlo Kopp, an analyst at the Air Power Australia think-tank who has decades of experience studying surveillance and targeting problems, told The Daily Beast.


+972 Magazine was among the first to report on the Israel Defense Forces’ targeting AI. An algorithm called “Habsora”—Hebrew for “gospel”—is one reason “for the large number of targets, and the extensive harm to civilian life in Gaza,” resulting from Israel’s military offensive, according to the magazine.

The IDF launched its attacks on Gaza several weeks after terrorists from Hamas, the authoritarian regime in Gaza, infiltrated southern Israel and murdered or abducted more than a thousand people, including children, on Oct. 7.

More than 15,500 Gazans, as well as 75 Israeli troops, have died in back-and-forth attacks since then. The Gazan deaths—most of them civilians—are largely the result of Israel’s intensive bombardment of the territory, including Gaza City, one of the most densely populated places on Earth: 650,000 people packed onto 18 square miles.

Habsora has helped turn the IDF’s aerial targeting into a “mass assassination factory,” one unnamed former intelligence officer told +972 Magazine. But Habsora is neither new nor unique to the Israeli military. The grim truth is that it’s a tool many of the world’s leading militaries are scrambling to develop and deploy in order to solve a centuries-old problem: where to aim their heaviest weapons.

The Israeli military, like all big militaries, constantly surveys its enemies and rivals from space, from the air and even from the ground by way of undercover operatives. The amount of data available to targeting officials is immense, and it is growing exponentially as satellites, drones and other sensors become more sophisticated and numerous.

The targeter’s job is an unenviable one. Scanning videos and photos as well as non-visual intel—charts of radio transmissions, maps of heat sources, even patterns of vehicle and foot traffic—targeters have to propose which buildings to blow up. Commanders usually make the final decision, often with input from military lawyers.

“Targets are developed using a traditionally time-consuming collection, identification and analysis process,” Kopp said. “They are then attacked, and the damage assessed, to determine whether re-attack is warranted.”

Much of the targeter’s work is strictly analytical, so it would seem to make sense to automate it. Instead of inspecting photos of a thousand individual rooftops for evidence of military-grade radio gear, targeters might task an AI with the inspection.

In principle, an algorithm can spot telltale details faster and more accurately. “Big data, machine-learning and AI tech are now seen as tools for both accelerating and scaling [surveillance] product analysis, as they have proven very effective at sifting vast volumes of digital data, orders of magnitude faster than humans can,” Kopp said.

The problem, for any military targeting specialist but especially for Israeli specialists supporting a sometimes chaotic offensive in a very crowded territory, is that people code AIs; people train these AIs with datasets that people create; and people choose when, where and how to deploy an AI.

Where targeting is highly automated, human beings usually make the final decision to pull the trigger. But as wars move faster, the humans sometimes give up even that cursory control. “There is still a lot of discrepancy between claims that humans will be in the loop, and the emerging autonomy driven by … fast-paced innovation,” Samuel Bendett, a senior non-resident associate with the Center for Strategic and International Studies in Washington, D.C., told The Daily Beast.

People are fallible, which is why so many militaries want to reinforce them with algorithms. But if the algorithms have human frailty in their proverbial DNA, are the algorithms any better? Sure, they might help human beings work faster as they decide who lives and dies. But can they help the human beings work better, more justly, at this awful task? “An AI model trained from human-generated datasets will usually make the same mistakes as human analysts do,” Kopp said.

In America’s own counterterrorism and counterinsurgency campaigns, there have been countless examples of targeters getting it wrong, often with significant assistance from automated technology. Mistaking farmers for infiltrating militants. Assuming that someone parked by the side of the road is burying a bomb rather than replacing a flat tire. Observing a wedding and concluding it’s a confab for terrorists.

You can count on AI to make many of the same mistakes, but at the speed of electrons—and at a volume bound only by the installed processing power of some computer. The purported AI-powered “assassination factory” that Israeli forces are running in Gaza isn’t the first mass bloodshed that machines have perpetrated on behalf of their human masters.

And it won’t be the last.
