The use of the “Lavender” artificial intelligence system in the Israeli offensive on Gaza: disturbing revelations from a new investigation
A recent investigation published by the Israeli magazine +972 has revealed alarming details about the use of artificial intelligence in Israel’s current military offensive in the Gaza Strip. According to the report, the Israeli army has been using an artificial intelligence platform known as “Lavender”, which has played a central role in the bombing campaign carried out against the Palestinian population.
The information comes from six Israeli intelligence officers who worked directly with artificial intelligence systems during the operation. According to their statements, the Lavender programme has been used as an automated tool for identifying human targets, mostly suspected members of Hamas and other armed groups, who were then subjected to air strikes with no significant human intervention in the initial selection phase.
Automation of conflict
The report describes Lavender as a system that cross-references massive amounts of data on communications, movements, affiliations, social networks and other indicators in order to assign a probability of militancy to each individual. The system reportedly generated up to 37,000 names of Palestinians as potential targets, along with their homes, which were subsequently attacked by the Israeli air force.
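In the abstract, the kind of mechanism the report describes (combining many indicators into a single probability score) resembles an ordinary statistical classifier. The sketch below is purely illustrative: the feature names, weights and logistic form are invented for this example and bear no relation to how any real military system works.

```python
import math

# Hypothetical weighted-feature scorer: each indicator contributes a weight,
# and a logistic function maps the sum to a probability in (0, 1).
# All names and numbers here are invented for illustration only.
FEATURE_WEIGHTS = {
    "indicator_a": 1.5,
    "indicator_b": 0.8,
    "indicator_c": 2.0,
}
BIAS = -3.0  # negative bias: most individuals score low by default

def score(features: dict[str, float]) -> float:
    """Combine feature values into a single probability via a logistic model."""
    z = BIAS + sum(FEATURE_WEIGHTS.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Two indicators present, one absent:
print(round(score({"indicator_a": 1.0, "indicator_b": 1.0, "indicator_c": 0.0}), 3))
```

The point of the sketch is that such a score is only as good as the weights and the data behind it; the output is a probability, not a verified fact, which is why the degree of human review downstream matters so much.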
The report’s most disturbing finding is the military’s heavy reliance on this technology, which implies that lethal decisions were made on the basis of algorithmic suggestions, with minimal or superficial human verification. According to the sources consulted, the officers’ review was in many cases a mere formality lasting seconds, relying entirely on the accuracy of the system.
Ethical and humanitarian implications
This type of use of artificial intelligence raises serious ethical and legal concerns. The idea of delegating life-and-death decisions to a machine, however sophisticated it may be, raises questions about the responsibility, proportionality, and accuracy of attacks. International human rights organizations have warned that the automation of military targets can lead to violations of international humanitarian law, especially if it does not properly distinguish between combatants and civilians.
In addition, the sheer number of suggested targets, tens of thousands, suggests that the threshold of suspicion applied by Lavender may have been set too low, which could explain the high number of civilian casualties reported in this offensive, including women, children and the elderly.
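The statistical logic behind this concern can be made concrete with a base-rate calculation. The numbers below are hypothetical, chosen only to show the general effect: when a classifier is applied to a very large population, even a small false-positive rate, made worse by a lax threshold, can flood the target list with people who were never combatants.

```python
# Hypothetical populations for illustration only; not figures from the report.
population = 2_000_000   # people screened by the system
true_members = 30_000    # actual members of armed groups among them

def flagged(tpr: float, fpr: float) -> tuple[int, int]:
    """Return (true positives, false positives) for given detection rates.

    tpr: fraction of actual members the classifier flags.
    fpr: fraction of everyone else it flags by mistake.
    """
    tp = int(true_members * tpr)
    fp = int((population - true_members) * fpr)
    return tp, fp

# Strict threshold: fewer flags, mostly correct.
print(flagged(tpr=0.6, fpr=0.001))  # (18000, 1970)
# Lax threshold: more members caught, but false positives explode.
print(flagged(tpr=0.9, fpr=0.02))   # (27000, 39400)
```

Under these invented numbers, lowering the threshold catches 9,000 more actual members but adds over 37,000 wrongly flagged people, more than the entire true target set. That asymmetry is why the scale of the list cited in the report is itself evidence about where the threshold was set.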
A war of the future… today
The use of technologies such as Lavender reflects a growing trend towards the militarization of artificial intelligence, something that technology, ethics and defense experts have been warning about for years. While its advocates argue that these systems can increase efficiency and reduce risk for soldiers, critics warn that dehumanizing conflict can lead to massive errors and diminished accountability.
So far, neither the Israeli army nor the government has officially responded to the allegations detailed by +972 magazine. However, the revelations have prompted renewed calls from various international sectors to examine the use of autonomous technologies in the context of war, and to demand transparency, regulation and strict oversight of their deployment.