Analysis – The emergence of autonomous 'killer robots': how AI has already become an essential weapon of war

This gain in autonomy granted to machines was, in part, a response to a wartime problem. With the digitization of weapons (remotely piloted drones, for example), one strategy proves effective at neutralizing them: cutting the communication link between the weapon and the soldier. As long as a stable, permanent connection is required to control a machine remotely, an adversary will do everything possible to disrupt it. Faced with this, the Ukrainian military has already admitted to using fully autonomous drones. Courrier International reported that these drones can carry a 3 kg payload and autonomously identify and attack 64 types of “military objects” at a range of 12 km.

The ability to identify targets is at the heart of a system currently used by the Israeli military. The 'Lavender' system was designed to mark tens of thousands of people as targets to be tracked and killed. As the investigation by '+972 Magazine' and 'Local Call' explains, the Israelis initially collected information on a few hundred proven Hamas fighters, then asked an AI to identify similar profiles from their data. These Palestinians (a figure of 37,000 is given) thus became potential targets. To establish this list, the AI draws on WhatsApp messages, place of residence, contacts and possible links with other militants.
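The investigation describes a generic pattern: take a small seed set of confirmed profiles and have a model surface “similar” people from bulk metadata. As a purely hypothetical illustration of that general technique (and nothing more: Lavender's actual design, data and features are not public, and nothing below reflects them), a minimal similarity-based matcher in Python might look like this:

```python
# A deliberately simplified, hypothetical sketch of similarity-based
# profile matching in general. It assumes each person has already been
# encoded as a numeric feature vector derived from metadata (contacts,
# locations, group memberships); the encoding itself is the hard part.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_similarity(known_profiles: np.ndarray,
                       candidates: np.ndarray,
                       top_k: int = 5) -> list[tuple[int, float]]:
    """Score candidates against the centroid of the known profiles
    and return the top_k (candidate index, score) pairs."""
    centroid = known_profiles.mean(axis=0)
    scores = [cosine_similarity(centroid, c) for c in candidates]
    top = np.argsort(scores)[::-1][:top_k]
    return [(int(i), scores[i]) for i in top]

# Entirely synthetic data: 3 "known" vectors, 100 candidates, 8 features.
rng = np.random.default_rng(42)
known = rng.normal(size=(3, 8))
pool = rng.normal(size=(100, 8))
print(rank_by_similarity(known, pool))
```

What this sketch makes visible is that everything rests on the feature encoding and the cut-off chosen: statistical resemblance to a small seed set inevitably sweeps in people who merely look similar, which is precisely the error margin discussed further below.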

More efficient technology

Alain de Neve, an expert in defence technologies, explains: “What is special about this technology is that its price keeps coming down.” Indeed, like many elements of everyday life, these technologies become less and less expensive as they are better mastered and mass-produced. “The first drones were DIY affairs, but as expertise grew, technology was shared, and they are now far more sophisticated.”

Today, killer drones can be made with a 3D printer at ridiculously low cost. Building lethal devices from less sophisticated materials also makes them harder to detect with conventional countermeasures, which represents a further advantage.

The contribution of AI to warfare is not new. What we see in today's wars is the result of a process that began in the 1950s. “AI was born with nuclear deterrence. It emerged when a solution was needed to overcome the enemy's military capabilities without using the atomic bomb,” notes Alain de Neve.


What are the limits?

Like anything new, the arrival of this technology on the battlefield raises a mountain of questions. One of them: do we have the right to use it as we please? “Those who raise the alarm about AI and compare it to nuclear weapons are right in a way. The comparison is useful because it reminds us of the moment in history we are living through,” analyzes the expert from the ERM (École Royale Militaire, Belgium's Royal Military Academy).

It took the brink of the Cuban Missile Crisis in 1962 for the United States and the Soviet Union to find a balance and set up a framework for nuclear weapons. Will a new major crisis be needed before the use of AI in warfare is limited? Only the future will tell, but Alain de Neve tempers the comparison, arguing that the two situations are not identical.

“AI can also serve purely defensive, medical and other purposes, not only offensive and destructive ones.”

Are there regulations?

For Paul Warnott, an expert in disarmament law and in artificial intelligence in security matters, “it is weapons that must comply with international humanitarian law, and not the other way around.” He recalls that, in its advisory opinion on nuclear weapons, the International Court of Justice confirmed that the main principles of the law of war and related provisions apply even to new technologies.

“Moreover, all states are required to review any new weapon, any new means or any new method of warfare that they deploy, purchase or develop. This is a legal obligation. Any new weapon must therefore respect the law; it is not for the law to adapt to these new weapons,” the expert concludes.

Alain de Neve has a slightly different take. For him, “the law is lagging a bit behind, and we need to catch up.” “It is a race to catch a moving train, and the train keeps speeding up,” he says, alluding to the conflict in Ukraine.


As of November 2023, 31 countries had endorsed a declaration of restraint in military applications of AI. UN Secretary-General Antonio Guterres and the president of the International Committee of the Red Cross, Mirjana Spoljaric Egger, have called on states to reach a binding agreement by 2026.

Risk of abuse?

The advent of autonomous machines that decide on actions makes war a little more complicated still. Who is to blame if the AI makes the wrong decision? “An unwillingness to accept responsibility could be the problem. Some may hide behind the AI to avoid accountability,” analyzes Marie-des-Neiges Rufo, a doctor of philosophy who teaches in the master's programme in cybersecurity at UNamur.

A robot's actions depend on how, and to what end, it is programmed. Even with the best settings, the decision to kill still rests on algorithms. “The problem arises when we reduce decision-making to a simple calculation,” says the philosopher, who is also an associate researcher at France's prestigious Saint-Cyr Coëtquidan military academy.

“For the use to be just, human responsibility must be engaged. This is a debate as old as the crossbow: you must know what you are doing.” All the more so since “modern wars cause more and more civilian casualties.”

To illustrate what “reducing the decision to a calculation” can mean in practice, take the Lavender system used by the Israeli military. It is said to be about 90% reliable, which implies that around 10% of the 37,000 Palestinians it flagged had no connection with Hamas. Yet they could find themselves with a target on their back.
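As a back-of-the-envelope check of that figure (a rough illustration only, assuming, as the article implies, that “90% reliable” translates directly into 10% of flagged people being wrongly flagged; reliability metrics can be defined in other ways):

```python
# Rough arithmetic behind the article's claim, under the simplifying
# assumption that "90% reliable" means 10% of those flagged were
# flagged in error.
flagged_total = 37_000
error_rate = 1 - 0.90
wrongly_flagged = round(flagged_total * error_rate)
print(wrongly_flagged)  # -> 3700 people with no connection to Hamas
```

Under that reading, the error margin corresponds to roughly 3,700 people.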

The end of human combatants?

Such a scenario is obviously unlikely to materialize any time soon. But as these technologies advance, there is less and less need for combatants to expose themselves in order to strike the enemy.

“With the arrival of the atomic bomb, we thought war was over, but no. We were told there would be no more need to send soldiers into the field, since ever more efficient missiles kept coming. The same can be said of AI,” Alain de Neve recalls. “But I do not believe the human factor will disappear. Even if there is a risk that it will shrink further and further, it will never reach zero.” If only for the maintenance of all these systems, human intervention remains essential.


“Ultimately, we must ask who gets to decide how far the human element disappears in the face of technological development. That is for politicians to decide,” the ERM researcher says. “The military always takes a skeptical view and considers the human element very important, while politicians, generally speaking, take a more adventurous view,” he believes.

While AI's role in conflicts should not mean the end of fighters in the field, it could mean the emergence of new tensions, Alain de Neve believes. “AI will disrupt many of the foundations on which policy and deterrence rest.”

To back up his words, the ERM expert describes the global situation as follows: “The US and Russia have roughly the same nuclear arsenal. If one country decided to launch its missiles at the other, it would face retaliation, mainly from submarines capable of launching nuclear missiles. Surface missile sites can be located, but submarines cannot. So if a country developed the technology to locate these submarines and know their positions, it would completely upset the established balance. That is one scenario among others.”

For now, although Belgium has not moved to acquire intelligent weapons as the Ukrainians have, AI is beginning to be taught in courses at the Royal Military Academy, which trains Belgium's officers. “There is no course devoted specifically to it yet, but it will eventually become necessary,” predicts Alain de Neve.

To conclude: “The use of AI should not be demonized,” says Marie-des-Neiges Rufo. “It can be effective in removing doubt about targets and in ensuring security.”
