LAWS: The Gatling Gun of the 21st Century?

Lethal Autonomous Weapon Systems (“LAWS”) are advanced weapons that could foreseeably be developed in the near future with full autonomy, meaning little or no human involvement in selecting targets and launching attacks. Some modern weapons, including South Korea’s Samsung SGR-A1 automated sentry gun and Sweden’s Bonus System, already exhibit such features, albeit at a somewhat rudimentary stage.

LAWS would raise many questions under international human rights law, international humanitarian law (“IHL”), and criminal law, among other fields. This piece, however, is limited to a brief analysis of LAWS in the context of IHL. More specifically, it addresses two critical questions: first, could LAWS comply with IHL at all? And second, how might LAWS change the impact of armed conflicts?

Could LAWS Comply with IHL?

The first contentious requirement for LAWS would be ensuring the capacity to distinguish civilians from combatants in attacks. This requirement is the “cornerstone” of IHL, often described as cardinal and inviolable. For unquestionable targets or non-targets, such as persons hors de combat, developers could programme basic binary distinctions between the characteristics of the two classes. In case of doubt, LAWS could be programmed not to attack. The major challenge would lie in execution, as the sensory capabilities of LAWS would have to be tremendous, both in processing and in identifying stimuli.
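To make the “doubt defaults to no attack” logic concrete, here is a minimal sketch in Python. It assumes a hypothetical upstream classifier that outputs a status label and a confidence score; the names (Status, may_engage) and the threshold value are illustrative assumptions, not features of any existing system.

```python
from enum import Enum

class Status(Enum):
    COMBATANT = "combatant"
    CIVILIAN = "civilian"
    HORS_DE_COMBAT = "hors_de_combat"
    UNKNOWN = "unknown"

def may_engage(status: Status, confidence: float, threshold: float = 0.99) -> bool:
    """Conservative engagement rule: attack only on a high-confidence
    combatant classification; every doubtful case defaults to no attack."""
    if status is not Status.COMBATANT:
        return False  # civilians and persons hors de combat are never targets
    return confidence >= threshold  # in case of doubt, refrain

# A doubtful or non-combatant classification always yields "do not engage".
assert may_engage(Status.COMBATANT, confidence=0.60) is False
assert may_engage(Status.CIVILIAN, confidence=0.999) is False
assert may_engage(Status.COMBATANT, confidence=0.995) is True
```

The design choice worth noting is that refusal to attack is the default branch: every path other than a high-confidence combatant classification returns False.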

Of course, for more complex situations, such as discerning when a civilian is directly participating in hostilities, decisions are more contextual and qualitative. In the case of learning algorithms, however, LAWS might theoretically adapt to such subjectivities with experience. Such learning must have its limits: if combatants begin feigning civilian traits, the algorithm must not deviate from its programming and begin targeting civilians, as illustrated in the sketch below. Another vital consideration would be safeguards against algorithmic bias, as the social and moral prejudices of programmers can easily translate into targeting algorithms. While these are undoubtedly difficult challenges, depending on technological advancement, LAWS may potentially be able to comply with the requirement of distinction, if programmed to do so.
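One way to keep such learning within limits, sketched below under the same hypothetical assumptions (reusing Status and may_engage from the sketch above), is to place the fixed rule outside the learnable component, so that experience can only make the system more restrictive, never less:

```python
def constrained_engage(learned_vote: bool, status: Status, confidence: float) -> bool:
    """The learned component may veto an attack, but it can never authorise
    one that the fixed rule forbids: the constraint itself is not trainable."""
    return may_engage(status, confidence) and learned_vote

# Even if a mis-trained model "votes" to attack someone classified as a
# civilian, the fixed rule prevails and the system does not engage.
assert constrained_engage(learned_vote=True, status=Status.CIVILIAN, confidence=1.0) is False
```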

What about the requirement to verify the military nature of a target? Would human verification of all targets determined by LAWS be required? Possibly not, since only those precautions must be taken which are feasible (practically possible). It may be infeasible to expect humans to second-guess these decisions during armed conflicts, since LAWS could make targeting decisions significantly faster and, potentially, with greater accuracy than human operators.

Some authors argue that the use of LAWS will in any event be at odds with the Martens Clause, primarily on the belief that mercy, compassion, and kindness cannot be programmed algorithmically. LAWS proponents respond that such systems need not be kind; they need only comply with the relevant provisions of IHL, which nowhere obligate kindness. Even accepting the contention, roboticists like Arkin believe LAWS could prove more compassionate than humans. This speculation has its merits, since machines cannot feel fear, anger, the need for self-preservation, the desire for revenge, or similar innate human impulses. Notably, while some instances of opinio juris suggest that “meaningful human control” or “appropriate human judgment” may be required in enforcing IHL obligations, there are no agreed definitions of such standards.

Needless to say, the preceding discussion does not encapsulate the discourse on LAWS in its entirety; there is understandably more nuance to these requirements, and indeed several other relevant rules. It does, however, provide enough grounding to address the more vital question: the possible impact of LAWS on armed conflicts, if they are ever invented.

The Impact of LAWS on Armed Conflicts

LAWS proponents make two major claims about their on-ground impact: (i) military casualties will decrease in armed conflicts, as human troops need no longer be deployed as heavily; and (ii) non-combatant casualties will decrease, since LAWS will comply better with IHL (assuming they are programmed to).

The first claim seems probable. Indeed, in an armed conflict with no need for human troops, military casualties would likely decrease. That incentive, however, comes at a high price: the risk that LAWS will make conducting hostilities unimaginably more cost-effective and practical for States, creating scope for military operations that were hitherto impossible. The world may witness a significant increase in acts of bravado with precarious political and legal implications, such as the USA’s recent strike by drone (a semi-autonomous weapon) that killed Iranian General Qassim Soleimani while he was on Iraqi territory.

It is because of this risk of increased military operations and armed conflicts that the second claim falters. No doubt, as seen above, LAWS might ensure fewer civilian casualties within any given armed conflict through better IHL compliance. Yet, given the sheer increase in the frequency of armed conflicts in a world where hostilities are cheaper and more convenient to conduct, overall casualties could nonetheless multiply.

Like certain LAWS proponents, Richard Gatling was horrified by military casualties in the American Civil War, and he invented what came to be known as the Gatling Gun, later models of which could fire up to 3,000 rounds per minute. He believed, with good intentions, that such a machine would significantly reduce military casualties by making large troop deployments redundant, or otherwise, through its sheer destructive scale, act as a deterrent to war. Ironically, the Gatling Gun facilitated countless more casualties in several subsequent wars, precisely because it made warring more convenient. History thus teaches us that destructive machines can defeat the well-intentioned aim of mitigating the effects of armed conflicts, cementing the grim conjectures about LAWS.

Such ease in conducting hostilities is why organizations including Human Rights Watch have been adamantly demanding an international ban on the development of LAWS. While some authors are sceptical of this conjecture, the increasing worldwide frequency and impact of drone strikes, driven by similar conveniences, lend it weight.

The Future for LAWS

Since LAWS may be lawful within existing IHL frameworks, their deployment is presently unrestricted; under the Lotus principle, restrictions upon the independence of States cannot be presumed. Considering the risks posed by LAWS, however, there is an urgent need for States, as well as the ILC, to convene regularly and pre-emptively deliberate the contents of possible new conventions on the development and use of LAWS. In the end, as discussed, humans will inevitably be involved in developing the algorithms of LAWS, which necessarily carries the danger of social and moral biases in their targeting. Indeed, biases in existing AI technology, such as racial disparities in facial recognition systems, exacerbate this fear. It is undeniable that humans will remain involved in the processes that decide whether, why, and how such LAWS are deployed. It may then be wiser to pause and take stock before entrusting machines with such destructive choices.


Abhijeet Shrivastava is a B.A., LL.B. (Hons.) student at Jindal Global Law School.

Image: CNS photo/Annegret Hilse, Reuters
