Algorithmic Warfare: The Dystopia of AI Weapons and The Legal Vacuum in International Humanitarian Law: Part II

Part I of this piece examined the dangers of Lethal Autonomous Weapon Systems (LAWS), highlighting the power asymmetry in their development and the insufficiency of the current International Humanitarian Law (IHL) framework. Building on that analysis, this part turns to the need for a robust normative framework, exploring key legal principles, accountability mechanisms, and enforcement strategies to regulate autonomous warfare effectively.

A Normative Framework for the Use of LAWS in Warfare

There is a need to anticipate and regulate the use of LAWS, given the speed at which such systems are being tested and the ease with which AI tools are already available for use in warfare. A comprehensive framework is therefore necessary, with particular emphasis on two crucial elements: an allowance for the use of certain autonomous weapons under “sufficient” human control, and the status of economically weaker nations in IHL with respect to LAWS.

1. The Sufficiency Test

While an argument for incorporating deterrence theory into the regulation of LAWS might be considered, just as deterrence has helped prevent the use of nuclear weapons, autonomous weapons would not benefit from such a framework: their attacks could be so precise as to be untraceable to any country. It therefore becomes imperative to have a legally binding instrument prohibiting the use of LAWS that are used or designed to target civilians directly, or that lack “sufficient” human control.

Currently, IHL does not establish a formal “sufficiency” test for autonomous warfare, particularly in the context of LAWS. There is a pressing need, however, to develop this principle given the increasing reliance on autonomous systems in combat. “Sufficient human control” can be defined by drawing on the proportionality principle, which prohibits attacks where civilian harm would be excessive compared to the anticipated military advantage. Sufficiency of human control would therefore be assessed against three key criteria: (i) the ability to assess the operational context before deploying force; (ii) the requirement of “meaningful” human decision-making at critical junctures, particularly when determining the legitimacy of a target or authorising an attack; and (iii) the availability of technical mechanisms to abort missions and recall the weapon when circumstances change. LAWS that operate without mechanisms enabling human operators to exercise contextual judgment, both during deployment and during engagement, would therefore fail to meet the threshold for “sufficient human control.”

To operationalise the test, it is essential to move beyond mere human oversight towards a more deliberative and accountable model of control. Human moral agency is critical in warfare, especially where human life is at risk. If such agency is exercised in the deployment of the weapon and the three criteria laid out above are satisfied, the use of the weapon would be lawful. The sufficiency framework, however, must not be limited to the testing and deployment stages of a weapon. It should also incorporate accountability mechanisms that ensure human actors remain identifiable, notwithstanding the complex responsibility structures inherent in autonomous warfare.

The sufficiency test could be codified through an additional protocol to the CCW or a standalone treaty, establishing a presumption against autonomous weapons that lack these controls without calling for an outright ban. Such an approach would keep IHL principles applicable and enforceable even as technological paradigms in warfare evolve.

2. Consideration for the Status of Economically Weaker Nations 

A treaty, in the form of a new protocol to the CCW, is urgently required, and it must give sufficient consideration to the hegemonies that such technological asymmetry would create. The treaty should be formulated with due regard for smaller nations, ensuring that power asymmetries and the resulting subjugation are taken into account, particularly in warfare involving AI, where larger nations could use smaller ones as battlegrounds for their political agendas without losing any soldiers of their own.

A binding legal requirement to obtain consent before testing or deploying AI weapons in another nation’s territory is essential to upholding the principle of territorial sovereignty under international law. The protocol governing LAWS must explicitly prohibit the use of autonomous weapons on the territory of non-consenting nations, reinforcing the principle that consent obtained through coercion is invalid. Furthermore, strict liability should be imposed on nations that deploy autonomous weapons, with the burden of proof placed on the deploying state, especially where a technological asymmetry exists between the deploying state and the affected state. Given the complexities of attribution that autonomous weapons create, this shift in the burden of proof is necessary to prevent powerful states from escaping accountability behind the opacity of AI decision-making processes.

As with the law on the use of force generally, the principles enshrined in Article 2(4) and Article 51 of the UN Charter should continue to apply, with a special provision under the CCW ensuring that states lacking the financial capability to operate sophisticated autonomous weapons such as LAWS can protect their territorial sovereignty by invoking the collective self-defence recognised under Article 51 of the Charter. However, in cases involving autonomous weapons, the collective self-defence provision must be framed so that it accords this right only to states that demonstrably lack the capacity to develop such weaponry themselves. This limitation would reduce the risk of powerful nations using the provision to justify military coalitions that disproportionately target states with no means of protecting themselves against sophisticated technology.

Substantive restrictions, while essential to the regulation of LAWS, must be complemented by robust dispute resolution mechanisms to ensure effective enforcement and accountability. Given the well-documented power asymmetries in international law, it is important that arbitration and adjudication panels incorporate equal representation from developed and developing nations, creating room for decision-making that is not confined to Euro-centric perspectives. In addition to impartial adjudication and arbitration, progressive sanctions must be available to hold states accountable for their actions, thereby discouraging the unlawful deployment of autonomous weapons.

While a legal framework is necessary to account for technological progress in weaponry and warfare, the effectiveness of these instruments hinges on collective action and political will. States must be willing to act to regulate the use and deployment of LAWS, and special exemptions must not be granted to powerful states that actively oppose such a framework. As emphasised in the Human Rights Watch report, the majority of states that have addressed the issue of autonomous weapons recognise human decision-making and control as critical to the legality of weapon systems. Autonomous weapons are likely to become an inevitability in modern warfare; demanding an outright ban on them may therefore be impractical, for they can help prevent wide-scale destruction through precision and reduce military casualties for deploying states. Regulatory safeguards are nonetheless imperative to protect the interests of states that lack access to such technologies and hence risk losing their territorial sovereignty to the concentration of military power in the hands of technologically dominant states.

Conclusion

As countries continue to test and develop autonomous weapons, the use of LAWS in warfare is no longer a distant prospect. The current IHL framework contains a legal vacuum in the regulation of such weapons, allowing developed nations to exploit it and severely disadvantaging economically weaker countries. Technological asymmetries would therefore lead to subordination and subjugation. There is an urgent need to develop a framework that addresses the use of LAWS before they are actively deployed in warfare, causing large-scale harm to innocent civilians in war-torn areas. The absence of a binding legal instrument, combined with the approaching reality of AI-driven weapons in warfare, would prove disastrous for vulnerable nations that lack access to such advanced technologies and are already suffering the devastating consequences of war.

Click here to read part I.


Anushka Mahapatra is an undergraduate law student at NLSIU, Bangalore.

