Written by Gabriel Udoh, PhD candidate in Autonomous Weapons Systems and International Humanitarian Law, Europa-Universität Viadrina, Frankfurt (Oder)
The introduction of autonomy into weapons systems has raised serious concerns in the law and ethics of armed conflict. Beyond the ongoing discussions at the Group of Governmental Experts under the Convention on Certain Conventional Weapons, little concrete action has been taken towards providing international legal and ethical guidelines for the use of such weaponry. International humanitarian law, in its current state, does not sufficiently cover the field.
There are already well-recognized concerns about the ability of autonomous weapons systems (AWS) to adhere to the rules of armed conflict on humanity, distinction, proportionality, necessity, responsibility and others. Questions are also raised as to whether the use of AWS in armed conflicts qualifies as a permitted means and method of warfare. Beyond these legal questions, there are serious ethical concerns as well.
One of the main ethical concerns is the accountability gap associated with autonomous weapons systems. Unlike traditional weapons, which require a human operator to make decisions about their use, autonomous weapons systems can make decisions on their own. When something goes wrong, it is therefore unclear who, if anyone, should be held accountable, and it is difficult to assign blame or responsibility for errors or unintended consequences.
Another ethical concern is the potential for human rights violations. If autonomous weapons systems are not programmed with appropriate ethical guidelines, they may make decisions that violate human rights. For example, they may target civilians, or their targeting decisions may be biased by race or gender, resulting in discrimination.
The introduction of autonomous weapons systems is also likely to deepen asymmetry in warfare by creating a power imbalance between those who possess these systems and those who do not. Asymmetric warfare refers to conflicts in which one side has a significant military advantage over the other, making it difficult for the weaker side to defend itself.
There is also an increased likelihood of accidents or unintended consequences. Unlike human operators, autonomous weapons systems are not capable of making judgment calls based on situational awareness or moral considerations, so they may take actions that harm innocent people. For example, if an autonomous weapons system misidentifies a target, or if a malfunction causes it to act unpredictably, the result could be unintended casualties or the escalation of a conflict, potentially prolonging existing wars or triggering new ones.
Furthermore, the development and deployment of autonomous weapons systems could set off a new arms race as countries seek to maintain or gain a military advantage. The likely result is increased weapons proliferation, as states rush to develop and deploy these systems to keep up with their rivals.
Relatedly, the use of autonomous weapons systems could prolong wars by lowering the human and economic costs of fighting. A country able to deploy such systems can sustain a conflict without suffering the casualties and expenses of traditional warfare, making prolonged engagements easier to justify.
Overall, the ethical concerns associated with the introduction of autonomy into lethal weapons systems highlight the need for careful consideration and regulation. Ethical guidelines and standards must be put in place to prevent human rights violations and to secure accountability and responsibility for the use of these systems.