Navigating the Legal and Ethical Maze of AI in Defence
The defence industry has always been at the forefront of technological innovation, from materials to medicines to smartphones and the internet, and the rise of artificial intelligence is no exception. AI has the potential to revolutionise military operations, from enhancing the decision-making process to automating routine tasks and taking over jobs we consider too “dull, dirty, or dangerous” for humans.
But as AI becomes more widely used for defence purposes (albeit slowly), questions of accountability, transparency, and legality are at the forefront of public discussion. In particular, the growing use of autonomous and unmanned vehicles (commonly known as drones) in both military and commercial settings has been accompanied by heated debate about whether legislation can keep pace with such rapid advances in technology, and about the ethical implications of their use in a military setting.
So, what do we mean when we talk about “AI” and “autonomy”? The defence industry describes AI as “a family of general-purpose technologies that may enable machines to perform tasks normally requiring human or biological intelligence”. But there is no legal or statutory definition of AI. Instead, it is often defined by reference to the combination of two key characteristics: first, adaptivity – being “trained” and operating by inferring patterns and connections in data that are not easily discernible to humans; and second, autonomy – making decisions without the express intent or ongoing control of a human.
Let’s expand on the latter for a moment.
The notion that a machine could make decisions without human instruction is understandably concerning. Throw in the controversial application of autonomous weapon systems (AWS), and the mind tends to jump straight to a “killer robot” scenario.
The UK government had previously stated that it did not possess fully autonomous weapons and had no plans to develop them. But during a parliamentary debate in the House of Lords in 2021, it became apparent that the UK’s new posture does not rule out the possibility that “the UK may consider it appropriate in certain contexts to deploy a system that can use lethal force without a human in the loop.”
While the UK adheres to the principles of international humanitarian law, which prohibit indiscriminate or unnecessary use of force, there is currently no comprehensive legislation that specifically governs the use of AI and autonomous systems in the UK. There is policy guidance, such as the UK government’s 2023 white paper “A Pro-Innovation Approach to AI Regulation”, and a handful of more general regulatory regimes, such as the UK General Data Protection Regulation (UK GDPR), which addresses concerns around data privacy and security that arise from the use of AI systems. But essentially, issues are dealt with on a case-by-case and sector-by-sector basis. This lack of clarity presents several key challenges for legal professionals:
- Safety and security
- Privacy
- Bias and discrimination
- Responsibility and accountability.
Setting aside the most obvious concerns over safety, security, and privacy, arguably the most significant recurring concern raised by my clients when discussing AI is who should be held accountable if and when something goes wrong, and who is responsible for the outcome: the developer, the owner of the technology, or the end user?
Unfortunately, it is not that simple.
The “black box” nature of most modern AI systems, with layers of interconnected nodes designed to process and transform data in a hierarchical manner, prevents anyone from establishing the exact reasoning behind a system’s predictions or decisions. This makes it almost impossible to assign legal responsibility in the event of an accident or error caused by the system.
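To make the “black box” point concrete, here is a minimal, purely illustrative sketch in Python: a toy layered model with made-up weights and a hypothetical predict function, showing how a decision emerges from many chained transformations, none of which corresponds to a human-readable rule.

```python
# A minimal, purely illustrative "black box": a tiny feed-forward network.
# All weights are arbitrary placeholders, not taken from any real system.
import numpy as np

rng = np.random.default_rng(0)

# Three layers of interconnected nodes: each layer transforms the output of
# the previous one, so no single weight maps to a human-readable reason.
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=8)        # hidden layer 2 -> final score

def predict(x):
    h1 = np.tanh(x @ W1)       # first hierarchical transformation
    h2 = np.tanh(h1 @ W2)      # second hierarchical transformation
    return float(h2 @ W3)      # the "decision"

x = np.array([0.2, -1.3, 0.7, 0.05])   # hypothetical input
print(predict(x))
# The score emerges from over a hundred interacting parameters; asking
# "which rule produced this decision?" has no straightforward answer,
# which is precisely what makes assigning legal responsibility so hard.
```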
Multiple studies on algorithmic errors across various industries have shown that even the very best algorithms, operating exactly as designed, can generate internally correct outcomes that nonetheless cause chaos. For example, in 2016, a chatbot designed by Microsoft to mimic a teenager began emitting racist hate speech within hours of its release online. Another system, which Amazon designed to help its recruiting efforts but ultimately didn’t release, inadvertently discriminated against female applicants, and in 2015, users discovered that Google Photos was categorising some African Americans as primates.
These are all deeply concerning examples that present a minefield of challenges for legal professionals, but imagine these mistakes and misidentifications being made in a defence scenario where decisions made by AI systems can have life-or-death consequences. What happens if an autonomous weapon system is unable to distinguish between civilians fleeing a conflict and insurgents making a tactical retreat? Or between hostile forces and children playing with toy guns? Who is then held responsible for those catastrophic errors? You can’t punish a machine, and prosecutors are likely to find it incredibly difficult to establish both actus reus and mens rea where an AWS has been deployed.
One particular phrase I come across a lot when discussing the regulation of unmanned systems is “PlayStation mentality”. This refers to the notion that the geographical and psychological distance between the operator and the target lowers the threshold for launching an attack, as individuals are far removed from the human consequences of their actions. This sense of detachment, and the diluted accountability that comes with it, can lead to irresponsible behaviour and unethical decision-making, potentially causing harm to both individuals and society as a whole.
There is also the risk that delegating tasks or decisions to AI systems could create a “responsibility gap” between the systems that make decisions or recommendations and the human operators responsible for them. In my experience, the military’s usual answer to the question of who is accountable in these scenarios is that responsibility will always fall on the commanding officer. But attributing blame up or down the chain does not resolve this legal and moral complexity, and in the absence of clear legislation, it is difficult to hold organisations responsible for the actions of their AI systems. Crimes may go unpunished, and if lawmakers cannot agree on some form of universal legislation, we may eventually find that the entire structure of the laws of war, along with their deterrent value, is significantly weakened.
Intrinsically linked to the accountability/responsibility issue above are bias and discrimination.
Although the media might have you believe otherwise, we are nowhere near a world where AI thinks and makes decisions entirely of its own accord. The reality is that AI systems are only as good as the data they are trained on, and while machine learning offers the ability to create incredibly powerful AI tools, it is not immune to bad data or human tampering. Whether through flawed or incomplete training data, limitations of the technology, or simply misuse by bad actors, it is all too easy to embed unconscious biases into decision-making, and without legislation addressing how these biases can be mitigated or avoided, there is a risk that AI systems will perpetuate discrimination or unequal treatment.
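As a purely illustrative sketch (with an invented toy dataset and a hypothetical predict helper), the Python below shows how a naive model trained on historically skewed records simply reproduces that skew as if it were policy.

```python
# Purely illustrative: the records below are invented and stand in for any
# flawed or incomplete training set.
from collections import Counter, defaultdict

# Hypothetical historical hiring decisions; the outcomes reflect past human
# bias, not candidate quality.
history = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

# A naive "model" that simply learns the majority outcome for each group.
outcomes = defaultdict(Counter)
for group, outcome in history:
    outcomes[group][outcome] += 1

def predict(group):
    # Reproduces whatever pattern the data contains - including its bias.
    return outcomes[group].most_common(1)[0][0]

print(predict("group_a"))  # hired
print(predict("group_b"))  # rejected - the historical skew is now "policy"
```

The same dynamic, at far greater scale and with far less visibility, is what the current regulatory gap around bias leaves unaddressed.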
To try to alleviate these issues, industry experts have been considering the possibility of an ‘ethics by design’ approach to developing AI. This is a concept I have been exploring in my PhD, which examines whether there is an operational benefit to maintaining the black box, whether it is possible to move legal responsibility up the chain to the developer and, if so, whether there should be rules of development (ROD) as well as rules of engagement (ROE), and what those ROD might be.
Of course, this approach would bring with it yet more new obstacles for the legal profession, though one would hope that, compared with the current difficulty of establishing causal links where AI is involved, these obstacles would be easier to navigate. With proper universal regulations and ethical principles in place for tech companies to follow when developing new systems, the path of causation should be significantly more straightforward, allowing lawyers to establish clear accountability.
So, where do we go from here?
With the rise of widely available generative AI systems such as ChatGPT, the stakes are higher than ever, and more and more high-profile industry experts are calling for international regulation of AI.
The good news is that potential solutions to these issues have been maturing for several years.
In 2021, the European Commission proposed the first ever legal framework on AI, the AI Act, which addresses the risks posed by artificial intelligence. The proposed regulation aims to establish harmonised rules for the development, deployment, and use of AI systems in the European Union, taking a risk-based approach that separates AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Each category is subject to different levels of regulatory scrutiny and compliance requirements.
This new framework led to the 2022 proposal for an “AI Liability Directive”, which aims to address the specific difficulties of legal proof and accountability linked to AI. Although at this stage the directive is no more than a proposal, it offers a glimmer of hope to legal professionals and victims of AI-induced harm by introducing two primary safeguards:
- A presumption of causality. If a victim can show that someone was at fault for failing to comply with a relevant obligation, and that there is a likely causal link with the AI system’s performance, the court can presume that this non-compliance caused the damage.
- Access to relevant evidence. Victims of AI-related damage can ask the court to order disclosure of information about high-risk AI systems, helping to identify the person or persons who may be held liable and potentially providing insight into what went wrong.
While one might argue that this proposed legislation would not solve all our legal issues, it is certainly a step in the right direction.
In addition, there are policy papers such as the UK’s Defence Artificial Intelligence Strategy (2022) and the US Department of Defense’s Responsible Artificial Intelligence Strategy and Implementation Pathway (2022).
These provide important guidance to both tech developers and their military end users on adhering to international law and upholding ethical principles in the development and use of AI technology across defence. They also present opportunities for data scientists, engineers, and manufacturers to adopt ethical design approaches when creating new AI technology. Aligning development with the relevant legal and regulatory frameworks will help ensure that AI and autonomous systems are developed and deployed in defence in a manner that is safe, effective, and consistent with legal and ethical standards.
Obviously, this is not a comprehensive list of all the challenges we face as legal professionals when dealing with artificial intelligence and advancing technology. I am not going to pretend I have all the answers; I will leave that to the experts. But hopefully this thought piece has offered some insight into the battles, barriers, and challenges we face in navigating law and AI in the defence sector.
Yasmin Underwood is a defence consultant at Araby Consulting and a member of the National Association of Licensed Paralegals (NALP).