Within contemporary discourse on the ethics of autonomous weapons systems (AWS), one of the most popular in-principle moral arguments against the use of such technologies is the so-called ‘responsibility gap’ argument popularized by the philosopher Rob Sparrow. The argument can be summarized as follows:
- Waging war requires that we be able to justly hold someone morally responsible for the deaths of enemy combatants that we cause.
- Neither the programmer of an AWS nor its commanding officer could justly be held morally responsible for the deaths of enemy combatants caused by AWS.
- We could not, as a matter of conceptual possibility, hold an AWS itself morally responsible for its actions, including its actions that cause the deaths of enemy combatants.
- There are no other plausible candidates for whom we might hold morally responsible for the deaths of enemy combatants caused by AWS.
- Therefore, there is no one whom we may justly hold responsible for the deaths of enemy combatants caused by AWS.
- Therefore, it is impermissible to wage war through the use of AWS. To do so would be to ‘treat our enemy like vermin, as though they may be exterminated without moral regard at all.’[i]
I have argued elsewhere, and continue to argue, that this argument fails and that AWS pose no new or novel in-principle moral problems, because the perceived responsibility gaps are not, in reality, actually present. Furthermore, I argue that the responsibility gap thesis, and the corresponding responsibility-gap-implying language found in present AWS discourse and DOD policy (“out-of-the-loop”[ii] and “fully autonomous”[iii] systems, for instance), actually serves to obfuscate rather than clarify our moral responsibilities when it comes to the use of such technologies.
My argument is a fairly simple disjunctive syllogism based solely upon conceptual analysis of the terms being used. It can be summarized as follows (a formal sketch is given after the list):
- Either an AWS is an actual autonomous agent and decision-maker or it isn’t.
- If an AWS is an actual autonomous agent and decision-maker, then it is the bearer of responsibility for its own targeting decisions, precisely in virtue of it being an actual autonomous decision-maker.
- If it is an actual autonomous agent and responsibility-bearer, however, then in virtue of being one, it would have rights or at least interests warranting our moral concern.
- If it is not an actual autonomous agent and decision-maker, then it would not be the bearer of responsibility for targeting decisions, and moral responsibility would therefore necessarily fall back on the designers, implementers, and users of the AWS, as it would in any other standard collective action problem. In this case the AWS would not be the bearer of rights or interests warranting our moral concern.
- These two disjuncts together exhaust the logical space where moral responsibility can be located.
- Therefore, there is no responsibility gap with regard to AWS.[iv]
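For readers who prefer to see the structure laid bare, the argument can be sketched in propositional form. This is only an illustrative rendering; the labels A, R_aws, and R_hum are my own shorthand, not terms of art in the AWS literature:

```latex
% Illustrative propositional sketch of the argument above.
% Shorthand (mine, for illustration only):
%   A      : the AWS is an actual autonomous agent and decision-maker
%   R_aws  : the AWS itself bears moral responsibility (and so has rights or
%            at least interests warranting our moral concern)
%   R_hum  : moral responsibility falls back on the human designers,
%            implementers, and users of the AWS
\begin{align*}
(1)\;& A \lor \lnot A                       && \text{the two disjuncts exhaust the logical space}\\
(2)\;& A \rightarrow R_{\mathrm{aws}}        && \text{genuine agency entails bearing responsibility}\\
(3)\;& \lnot A \rightarrow R_{\mathrm{hum}}  && \text{otherwise responsibility reverts to the humans involved}\\
(4)\;& \therefore\; R_{\mathrm{aws}} \lor R_{\mathrm{hum}} && \text{by cases on (1), using (2) and (3)}
\end{align*}
```

On either disjunct, moral responsibility is located somewhere, so no gap remains.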
Given these two arguments, we may now return to the somewhat intentionally provocative title of this piece and ask: does Terminator have rights?
Personally, I think the answer is decidedly ‘no’, though some might argue otherwise.[v] I believe that neither Terminator, nor any A.I. program whatsoever, no matter how sophisticated its raw computational power and no matter its processing speed, will ever be the kind of thing that begets actual, honest-to-goodness, sui generis consciousness. Proponents of the possibility of Strong A.I. may toss out sophisticated-sounding references to things like Moore’s Law, ‘learning machines,’ ‘deep neural nets,’ ‘complexity,’ ‘emergence,’ etc., but upon closer inspection such appeals get us no closer to explaining the mystery of consciousness than mere appeals to ‘magic.’ And while the epistemic bar for passing the Turing Test might keep rising, when it comes to the things that we truly care about and the things which make us most human—creativity, compassion, novel thought and action, responsiveness to moral, epistemic, and aesthetic reasons, etc.—I believe we needn’t worry that these unique categories of the human condition will ever be genuinely replicated by mere mathematical algorithms, no matter how sophisticated.
Now some might disagree here and argue that some special mathematical arrangement, functionally realized on some special substrate, could in fact beget authentic consciousness and authentic agency. Fair enough. But note that in such an instance, such an emergent conscious entity, in virtue of possessing actual consciousness and actual decision-making, would then be the proper locus of moral responsibility when it came to its actions on the battlefield or otherwise. This would also imply, however, that such an entity, in virtue of being a genuine agent, would also have rights or at least interests warranting our moral concern. Even so, there would still be no gap.
When it comes to the ontology of what we call ‘Artificial Intelligence,’ weaponized or otherwise, I believe that what we are actually observing is fundamentally the collective intentionality of human programmers instantiated in the physical world. In other words, morally and metaphysically speaking, we are fundamentally looking at a very complex collective action, no different in kind from any other complex collective action and thereby no different in terms of how we think moral responsibility would function. If this is indeed the case, and this is the whole crux of the matter, then when an anomalous and unanticipated harm resulting from an AWS occurs on the battlefield, the last thing we want is for the designers, programmers, and implementers of these technologies to believe that they are wholly metaphysically, causally, and morally separated from that harm, and hence fully ‘off the hook’ morally speaking, on account of a perceived mysterious responsibility gap that is in fact not there. Rather, the attitudes, dispositions, and practices of due diligence that we would want to foster in such designers, programmers, and implementers are the very same ones we would hope to foster within any other corporate, institutional, or professional organization (law, medicine, etc.) charged with stewardship over an exceptionally morally weighty domain of human activity. The responsibility gap thesis and related responsibility-gap-implying language (e.g. ‘out-of-the-loop’ systems, ‘fully autonomous’ systems, ‘quasi-decisions,’ ‘quasi-agents,’ etc.) do not help to foster such dispositions, attitudes, and practices within these communities but rather function to discourage them.
If we do not believe that AWS are, in principle, capable of being authentic decision-makers, and hence authentic responsibility-bearers and rights-bearers, then moral responsibility for any harms resulting from such technologies must fall back fully on the members of the collective organizations and institutions who create, oversee, and employ them, with no moral remainder left over. In terms of assigning moral responsibility for AWS, stipulated concepts in tort law such as ‘proximate cause’ and ‘strict liability’ (in the case of pharmaceutical distribution chains, for instance) could begin to provide a framework for thinking about responsibility and accountability in such complex collective action cases, however imperfectly. Just because assigning responsibility in such cases is both extremely complicated and imprecise, and just because we do not possess the viewpoint or language of God, does not mean that such assignment of moral responsibility is wholly impossible or that we are thereby absolved of any moral duties in this regard. That said, I do also believe that there is metaphysical space for genuine accidents, where bad, unforeseeable things sometimes happen and no one is responsible. But I do not think such a situation is unique to AWS. In the end, the linguistic and conceptual map that we lay, however imperfectly, over the rocky and complicated metaphysical and moral terrain of AWS should reflect the subtle, fine-grained, and important contours beneath, and should thereby be free of any gaps, explicit or otherwise.
[i] Argument summarized in Purves, Duncan, Ryan Jenkins, and Bradley Jay Strawser. “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons.” Ethical Theory and Moral Practice, vol. 18, 2015, pp. 851–872. Original argument by Sparrow in Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy, vol. 24, no. 1, 2007, pp. 62–67.
[ii] Schmitt, Michael N., and Jeffrey S. Thurnher. “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict.” Harvard National Security Journal, vol. 4, no. 2, 2013, pp. 231–281.
[iii] Scharre, Paul. “NOTEWORTHY: DoD Autonomous Weapons Policy.” Center for a New American Security, 2025, https://www.cnas.org/press/press-note/noteworthy-dod-autonomous-weapons-policy.
[iv] Robillard, Michael. “No Such Thing as Killer Robots.” Journal of Applied Philosophy, vol. 35, no. 4, 2018, pp. 705–716.
[v] Basl, John, and Joseph Bowen. “AI as a Moral Right Holder.” The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, Oxford University Press, 2020, pp. 1–20. (Basl and Bowen both reject the idea that current A.I. systems are rights-bearers but investigate strands of argumentation on which future A.I. systems might hold the potential to be genuine interest-bearers or rights-bearers.)