I. Introduction
In the science fiction novel Starship Troopers, author Robert Heinlein, a graduate of the Naval Academy, envisioned a far-off future of technologically enhanced super soldiers: faster, stronger, more aggressive, more resilient, and seamlessly integrated with their technologies of war-fighting. At the time, this vision of Heinlein’s belonged purely to the realm of science fiction.1
Today, however, that vision is no longer just speculative; it is fast becoming science fact. From powered exoskeletons that augment strength and endurance, to brain–computer interfaces, to pharmaceuticals that sharpen focus and suppress fatigue, what we are witnessing is the early emergence of a new kind of human and a new kind of warfighter. Indeed, a brave new world is no longer on the horizon; it is very much here.
In this short article, I want to do three main things. First, I want to map the landscape of human and soldier enhancement, highlighting both current technologies and those on the horizon. Second, I want to explore some of the central ethical arguments regarding human enhancement in general. And third, I want to examine the unique ethical challenges that arise when enhancement is applied specifically to soldiers. I use the term “soldier” broadly here to refer not just to soldiers but also to sailors, marines, pilots, guardians, and so on: any and all warfighters across the various services.
It is also crucial to note that these developments are not occurring in a vacuum. In an era of renewed great power competition, the strategic incentives to enhance human performance, especially in the military domain, are only intensifying. So, whether we are comfortable with it or not, future leaders, policymakers, and citizens will all have to grapple with this emerging set of questions and moral issues.
II. Human Enhancement and Soldier Enhancement
What exactly do we mean by human enhancement? And what do we mean by soldier enhancement?
Generally speaking, philosophers define human enhancement as the set of technological interventions designed not merely to treat illness or to restore health back to normal functioning, but to improve human capacities beyond normal physiological, bio-medical, or cognitive baselines. In other words, enhancement is not merely about repair; it is about surpassing the otherwise ordinary limits of human performance and human capacity.
We can think of human enhancement as broadly falling into four major domains: physical, cognitive-emotional, genetic, and human–machine pairing.
Physically speaking, prosthetics, wearables, pharmaceuticals, exogenous steroids, and early-stage exoskeletons are already in use. Here, these specific technological means serve the specific ends of increased strength, endurance, and recovery.
Cognitively and emotionally speaking, we now have nootropics, neurostimulation, and AI tools that increasingly function as a kind of external brain. These means aim at the ends of sharper focus, better decision-making, and greater emotional regulation.
Genetically speaking, we are now treating disease with gene therapy and selecting embryos through IVF. These means are directed toward the ends of disease prevention, risk reduction, and the optimization of baseline human traits.
Lastly, in the space of human–machine pairing, smartphones, biosensors, and AI assistants are already extending our capacities in novel ways. These means serve the ends of expanded memory, radically faster information processing, and seamless interaction with the digital world.
Now all of this, of course, is only accelerating. MIT, for instance, is developing soft robotic exosuits that augment human movement.2 CRISPR Therapeutics has already developed and rolled out the first approved CRISPR-based treatment, one that can effectively cure sickle cell disease.3 And Elon Musk’s Neuralink4 as well as China’s Neuracle Medical Technology5 have both begun human trials of brain–computer interfaces (BCIs), allowing paralyzed patients to control computers with just their thoughts within days.
On the horizon, we might see such innovations as lab-grown organs, nano-medicine repairing the body from within, and agentic AI co-pilots assisting human decision-making. Gene-editing tools may begin radically enhancing both bodily recovery and human intelligence. And in the long term, this could lead to radically extended lifespans, engineered bodies, and the eventual convergence of humanity, robotics, and Artificial General Intelligence: the so-called ‘Singularity.’ On this last point, I am deeply skeptical.
So how does all of this apply to soldier enhancement?
In one sense, soldier enhancement is nothing new. From the earliest days of warfare, humans have sought to improve their combat effectiveness through training, weapons, diet, and even stimulants. Warriors such as the Vikings and the Aztecs are thought to have consumed medicinal plants to heighten aggression prior to combat. Early modern states recognized the importance of nutritional provisioning for troops during extended campaigns. And twentieth-century militaries used amphetamines and other stimulants to help pilots and soldiers maintain performance during long and recurring missions.
What is arguably new is the scale, precision, invasiveness, and totalizing nature of these emerging technologies. Today, it seems we are not simply equipping soldiers; we are beginning to fundamentally transform them at an essential level.
DARPA, for example, has funded brain–machine interface programs aimed at improving soldier decision-making under stress and fatigue.6 China has likewise identified brain–computer interfaces as a strategic priority, explicitly linking them to cognitive enhancement and human–machine integration on the battlefield.7
Meanwhile, the United Kingdom’s Ministry of Defence report, Human Augmentation: The Dawn of a New Paradigm, argues that human augmentation will become a central feature of future conflict, reshaping not just what individual soldiers can do, but how whole militaries think about capability itself.8 In parallel, the U.S. D.O.W. report Bio-Futures 2050: Defense Impacts & Opportunities projects that biotechnology, alongside AI, nanotechnology, and advanced manufacturing, will fundamentally transform the nature of conflict between now and 2050.9
On the horizon, we may see fully powered exoskeletons, AI-assisted battlefield awareness, and brain–machine interfaces that allow soldiers to control drones or robotic systems directly. Further out, both reports point toward more radical possibilities such as genetically engineered resilience, synthetic organs, and tightly integrated human–machine systems that blur the line between operator and platform; human and machine.
The result of all of this is arguably a shift not just in degree but in kind, making the question no longer just how we equip human soldiers, but how far we are willing to go in fundamentally redesigning them, perhaps entirely.
III. For & Against Human Enhancement
This brings us to the ethical debate regarding human enhancement in general.
On one side of the debate, critics such as Michael Sandel argue that human enhancement can be deeply morally problematic. In his book, The Case Against Perfection, Sandel suggests that human enhancement reflects a kind of hubris, a desire to master and control human nature itself. In doing so, he argues, we risk losing an appreciation for the “giftedness” of human life: the idea that our abilities and limitations are not entirely of our own making, and that this fact underpins such important values as humility, solidarity, and moral responsibility.10
Sandel likewise worries about fairness and social inequality. If enhancement technologies are unequally distributed, then they could severely widen pre-existing socio-economic divides. Lastly, Sandel also raises concerns about the erosion of merit: if success is engineered rather than personally earned, does it still carry the same moral weight and significance?
Echoing Sandel, other scholars, such as Notre Dame’s O. Carter Snead, strongly emphasize concerns about human dignity, bodily integrity, and the moral dangers of treating the human body as an object to be optimized rather than a person to be respected.11
On the other side of the debate, bioethicists such as Julian Savulescu argue not only that human enhancement is permissible, but that it may in fact be morally required. If we have the ability to improve human well-being, reduce suffering, and expand opportunity, then failing to do so could itself be unethical.12
Savulescu argues that just as we have obligations to educate children or provide healthcare, we may also have obligations to enhance ourselves and future generations. On this view, enhancement is not a threat to morality; rather, it is an improvement upon it.
The debate, then, is not simply about technology. It is about competing visions of what constitutes human flourishing as well as what it means to be fully human.
IV. Soldier Enhancement Ethics
When we turn specifically to the subject of soldier enhancement, these ethical questions take on a much sharper edge.
Militaries have always sought to improve performance. But the technologies now emerging raise a set of distinct and pressing ethical concerns.
The first concern is that of consent and coercion. In a military context, can consent ever truly be voluntary? When orders come through a chain of command, the line between choice and obligation becomes blurred. If an enhancement promises improved survival or mission success, can a soldier realistically or permissibly refuse it?
The second moral concern is one of risk and experimentation. Military institutions have historically accepted higher levels of risk in pursuit of strategic advantage. But there is a serious moral distinction between soldiers accepting risk in combat and accepting it in the experimentation lab. The fact that soldiers accept danger on the battlefield up to the point of death does not mean that they have consented to be used as a mere means to whatever end the military dreams up. Indeed, there remains a strong duty to protect soldiers from such exploitation.
A third moral concern here is the dignity and bodily autonomy of soldiers. At stake here is a fundamental question: is the soldier a citizen-warrior, or a weapons platform? As enhancements become more invasive, especially in genetic or neurological domains, the risk of instrumentalization significantly increases. In so doing, we may begin to treat soldiers less as persons and more as a mere means to be optimized.
A fourth moral concern is the impact on virtue and the warrior ethos. Traditionally, military excellence has been understood in terms of virtues: courage, discipline, prudence, judgment. But if these traits can be pharmacologically or technologically induced, we must ask: are they still virtues? If courage can be chemically engineered, is it still courage, or is it simply compliance?
A fifth moral concern is the relationship between the military and wider civilian society. Enhanced soldiers may become physically, psychologically, and even cognitively distinct from the civilian population that they serve, and radically so. This raises the risk of a widening civil–military gap, and perhaps even the emergence of a wholly separate warrior class, what I have elsewhere referred to as the ‘21st Century Coriolanus’: one increasingly alienated from, and perhaps looking down upon, the broader non-enhanced democratic community it is sworn to protect.
Finally, there is the question of soldier identity and reintegration. What happens after service? If enhancements persist, who is the veteran who returns home? Will they be useful, relatable, or even safe within a non-enhanced domestic space? If such enhancements wear off or are removed, what or who exactly is lost? These are not just medical or logistical questions; they are deeply personal and moral ones, tied to identity, agency, and social belonging.
Despite these worries, there are arguably many upsides to such soldier enhancement technologies worth noting. As Joe Thomas has recently argued, human decision-makers are still fundamentally biological entities subject to mistakes and errors under the fog, friction, and duress of combat.14 Emotional volatility, physical exhaustion, cognitive bias, and the propensity to lapse into panic, recklessness, or even barbarity give strong reason in favor of adopting certain soldier enhancement technologies. If such enhancements could reliably aid warfighters in making cool, rational, and adequately informed moral and prudential decisions in combat, such that there would be less chance of collateral damage, war crimes, or undue risk to one’s own troops, then their adoption might not only be permissible but required.
Lastly, despite this list of prima facie or contingent moral concerns surrounding the question of soldier enhancement, I believe they do not rise to the level of warranting an absolute prohibition on adoption. Indeed, were it the case that some just cause or sufficiently good end could only be achieved by the use of enhanced soldiers, or, conversely, that some sufficiently unjust or bad end could only be averted by their use, then, all things considered, the adoption of such enhancement technologies might be not just permissible but obligatory.
V. Moral Enhancement
Before concluding, I want to address a related concept within this space: that of so-called “moral enhancement,” the claim that character, moral reasoning, and moral understanding can be improved by means of physical, medical, or other technological interventions.
As both Ingmar Persson and Julian Savulescu have argued, if we can enhance capacities in other domains, we may have reason, even an obligation, to enhance our moral capacities as well.15
This idea, I argue, is intelligible, but only in a weak sense. A useful analogy here is the distinction between weak and strong AI. Whereas weak AI claims only to simulate aspects of the mind, strong AI claims to instantiate an actual mind. Similarly, we can distinguish between weak and strong moral enhancement.
The weak moral enhancement thesis would hold that interventions can support moral functioning indirectly by reducing things such as impulsivity, regulating emotion, or improving attention. And while these effects may create better conditions for moral deliberation, they do not generate moral understanding itself.
The strong moral enhancement thesis, on the other hand, would go much further. It would claim that moral reasoning itself can be entirely engineered through biological, pharmacological, or computational means. And it is here where I believe a healthy skepticism is well warranted.
If, like me, you are uneasy about the idea of autonomous weapons systems being trusted with life-and-death decisions, on account of morality being the kind of thing that is fundamentally un-codifiable, then the same basic concern applies here. For the domain of the moral is not reducible to the neurochemical any more than it is reducible to the digital. To assume otherwise is to commit a severe category error and to unjustifiably derive an ‘ought’ from an ‘is.’
There is, however, a limited way in which moral enhancement does make sense. Consider someone with a compulsive violent disposition. If a medical intervention reduces that compulsion, then it may improve the conditions under which moral agency can operate. Thus, in this weak sense, we can say that the subject has been morally enhanced by the medical intervention.
But this remains a far cry from producing actual moral understanding. Such interventions may reduce weakness of will, increase empathy, or improve focus, but they do not replace the agent’s own grasp of moral reasons or the moral reasons themselves, nor could they if morality is to mean anything at all.
This becomes clearer if we imagine identifying certain neurochemical states associated with different normative theories. In such an instance, who would decide which configuration/moral theory is the ‘correct’ one? And by what criteria? Deontological? Consequentialist? Virtue Theoretic? Indeed, any answer would already presuppose a prior standard of the good, one that itself can neither be derived from mere neurochemistry nor computation nor physics.
To call something a moral ‘enhancement’ then, is already to assume an account of what human beings ought to be and what specific ends ought to be realized. Without that, the notion of ‘enhancement’ collapses into mere preference optimization or means-ends satisfaction rather than genuine improvement towards the Good.
Indeed, moral reasoning begins with the apprehension of moral reasons, the recognition that some actions are worth doing and others ought not be done. And that requires an autonomous rational agent capable of grasping the Good as Good.
And while technology may suppress compulsion, it cannot generate true understanding. It may influence behavior, but it cannot produce virtue. It can incline action, but it cannot make that action intelligible as morally right to the one performing it.
In Aristotelian terms, such interventions may shape the passions, but they cannot replace nor generate the telos.
So, the central question here is not whether we can use such technological means to influence outcomes. It is whether such technologies preserve or improve the conditions that make moral agency possible in the first place: that is, a rational agent capable of recognizing and responding to moral reasons.
If enhancement technologies bypass that dimension, if they engineer behavior without forming character, they will have succeeded in producing increased compliance, but they will have failed in producing actual virtue.
And that distinction matters. For a world of compliant actors is not the same as a world of morally responsible persons.
Returning then to the topic of soldier moral enhancement, I believe we should proceed with limited expectations of what such enhancement technologies can actually deliver. We should not assume that just because we can use technological means to influence behavior, that we can therefore engineer virtue ex nihilo.
What’s more, there is also a deeper risk. If these technologies end up becoming substitutes rather than supports, if they become load-bearing beams rather than just supportive scaffolding, then, paradoxically, they may actually serve to weaken rather than strengthen the very moral attributes they aim to enhance. That is, we may improve outward behavior while eroding the inner life that gives actual moral motivation and moral understanding.
Put another way, we may produce soldiers who act correctly, but not for the right reasons.
And that is the fundamental difference: a difference between obedience and virtue, between compliance and character, and ultimately, between a well-functioning weapons system and a free and morally responsible human warfighter.
1 Robert A. Heinlein. Starship Troopers. G.P. Putnam’s Sons, 1959.
2 https://news.mit.edu/2022/soft-assistive-robotic-wearables-get-boost-rapid-design-tool-0503
3 https://ir.crisprtx.com/news-releases/news-release-details/crispr-therapeutics-announces-us-food-and-drug-administration/
4 https://pmc.ncbi.nlm.nih.gov/articles/PMC11076062/
5 https://www.scientificamerican.com/article/china-just-approved-its-first-brain-implant-for-commercial-use-a-world-first/
6 Defense Advanced Research Projects Agency (DARPA). “Biological Technologies Office Programs.” U.S. Department of Defense.
7 https://www.washingtontimes.com/news/2025/jun/5/brain-control-warfare-chinas-bleeding-edge-strategy-winning-without/
8 https://assets.publishing.service.gov.uk/media/609d23c6e90e07357baa8388/Human_Augmentation_SIP_access2.pdf
9 https://oodaloop.com/analysis/archive/bio-futures-2050-defense-impacts-and-opportunities-for-advantage/
10 Michael J. Sandel. The Case Against Perfection: Ethics in the Age of Genetic Engineering. Harvard University Press, 2007.
11 O. Carter Snead. What It Means to Be Human: The Case for the Body in Public Bioethics. Harvard University Press, 2020.
12 Julian Savulescu. “Procreative Beneficence: Why We Should Select the Best Children.” Bioethics, vol. 15, no. 5–6, 2001, pp. 413–426.
13 https://blog.uehiro.ox.ac.uk/tag/enhancement-of-soldiers/
14 [article forthcoming]
15 Ingmar Persson and Julian Savulescu. Unfit for the Future: The Need for Moral Enhancement. Oxford University Press, 2012.
*ChatGPT was used in the generation of this article under the direction of the author.