Embracing the Future: Overcoming Path Dependence and Risk Aversion in AI Adoption

Over the past several years, the United States Naval Academy has served as a micro-laboratory for the integration of Artificial Intelligence (AI). Initially, the conversation was confined to Generative AI: machines mimicking human creativity by identifying patterns in existing data. Early faculty workshops, sponsored by the Stockdale Center, revealed a palpable anxiety, a fear that AI was “merely a tool for cheating.”

However, as Generative AI has evolved into Agentic AI, autonomous systems capable of executing complex tasks and making decisions on behalf of users, the horizon of possibility has expanded. Recent workshops show a marked shift: faculty are now embracing AI as a “thinking partner” for both instructors and Midshipmen. This evolution represents more than a technical upgrade; it is a fundamental mindset shift. At the Academy, the future belongs to those willing to adopt cutting-edge tools to solve the world’s most intractable problems.

In the rapidly evolving technological landscape, AI adoption presents unprecedented opportunities shadowed by significant institutional challenges. Central to these challenges is Path Dependence: the phenomenon where historical decisions and legacy successes constrain current behaviors and restrict future choices. For leaders, recognizing this “invisible anchor” is critical. By understanding how historical precedents inform, and often hinder, contemporary decision-making, leaders can pivot from a defensive posture to one of organizational agility.

Transformative technology is rarely greeted with immediate applause. The transition from horse-drawn carriages to automobiles is a salient example. Initially dismissed as dangerous, loud, and impractical compared to the reliability of the horse, the automobile faced immense skepticism.

However, as infrastructure developed and early adopters like Henry Ford catalyzed industry-wide shifts, a new path-dependent reality emerged. Once the world was paved for cars, the “horse-path” became obsolete. This illustrates the double-edged sword of path dependence: initial skepticism creates friction, but once a technology becomes embedded in societal structures, the pressure to remain on that path becomes nearly absolute, often at the expense of superior alternatives.

One of the most persistent barriers to AI adoption is risk aversion, a direct symptom of path dependence. Leaders frequently evaluate AI through the lens of past technological failures rather than future potential. The Gartner Hype Cycle, which traces a technology from a “Peak of Inflated Expectations” through a “Trough of Disillusionment” toward an eventual plateau of productivity, captures this psychological rollercoaster well.

We see this in the trauma of the late-1990s dot-com bubble. The subsequent crash fostered a generation of leaders wary of “revolutionary” digital promises. Today, that historical scar tissue often manifests as a reluctance to invest in Agentic AI. When organizations encounter the inevitable “Trough of Disillusionment”—the friction of implementation—they often retreat to the safety of legacy systems rather than pushing through to the “Slope of Enlightenment.”

To break the cycle of risk aversion, leaders must cultivate an environment that acknowledges risk without being paralyzed by it. The following strategies are essential:

  • Cultivate a Culture of Experimentation: Leaders must frame AI adoption as a process of continuous iteration rather than a “one-and-done” procurement. By encouraging “failing small and fast,” organizations can demystify the technology. Companies like Google and Amazon have successfully embedded this culture of testing, allowing them to pivot before path dependence sets in.
  • Identify High-Impact Use Cases: Broad mandates for “AI integration” often fail. Instead, leaders should target specific “wins” where AI provides tangible utility. In healthcare, AI-driven diagnostic imaging has demonstrated clear accuracy gains, building the requisite institutional trust to expand into more complex areas.
  • Master Technology Diffusion: Adoption is rarely linear; it is driven by social dynamics and economic incentives. Leaders must anticipate resistance by clearly demonstrating the “value-add” to the end-user. The rapid global diffusion of mobile technology occurred because the benefit to the individual was undeniable; AI must be framed with the same clarity.
  • Invest in Cognitive Change Management: The transition to AI is as much a psychological challenge as a technical one. Initiatives must address employee concerns regarding job displacement and provide the “upskilling” necessary to turn operators into AI supervisors. This empowers the workforce to view AI as an enhancer of their own “Primal Intelligence.”1

Path dependence provides a vital lens for understanding why organizations struggle to innovate. By acknowledging the historical roots of our current risk aversion, leaders can intentionally break free from limiting mindsets. Through experimentation, targeted use cases, and robust change management, we can pave a new path for AI integration. Ultimately, embracing these modern tools will do more than enhance efficiency: it will position the Academy, and the broader defense enterprise, to thrive in an era where the only constant is accelerated change.


1 Fletcher, A., & Logan, J. K. (2021). Creative Thinking in the Military: A Design for Building Human Intelligence. Journal of Military Learning, 5(2), 3–21.
