AI Adoption: A Familiar Story With Higher Stakes
A Familiar Pattern—Under New Conditions
AI adoption brings familiar challenges to the surface.
When fear rises, incentives drift, accountability blurs, and sensemaking is deferred, progress slows—not because of the technology, but because of how humans and systems respond to change.
I’ve watched this pattern repeat over and over, across different industries and moments of change.
Every major wave of technology—from large-scale systems implementations to automation to the move to the cloud—has carried with it a quiet, persistent fear: Will this replace me? Over time, I’ve helped organizations name that fear and respond to it. Often, the reassurance was grounded in truth: the technology would automate repetitive work and free people up for higher-value contributions.
In many cases, that framing held.
This time, it doesn’t hold in the same way.
With AI, some roles are genuinely being eliminated. Not eventually—now. People see the headlines. They feel the contraction. Treating AI as just another productivity story glosses over a lived reality and widens the trust gap between organizations and their workforce. We’ve encountered this pattern before—but under conditions that are fundamentally different.
When AI Reshapes the Work Itself
Previous technologies changed where work lived and how it moved. AI reaches directly into judgment, decision-making, and knowledge work itself.
It collapses the distance between junior and senior cognitive labor. It reshapes how decisions are framed, how quality is judged, and how accountability is understood. And it does so at a pace that leaves little time for quiet adaptation.
What’s becoming increasingly clear is that AI isn’t just changing experienced roles; it’s changing how people enter the workforce in the first place.
Historically, entry-level roles were designed around learning through repetition: drafting, analyzing, summarizing, preparing first passes that more senior colleagues would review. Those activities weren’t just “lower-value work”; they were how people developed judgment over time. AI now performs many of those tasks instantly, raising uncomfortable questions about whether, and in what form, those roles still exist.
We’re already seeing organizations respond by rethinking hiring models. In early 2026, PwC announced a major restructuring that reduced the number of offices hiring entry-level advisory associates from 72 to just 13. Moves like this signal a deeper shift: fewer on-ramps, more concentrated hiring, and less tolerance for learning curves that were once considered normal.
That reach is what makes this moment destabilizing. AI doesn’t just introduce a new tool; it exposes long-standing gaps in how organizations think about talent development, apprenticeship, and capability building—gaps that were easier to overlook before work itself began changing this quickly.
When Thinking Is Delegated Instead of Designed
In moments of uncertainty, organizations often fall back on familiar structures.
With earlier technologies, that looked like blanket bans on cloud usage, rigid approval gates, or overly restrictive security postures designed to control risk rather than enable learning. With AI, it often sounds like, “No one can use it,” or, “Legal and IT own this.” Risk, ethics, and compliance are isolated within functions never meant to carry the full weight of work redesign.
This isn’t a failure of Legal, IT, or HR. Each is operating within its mandate.
The breakdown occurs when the enterprise avoids the harder work of joint sensemaking—when leadership, HR, Legal, IT, and operations don’t come together to design how work is intended to change.
When interpretation is deferred, policy becomes a stand-in for clarity. In that vacuum, people fill in the blanks themselves—sometimes with fear, and sometimes with unintended consequences. Well-intentioned controls can drive shadow usage, brittle processes, or new cybersecurity risks that undermine the very outcomes they were meant to protect.
Fear Isn’t Irrational—It’s a Signal
Another dynamic shows up consistently: organizations treating AI adoption as an efficiency experiment rather than a deliberate redesign of work.
In practice, this can sound like, “We think AI means we’ll need fewer people; let’s test that assumption.” Layoffs follow, and the work of those roles doesn’t disappear. It shifts to the people who remain. Workloads swell. Expectations stay the same.
What’s missing is guidance.
People aren’t told how the work is meant to change, where AI should meaningfully reduce effort, or what tradeoffs leadership is willing to make. Without an articulated end state, there’s no shared goal—only pressure.
Buried under overwhelming workloads, people don’t have the space to rethink processes or integrate AI intentionally. They default to coping strategies, not redesign. Under that kind of pressure, the benefits of AI tend to remain incremental rather than transformational, not because the technology can’t deliver more, but because the conditions required to realize those gains haven’t been created.
When work isn’t redesigned, people don’t move toward thriving. They manage exhaustion instead.
What Actually Helps Adoption Take Root
Organizations that move beyond incremental gains treat AI as a change effort, not a tool rollout.
That starts with work redesign: explicitly mapping how workflows, decisions, and handoffs are intended to change before asking people to adapt. Redesign often begins locally, but it ultimately needs to connect across the enterprise.
Without that clarity, change impact assessments are speculative at best. When redesign does occur, organizations can identify where roles shift, where judgment changes, and where new capabilities are required.
From there, the fundamentals matter.
People need a clear articulation of what’s changing for them, why it matters, and what support exists—the real “what’s in it for me,” grounded in how day-to-day work will actually evolve. Training alone isn’t sufficient if it isn’t anchored to redesigned processes and realistic expectations.
Sustainable adoption also requires social infrastructure. Communities of practice, peer learning forums, and shared examples give people space to compare notes, test assumptions, and build confidence together. This is where HR, IT, Legal, and business leaders need to stay tightly connected—monitoring friction points, adjusting guidance, and reinforcing norms as learning unfolds.
Most importantly, responsibility isn’t delegated downward. It’s held collectively. Leaders and enabling functions stay engaged as sensemaking partners, not distant sponsors.
This isn’t dramatic bravery. It’s disciplined change leadership.
A Familiar Story, Higher Stakes
We’ve seen versions of this story before—but never with stakes quite this human.
AI adoption follows a familiar pattern, yet it reaches deeper than any previous wave of technology. It touches identity, opportunity, and how work itself is learned and valued. The question isn’t whether AI will be adopted. It will be.
The real question is whether organizations are willing to do the work this moment is asking of them—together, deliberately, and with care.
Because this time, the stakes extend not just to today’s workforce, but to the one being shaped right now.