The Leadership Work We Skipped — and Why AI Is Making It Visible

I want to be clear about where I stand, because much of the current conversation about AI avoids clarity in favor of reassurance.

AI is neither a savior nor a threat on its own. It is a human-designed tool that introduces meaningful complexity into how work gets done, how decisions are made, and how responsibility moves through organizations. The benefits are real. So are the risks. And those risks are not abstract or hypothetical. They are already showing up in everyday leadership behavior, particularly in how leaders are asking their teams to adopt and use these tools.

What concerns me most is not the pace of technological change. It is the lack of leadership context surrounding it.

Leadership roles have been evolving for years while leadership development steadily thinned. AI did not create this mismatch. It has simply made it visible.

How AI Adoption Broke the Pattern We Used to Rely On

What strikes me is how different AI adoption looks from other major technology shifts organizations have navigated in the past.

When new systems are rolled out—ERP platforms, CRM tools, compliance software, data systems—leaders typically spend time preparing people for what’s coming. There is usually a narrative: why this tool matters, what problems it is meant to solve, how it will change the work, what support will be available, and where judgment is still required. Training plans follow. Expectations are named. Boundaries are made explicit.

With AI, that framing is often missing.

Instead of a shared understanding of purpose and limits, the message many employees receive is simply to “try using AI.” Experimentation is encouraged, but without much articulation of why AI is being introduced, what its limitations are, how it should—and should not—be used, or what responsible use actually looks like in practice. There is an implicit assumption that AI is so intuitive that anyone can just pick it up and use it well, without much guidance or shared language.

Across industries—including highly regulated ones—leaders are encouraging experimentation and adoption without adequately articulating the realities their employees are now expected to navigate. Teams are being handed powerful tools without clear guidance on judgment, boundaries, confidentiality, or accountability. Many employees do not have the time, authority, or sophistication to independently assess the implications of how they are using these systems, yet they are expected to move quickly and deliver results.

AI Is Human-Designed—and That Matters More Than We Admit

AI systems are human-designed. That fact is often acknowledged and then quickly set aside, when it should instead sit at the center of how leaders think about adoption and use.

These systems are built by people and trained on data that reflects human choices, cultural norms, and historical inequities. Gender bias, cultural bias, and structural blind spots do not disappear in these environments. In many cases, they are reinforced at scale, particularly when outputs are delivered with confidence and fluency. We are already seeing credible scrutiny of algorithmic behavior on professional platforms, including concerns that content is surfaced or suppressed differently once demographic signals are present. These patterns shape whose voices are amplified and whose perspectives are quietly deprioritized.

It is also important to say this plainly: leaders are not the only ones responsible for how these tools behave.

AI designers, vendors, and platforms make consequential choices about what gets optimized, what gets measured, and what remains invisible. Many of the risks organizations are now grappling with were designed upstream, often far from the leaders being asked to manage the downstream impact. Those design decisions matter.

At the same time, once these tools enter an organization, leadership becomes the point where those design choices meet real people, real work, and real consequences. Responsibility concentrates there—not because leaders caused the problem, but because they are the ones positioned to shape how it shows up day to day.

When leaders treat AI as neutral or objective, they unintentionally transfer authority to systems that were never designed to hold values or ethical responsibility. Efficiency can create a false sense of completeness. Polished language can obscure partial or skewed perspectives. Over time, deference replaces discernment, not because leaders are careless, but because the conditions reward speed over reflection.

AI does not remove the need for judgment. It increases the consequences of how judgment is exercised.

The Leadership Development Gap We Normalized

At the same time, we need to be honest about how unprepared many leaders were for the kind of leadership this moment requires.

For years, organizations deprioritized leadership development at the precise moments when leadership roles were changing most significantly. People were promoted based on execution, technical competence, and delivery, then expected to navigate ambiguity, system-level impact, and values-based decision-making with minimal structured support. Leadership development became episodic or optional. One-off programs replaced sustained practice. A quiet assumption took hold that capable people would simply figure leadership out along the way.

Many did, but often by relying on habits that were rewarded earlier in their careers: moving quickly, maintaining control, gathering more data, reducing uncertainty wherever possible. Those habits are understandable. They are also insufficient for leadership roles where decisions ripple through systems and shape how others think and act.

AI has entered directly into that gap.

When Judgment Becomes Observable

One of the most consequential shifts AI introduces is not automation, but visibility.

When leaders use systems that generate recommendations, summaries, analyses, or drafts, their judgment leaves traces. The questions they ask, the outputs they accept, and the explanations they provide all signal how decisions are being made. Teams are no longer experiencing only the outcomes of leadership decisions; they are learning from the reasoning behind them.

For leaders who were never trained to make their thinking explicit, this can feel exposing. Historically, leadership allowed for private deliberation followed by public action. Today, reasoning itself is increasingly observable, and that visibility shapes trust.

Leaders Are Training the System—Whether They Intend To or Not

There is another dimension of this shift that deserves more attention.

Leaders are not passive users of AI. They are shaping how these systems behave through use. Every interaction reinforces something—what gets accepted, what gets challenged, what gets ignored. Over time, unexamined use does not simply produce mediocre outputs; it institutionalizes blind spots.

If leaders never use AI to surface counterarguments, it will not do so. If they never invite it to challenge assumptions, it will reinforce them. Using AI well requires intention, time, and a willingness to examine one’s own thinking. Quality in really does produce quality out.

Risk, Regulation, and the Cost of Silence

This is where risk compounds, particularly in regulated environments.

When leaders fail to articulate boundaries around confidentiality, discoverability, and decision ownership, employees are left to improvise. They draft documents without understanding what may be discoverable. They input sensitive information without clarity on where it goes. They produce work that is fast and polished, but thinly reasoned.

This is how organizational slop takes hold. This is how compliance risk grows quietly. This is how litigation exposure increases—not because employees are reckless, but because leaders did not provide the context required to act responsibly.

Much of what is currently described as AI backlash is better understood as a response to this absence of leadership clarity.

People lose trust when accountability becomes unclear. When decisions cannot be explained. When leaders retreat behind tools instead of standing behind judgment. Trust is not restored by better tools alone. It is restored when leaders remain present—when they explain how systems informed their thinking, where discretion was applied, and why certain outputs were rejected.

AI does not require leaders to be perfect. It requires them to be visible.

An Invitation, Not an Indictment

I do not raise these points as an outside critic.

I have spent my career working alongside leaders through moments like this—large-scale technology shifts, restructures, regulatory change, and the human disruption that follows. Practitioners in organizational change, leadership development, and organizational design have seen this pattern before. The technology moves quickly. The human system struggles to keep up.

This is not a failure of intent. It is a signal that the work ahead is human work: sensemaking, capability-building, and leadership practice. And it is work leaders do not have to navigate alone.

Where I Land

So here is where I land.

AI introduces real complexity into leadership work. The arguments for and against it are nuanced and often in tension with one another. Pretending otherwise does leaders and organizations a disservice. The success or failure of AI adoption will depend far less on technical sophistication than on whether leaders are willing to articulate judgment, surface assumptions, and make values explicit—for themselves and for their teams.

Staying human is not about resisting efficiency. It is about recognizing that responsibility cannot be automated, judgment cannot be outsourced, and context cannot be skipped.

As AI becomes more capable, leadership does not become simpler. It becomes more exposed. And the leaders who navigate this moment well will be the ones who are willing to slow down where it matters, challenge easy answers—including their own—and remain accountable for the thinking behind their decisions.