Feb 8, 2026

Stop Restructuring Around AI Until You Read This

Most AI failures aren't technology problems—they're people decision problems. Learn why cutting roles before understanding value movement makes organizations brittle.

TL;DR: 95% of enterprise AI pilots fail to deliver ROI—not because the technology doesn't work, but because leaders are redesigning org charts before they understand where value has moved. When AI enters workflows, judgment and coordination don't disappear—they relocate. Cut the wrong roles too early, and you don't become leaner. You become brittle.

Most AI conversations in leadership are starting to sound the same. Tools. Skills. Headcount.

Here's the problem: that's not where AI is breaking companies.

The technology mostly works. What's failing is how organizations are being redesigned around it—who creates value, who makes decisions, and how fast you can change your structure when things move. Right now, a huge number of companies are stuck in what I call pilot purgatory. Dozens of AI experiments. Plenty of demos. Very little business impact.

The numbers back this up. BCG research shows that 70% of digital transformations fail to reach their stated goals. Gartner predicted that through 2025, 80% of AI projects would remain "alchemy"—experimental work that never scales to production. And we're seeing it play out in real time.

The pattern behind those failures is surprisingly consistent. It's not a tooling problem. It's not a talent problem. It's a people decision problem.

What is the first decision error leaders make with AI?

Here's what I see organizations doing: they use AI to automate tasks, consolidate roles, and remove middle layers. Especially the middle.

On paper, it looks efficient. Fewer handoffs. Wider spans of control. Lower cost. But here's what's actually happening: leaders are redesigning org charts before they understand where the value has moved.

This instinct makes total sense. Boards are pushing for speed. Vendors are promising efficiency. Every deck says AI is going to automate a huge chunk of work. The World Economic Forum estimates that 44% of workers' core skills will be disrupted by 2027. When leaders feel that pressure to act fast, what's the lever they pull?

Headcount. Because nothing says progress like a reorg.

What happens when you cut the middle too fast?

I had a friend at a technology company who decided to flatten their revenue cycle operations after implementing an AI-powered claims processing tool. The logic seemed sound. The AI could handle 80% of claims automatically. Why keep three layers of operations managers reviewing work the machine was doing?

So they cut 40% of the middle management layer. Promoted a few high performers to "super-spans" of 25+ direct reports. Celebrated the efficiency gains.

Six months later, their denial rates had spiked 23%. Not because the AI was wrong; it was actually catching more errors than the humans had. The problem was that nobody was left to handle the exceptions. The edge cases. The claims that required a judgment call about whether to escalate, renegotiate, or write off.

That judgment work had lived in the middle. And when they cut it, they lost millions in recoverable revenue.

Where does value actually move when AI enters workflows?

Here's the part that gets missed: AI doesn't just automate tasks. It changes coordination. It connects systems that used to be siloed. It surfaces conflicting data. It collapses work that used to require multiple layers of translation.

This means the value doesn't disappear. It moves.

McKinsey research shows that as AI moves into workflows, humans shift from doing work to orchestrating systems—framing problems, deciding trade-offs, and coordinating across people, agents, and machines. Their analysis found that demand for social and emotional skills will grow by 26% in the U.S. by 2030, while demand for basic cognitive skills will decline.

Translation: the value moves into judgment, orchestration, and deciding when to trust the system and when not to.

That work often lives in the middle.

What does successful AI restructuring look like?

Compare that tech example to another client I worked with last year.

They were implementing AI-powered quality control—computer vision systems that could detect defects 10x faster than human inspectors. The obvious move was to cut the QC team.

Instead, they asked a different question: Where does judgment need to live when the AI is doing the detection?

The answer surprised them. The value wasn't in spotting defects anymore. It was in deciding what to do about them. Which defects were acceptable for which customers? When should they stop the line versus flag for review? How should they communicate upstream to suppliers about recurring issues?

They didn't cut the QC team. They redeployed them as quality orchestrators: the people who managed the AI system, made judgment calls on edge cases, and drove continuous improvement with suppliers.

Result: 34% reduction in quality-related costs, and faster resolution times on supplier issues. They got leaner and more resilient.

Why do 95% of enterprise AI pilots fail to deliver ROI?

When leaders remove roles before redefining judgment points, they don't make the organization faster. They make it brittle.

MIT research found that 95% of enterprise generative AI pilots failed to deliver on ROI. The primary reason? Organizations tried to force AI into existing processes and structures instead of rethinking workflows and decision points from end to end.

Harvard Business Review research reinforces this: the biggest predictor of transformation failure isn't technology adoption—it's the failure to redesign decision rights alongside the technology implementation.

When that doesn't happen, you end up with a system that technically works but fails in real situations. Because no one owns the gray area.

No one reconciles conflicts. No one arbitrates trade-offs. No one is clearly accountable when the data disagrees with reality.

How did Zillow's AI strategy go wrong?

This isn't hypothetical. We've seen it play out at massive scale.

A SHRM article details how Zillow Offers failed when the company leaned heavily into pricing algorithms, overpaid for homes, and ultimately exited the business—resulting in layoffs of about 25% of their workforce and a $500+ million write-down.

The core issue: over-reliance on automated signals without having a human in the loop to exercise judgment on market anomalies, local conditions, and edge cases the algorithm couldn't see.

Zillow's algorithm was technically sophisticated. But the organization wasn't structured to catch when the algorithm was confidently wrong. There was no clear accountability for overriding the system when reality disagreed with the model.

This is what I see all the time. Organizations get stuck running AI on top of old structures instead of redesigning the work from end to end.

What rule are leaders violating?

You cannot redesign roles until you understand where judgment, context, and accountability are actually going to move.

If you cut coordination before you redefine it, you don't become leaner. You become more vulnerable.

The organizations that get this right ask a fundamentally different question. Instead of "What can we automate?" they ask "Where does human judgment create the most value in this process—and how does that change when AI handles the routine work?"

What should leaders do on Monday?

Before your next restructuring conversation, ask these questions:

1. Where does judgment currently live in this process? Identify the people who reconcile conflicts, interpret ambiguous data, and make calls when systems disagree. These roles often have titles like "senior analyst," "team lead," or "operations manager"—and they're frequently the first targets in a reorg.

2. Where will that judgment need to move? AI changes coordination. Map where decision rights will relocate—not just where tasks get automated. In my experience, judgment usually moves up (to more strategic decisions) and out (to customer-facing moments).

3. Who owns the gray area after the reorg? If no one is accountable for edge cases and trade-offs, the system will technically work—until it doesn't. Make this explicit. Name the person. Define the escalation path.

4. What's your 90-day feedback loop? The first restructuring is never right. Build in explicit checkpoints to assess whether judgment is flowing to the right places—or falling through the cracks.

The organizations that get AI right won't be the ones that move fastest on headcount. They'll be the ones that understand where value is moving before they start cutting.


This is Part 1 of a three-part series on AI decision errors. Part 2 will tackle the mistake of confusing skills with decision rights—and why your AI training programs might be solving the wrong problem.


About the Author

Human Capitalist

As a recognized authority in Human Capital, I'm passionate about how AI is transforming HR and shaping the future of our workforce. Through my books Sprint Recruiting: Innovate, Iterate, Accelerate and High-Performance Recruiting, I've introduced agile methodologies that help organizations thrive in today's rapidly evolving talent landscape. 

My research in AI-powered people analytics demonstrates that HR must evolve from administrative functions to strategic business partnerships that leverage technology and data-driven insights. I believe organizations that embrace AI in their HR practices will gain significant competitive advantages in attracting, developing, and retaining talent. 

Through my podcast, The Human Capitalist, and speaking engagements nationwide, I'm committed to helping HR professionals prepare for workplace transformation and technological disruption. Connect with me at www.trentcotton.com or linktr.ee/humancapitalist to learn how you can position your organization for the future of work.
