
The Disappearing Middle of Software Work: Why the Bookends – Strategy & Impact – Matter Most Now


Here’s a question nobody in enterprise software wants to sit with: what happens to the middle? 

Not the middle of the org chart. The middle of the work. The vast, expensive layer of effort that has defined enterprise software delivery for thirty years—translating what the business wants into working code. The requirements-to-implementation pipeline. The “build phase.” 

That middle is compressing. Fast. Forty-one percent of all code written in 2025 was AI-generated. GitHub Copilot crossed 20 million users, up 400% in a single year, adopted by 90% of the Fortune 100. McKinsey’s controlled studies show developers completing tasks in roughly half the time. The gap between “knowing what to build” and “having working software” is shrinking. 

Karri Saarinen, the CEO of Linear, named it directly earlier this year: the middle of software work—turning intent into something real—absorbed most of the time, attention, and craft of software teams. When that middle disappears, what comes into focus is forming the right intent and making sure the outcome actually meets it. 

Read that again. The front and the back. Not the middle. 

When building software is no longer the largest effort, the importance shifts to knowing what to build and proving that it worked.

That’s the thesis. And if you’re responsible for an enterprise technology portfolio, the implications are enormous. 

The productivity numbers are contested. That makes the point stronger.

Before you assume this is another breathless “AI changes everything” piece, consider the counterevidence. A randomized controlled trial by METR found that AI actually made experienced developers 19% slower on real-world tasks—even as those same developers believed they were 20% faster. Google’s DORA Report showed that every 25% increase in AI coding tool adoption correlated with a small dip in delivery speed and a measurable drop in system stability. GitClear found AI-generated code has a 41% higher churn rate—meaning it gets rewritten more often.

This doesn’t undermine the thesis. It sharpens it. The middle isn’t simply accelerating—it’s transforming. The nature of the work shifts from writing code to reviewing, directing, and validating AI output. The volume of raw implementation goes up while the judgment required to make it good stays stubbornly human. The middle gets cheaper, not necessarily faster. And that changes the economics of every engagement. 

We’ve seen this movie before. It was called “buy a platform.”

For twenty years, the enterprise playbook was straightforward: buy a platform, configure it, standardize on it. The logic was sound. Custom software was expensive, risky, and hard to maintain. Platforms eliminated risk by eliminating uniqueness. 

The problem, of course, is that your competitors can buy and configure that same platform. Hundreds of millions spent implementing best-in-class systems that didn’t actually get the business what it wanted—because the platform wasn’t designed for how that particular business runs. We’ve watched this happen at client after client. The numbers back it up: 55–75% of ERP projects fail to meet their objectives. Seventy percent of digital transformation initiatives fail outright. IDC estimates $2.3 trillion has been wasted globally on failed digital transformation programs. 

And here’s the thing that doesn’t get said enough: buying a platform is the dying middle in enterprise form. It’s the same compression, just at a higher altitude. “Buy platform, configure platform, staff platform” was the enterprise equivalent of “translate spec into code.” Both are commodity middle layers. Both are hollowing out. 

Your uniqueness and differentiation are what drive your competitive advantage. Your technology platforms should enable and accelerate that advantage. They should never hold you back or dilute it.

The organizations that figured this out early are already moving. Sixty-one percent of enterprises expect a fully composable architecture by 2026. Gartner predicts those pursuing composable approaches will see 30% higher revenue growth. The market is shifting from off-the-shelf to purpose-built—not because custom is fun, but because differentiation demands it.

The front: when building becomes commoditized, knowing what to build is the whole game.

If the middle is compressing, where does the value go? It goes to the front of the engagement—the work that happens before anyone writes a line of code.

This sounds obvious until you look at how most organizations actually spend their money. PMI’s data shows 47% of unsuccessful projects fail due to inaccurate requirements. Barry Boehm’s foundational research demonstrated that fixing a requirements error in production costs up to 100 times more than catching it during the requirements phase. And yet most engagements still sprint through discovery in a few weeks so they can get to the “real work” of building. 

That instinct made sense when building was expensive. When the build phase took 18 months and consumed 80% of the budget, of course you wanted to get there quickly. But the economics have flipped. When AI compresses the build by orders of magnitude, the dominant risk is no longer “can we build it?” The dominant risk is “are we building the right thing?” 

Marty Cagan at SVPG has been saying this for years: only 10–30% of shipped features actually yield positive business outcomes. The majority of what teams build goes underused. That’s not a build problem. That’s a discovery problem—and it’s a discovery problem that gets more expensive, not less, as building gets cheaper. Because now you can build the wrong thing faster than ever. 

Discovery isn’t a phase. It’s the product.

There’s an instinct the best discovery practitioners share: the discipline of approaching every engagement willing to ask questions that feel almost embarrassingly simple. Why are we doing this? Is this even a technology problem? What happens if we do nothing? What are three people in this room assuming that the other two don’t know about?

It turns out this isn’t just a consulting trick. Harvard researchers studying 166 science challenges found that problem-solving success was positively correlated with the solver’s distance from the problem’s domain. Outsiders solved problems that stumped experts—precisely because they didn’t carry the assumptions that kept insiders stuck. Fresh eyes aren’t a weakness. When AI handles the predictable middle, fresh eyes become the entire value proposition. 

The back: the graveyard of software is full of things that shipped on time.

Here’s the piece most firms are sleeping on. If AI can generate code fast and platforms can be configured in weeks, the critical question flips from “Can we build it?” to “Does it actually work? Does it actually matter?” 

The data on this is genuinely alarming. Pendo’s research shows 80% of features in the average software product are rarely or never used. The Standish Group found 45% of features are never used at all. CISQ and Carnegie Mellon estimate the total cost of poor software quality in the United States at $2.41 trillion.

Read those numbers again. We’re not talking about failed projects. We’re talking about shipped projects—things that were built, deployed, and declared “done”—that nobody uses. The definition of success in most organizations is still “it launched.” That’s a catastrophically low bar in a world where launching is becoming trivial. 

Validation—real validation, not UAT sign-off—becomes the premium skill. Does it solve the stated problem, not the spec? Is anyone actually using it? Are the advantages compounding over time, or is this a one-and-done project that decays the moment the team walks away? Can you prove the ROI, or did you just build an expensive science project? 

One of the sharpest framings we’ve encountered came from an internal discussion about sprint cadence. The insight was this: a project team’s real sprint cycle isn’t the two-week increment a project manager puts on a calendar. It’s the time between exposures to real end users for genuine feedback. If you can’t get that exposure until two years from now, your actual sprint cycle is two years. Not two weeks. You’re waterfall wearing an agile costume. 

AI is incredible at generating answers. It’s terrible at knowing whether the answer was right.

That gap—between generating output and validating that the output matters—is where the next era of consulting value lives. The firms that master it will compound their clients’ advantages with every cycle. The firms that skip it will keep shipping features into the void. 

The new shape of the engagement.

Put it all together and the geometry of a software engagement fundamentally changes. The old model was thin discovery, long build, minimal validation—with budget allocated roughly 10/80/10 across those phases. The new model inverts it: deep discovery, expedited build, rigorous validation. Something closer to 30/40/30. 

This isn’t a theoretical exercise. David Autor at MIT describes a “barbell pattern” emerging across the knowledge economy—demand growing at both ends while the middle thins out. Harvard and BCG’s “jagged technological frontier” study showed AI supercharges mid-tier tasks but value accrues to those who know which tasks are inside the frontier and which are outside it. The inside tasks get automated. The outside tasks—the ones requiring judgment, context, and human understanding—become more valuable, not less. 

From projects to flywheels.

There’s one more shift that matters, and it’s the one that ties the front and the back together into something durable. 

Most enterprise software work is still structured as projects—discrete engagements with a start, a build, and a handoff. The problem with projects is that they don’t compound. Every new initiative starts from zero. Discovery gets repeated because institutional knowledge walks out the door. Validation gets skipped because the team has already moved on. The client pays $500,000 for discovery, gets value from it, and then pays $500,000 again eighteen months later to rediscover the same things.

The alternative is a flywheel: institutional knowledge that compounds over time, people practiced in curating real-world feedback from the field, engineering patterns that accelerate delivery without waiting on legacy modernization, and digital products that serve as insight sources feeding back into the next cycle. Each rotation costs less and delivers more. Each engagement makes the next one smarter. 

Discovery produces a hypothesis and success metrics. The build is fast enough to test the hypothesis quickly. Real usage creates adoption signals. Those signals become recommendations, which inform the next discovery cycle.  

This is precisely what our Adoption App is built to do. Rather than relying on anecdotal feedback or post-launch surveys, it embeds structured testing directly into the delivery process—translating real-world workflows into moderated task scripts, capturing behavioral signals and friction points in the moment, and feeding prioritized recommendations back into the product. It turns the feedback loop from a manual, episodic exercise into an instrumented, repeatable system. The flywheel doesn’t just exist in theory; it runs on something. 

BCG’s research on “digital flywheel” companies found they delivered 2.5x higher total shareholder value. OpenView’s data shows product-led growth companies—the ones built on usage-feedback-improvement loops—grow at 50% year-over-year compared to 21% for traditional models. The economics of compounding are brutal for firms that don’t have them and transformative for firms that do. 

The firms that will thrive aren’t the ones with the biggest build teams. They’re the ones that master the bookends—the discipline to slow down at the start and the rigor to actually validate at the end.

The edges are where the value lives.

If you’re responsible for an enterprise technology portfolio, look at your last three major software initiatives. How much time and budget were allocated to discovery versus build versus validation? If the answer is anywhere near 10/80/10, you’re investing heavily in the one part that AI is about to commoditize. 

The middle will take care of itself. It already is. What won’t take care of itself is the hard, human, sometimes uncomfortable work of figuring out the right problem to solve—and then proving, with evidence, that you actually solved it. 

That’s where competitive advantages compound. That’s where the future of this work lives. 

And if that sounds simple, good. The best questions usually do. 

About Josh Bartels

With over 15 years at the forefront of technology innovation, I've dedicated my career to delivering strategic solutions that drive business growth. As the Chief Technology Officer of UDig, I lead our technology vision, architecting solutions that transform how organizations leverage technology to generate impact.
