Your Modernization Project Is Smaller Than You Realize

Blog

The Glover Team

The complexity of legacy software means most modernization efforts start from the same assumption: all the code matters. That assumption is almost never true, but it persists because it is very hard to figure out which code is dead.

In our experience — and this is consistent across every legacy estate we've analyzed — somewhere between 30% and 50% of enterprise code is dead. Not "rarely used." Dead. Long-deprecated features that were never pruned. Customer customizations from clients who left a decade ago. Abandoned branches that got merged and forgotten. Entire subsystems that haven't processed a transaction since the floppy disk era.

Traditionally, figuring out what is dead is a huge part of scoping a modernization effort. Initial consultant estimates usually price against the full line count, because they have no way to know better. AI agents that look only at code do the same: they ingest everything and treat it all as equally important. The result is that enterprises routinely pay to modernize code that could have been deleted, or spend huge amounts of time and money figuring out what should stay and what should go.

What Dead Code Actually Looks Like

Dead code in a legacy enterprise system isn't the kind your IDE flags with a yellow squiggle. It's not an unreachable else branch or an unused import. Those are trivial.

Enterprise dead code is structural. It's the claims adjudication module that was built for a product line the company exited in 2014. It's the reporting subsystem that was replaced by a third-party tool but never decommissioned because nobody was sure what else depended on it. It's the batch processing logic for a regulatory filing format that hasn't been required since the rules changed six years ago.

This code compiles. It deploys. It may even run on a schedule. But it produces no business value and hasn't for years. And in a COBOL system — where everything is connected through copybooks, JCL, and batch schedulers — the dependencies are opaque enough that most teams are afraid to touch it.

So it stays. And when modernization comes, it gets modernized along with everything else.

Why Code Analysis Alone Can't Find It

If you're evaluating AI modernization tools, this is the question to ask: can they tell you what's dead? Because static code analysis alone cannot reliably answer it.

A static analyzer can tell you that a function is never called from within the codebase. It can't tell you that the function is invoked by a batch scheduler, triggered by an external event, or referenced in a configuration file it doesn't know about. In legacy environments, execution paths routinely cross boundaries that no single analysis tool can see — JCL job streams, message queues, database triggers, UI-driven workflows, third-party integrations.
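To make the failure mode concrete, here is a minimal sketch of naive static reachability. All function and job names are hypothetical, for illustration only: the analyzer flags a function with no internal caller as dead, even though a batch scheduler invokes it from outside the codebase.

```python
# Naive static reachability check: flag any function with no internal
# caller as dead. All names here are hypothetical.

internal_calls = {
    "process_claim": ["validate_claim", "post_ledger"],
    "validate_claim": [],
    "post_ledger": [],
    "monthly_regulatory_extract": [],  # no caller anywhere in the codebase
}

known_entry_points = {"process_claim"}  # e.g. wired to an online transaction

called = {callee for callees in internal_calls.values() for callee in callees}
flagged_dead = sorted(fn for fn in internal_calls
                      if fn not in called and fn not in known_entry_points)
print(flagged_dead)  # ['monthly_regulatory_extract']

# But a batch scheduler (a JCL job stream, cron, etc.) invokes that function
# directly -- a caller that lives outside the codebase, invisible to the
# static analyzer. The "dead" finding is a false positive.
scheduler_entry_points = {"monthly_regulatory_extract"}
false_positives = sorted(set(flagged_dead) & scheduler_entry_points)
print(false_positives)  # ['monthly_regulatory_extract']
```

The scheduler configuration on the last few lines is exactly the kind of artifact a code-only tool never ingests.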

The result is that purely code-based tools produce one of two outcomes: false negatives (they miss dead code because they can't prove it's dead across all possible invocation paths) or false positives (they flag live code as dead because they can't see the external caller). Neither is useful at enterprise scale. The first means you modernize code you don't need. The second means you delete code you do.

The only reliable way to identify dead code is to correlate static analysis with production telemetry. You need to know not just what the code says, but what actually executes. Which modules are hit in production. Which batch jobs actually run and produce output that something downstream consumes. Which UI paths users actually traverse. The gap between what's in the codebase and what's in production is where dead code lives.
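In the simplest terms, the correlation is a set difference: the static inventory minus everything telemetry has seen executing. A minimal sketch, with hypothetical module names and hit counts (not our actual pipeline):

```python
# Diff the static inventory against production telemetry: anything in the
# codebase that never executes in production is a dead-code candidate.
# Module names and hit counts are hypothetical.

static_modules = {"CLAIMS01", "CLAIMS02", "RPTGEN", "REGFILE", "LEDGER"}

# Execution counts observed over, say, 12 months of production logs,
# batch scheduler history, and APM traces.
telemetry_hits = {"CLAIMS01": 48_210, "LEDGER": 48_210, "CLAIMS02": 312}

live = {m for m in static_modules if telemetry_hits.get(m, 0) > 0}
dead_candidates = sorted(static_modules - live)

print(dead_candidates)  # ['REGFILE', 'RPTGEN'] -- review, don't auto-delete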

The Omnimodal Difference

At Glover Labs, dead code detection isn't a feature we bolted on. It's a natural consequence of how we build understanding.

Our approach — using omnimodal ingestion — doesn't start and end with source code. We ingest code, UI flows, production logs, support tickets, documentation, database schemas, and subject-matter expert knowledge. We build execution graphs and ASTs using static code analysis, then map production telemetry against them to identify which code paths are live and which are dead.

The UI evidence matters more than people expect. If a module exists in the codebase but no UI flow references it and no production telemetry hits it, that's a strong signal. If a support ticket system shows zero issues filed against a subsystem in three years, that's another. When you layer code analysis, runtime data, and operational evidence together, the dead code reveals itself — not through any single signal, but through the convergence of all of them.
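One way to picture that convergence is a classifier that only calls a module dead when every independent signal agrees. This is an illustrative sketch with hypothetical signals and thresholds, not Glover's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    telemetry_hits_12mo: int  # production executions observed
    ui_references: int        # UI flows that can reach the module
    tickets_36mo: int         # support tickets filed against it

def classify(e: Evidence) -> str:
    """Call a module dead only when every independent signal agrees."""
    if e.telemetry_hits_12mo == 0 and e.ui_references == 0 and e.tickets_36mo == 0:
        return "dead-candidate"
    if e.telemetry_hits_12mo == 0:
        return "review"  # signals disagree; escalate to an SME
    return "live"

modules = {
    "CLAIMS01": Evidence(48_210, 14, 37),
    "RPTGEN":   Evidence(0, 0, 0),  # replaced by a third-party tool
    "REGFILE":  Evidence(0, 1, 0),  # a UI screen still links to it
}
verdicts = {name: classify(ev) for name, ev in modules.items()}
print(verdicts)
```

The "review" bucket is the point: a single dissenting signal routes the module to a human instead of the delete pile.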

Code-only tools can't do this. They treat all code as equally important because they have no other signal to work with. A tool like Devin or Cursor pointed at a legacy codebase will dutifully analyze, translate, and modernize dead modules right alongside live ones — burning time, budget, and review cycles on work that produces zero value.

What This Means for Your Modernization Budget

The numbers are large enough to change project economics. We've seen prospects with 80+ application estates where dead code detection alone could eliminate 30–50% of migration scope. Not 30–50% of effort on a given module — 30–50% of the modules themselves.

Run that math against your modernization budget. If you're scoping a $20 million program and 30–50% of the code is dead, you're looking at potentially $6–10 million in unnecessary work — unnecessary analysis, unnecessary translation, unnecessary testing, unnecessary review, unnecessary deployment. And that's before you account for the downstream risk: modernized dead code still needs to be maintained, still consumes CI/CD resources, still adds surface area for bugs and security vulnerabilities.

The smarter play is to know what's dead before you start. Build the map first. Understand which parts of the system are actually producing business value, which are vestigial, and which are somewhere in between. Then scope your modernization against reality, not against a line count.

Dead Code as a Leading Indicator

The dead code problem is worth solving on its own economics. But it also tells you something uncomfortable about the organization.

When code stays dead for years without anyone noticing, the organization has lost track of what its own systems do. The people who wrote those modules have retired or moved on. The documentation — if it ever existed — is buried in a SharePoint nobody checks. The operational knowledge that would tell you "that module hasn't done anything useful since 2016" exists only in the heads of people who are too busy to be asked.

We've seen this at every enterprise we've worked with. The dead code percentage correlates almost perfectly with how much institutional knowledge has walked out the door. A system with 30% dead code has had moderate attrition and some documentation gaps. A system with 50% dead code has had a generational turnover — the people who built the system are gone, and the people running it inherited something they don't fully understand.

Dead code is a symptom of lost context. And lost context is what makes every other part of modernization harder — dependency mapping, business rule extraction, risk assessment, migration planning. If you don't know what's alive and what's dead, you don't know what your system does. And if you don't know what your system does, no agent — no matter how capable — can modernize it safely.

Building that context back, systematically, from every available source — code, telemetry, tickets, docs, the people who still remember — is, we think, the first thing a modernization program should do. We built Glover Labs to do exactly that.

Book a demo to see how the Living Spec maps what's live, what's dead, and what it means for your modernization scope.