When a system consumes more resources than a NASA project and still fails to deliver
There is a management philosophy, more common than most people realize, that holds that understanding the details of what you oversee is not only unnecessary but somehow beneath the role. Strategy lives at altitude. Details are for other people.
This is a story about what that philosophy costs — measured precisely, in people and dollars and degraded code.
The System
At Canada’s largest bank, EBR — Enterprise Book of Record — was a data processing system. Its job was to process changes to equity reference data across five data streams and publish the results to downstream systems.
By volume it was a small system. Modest, even. On a typical day it processed between three and seven thousand changed records — delta updates representing three to seven percent of the daily source data. Five streams. Two processed daily. Two monthly. One weekly.
The system had existed since 2004. It had accumulated twenty years of decisions, additions, patches, and workarounds. It had a history. It had weight.
What it did not have was proportionate infrastructure.
What I Found
I joined EBR and spent three months understanding it in detail — the code, the database, the processing logic, the support procedures, the development practices. By the end of those three months I had, as I wrote at the time, the best technical understanding of anyone working on the project, onshore or offshore.
What I found was an infrastructure so disproportionate to the actual problem it was solving that the numbers require a benchmark to make sense.
At FundSERV, a single business analyst managed the full processing lifecycle for the entire organization — mutual fund trade processing, regulated by law, required to be perfect, handling eight billion dollars in trades every day. Four testers covered the complete test cycle.
EBR processed three to seven thousand record changes daily. It had at least as many testers as FundSERV. It had more business analysts than any small system I had encountered in thirty years. It had fifteen to twenty people supporting it at any given time. It had a database with over two hundred tables and one hundred and fifty stored procedures in the primary system alone — close to one thousand data objects across all three databases combined. It had hundreds of shell scripts and over one hundred Java classes, most of them duplicating the same functionality with minor variations in file names and table references.
The infrastructure was not proportionate to the system. It was proportionate to the confusion about the system — and the confusion was profound.
The Director
The director who oversaw EBR was proud of something unusual.
He said it openly, in meetings, more than once. He did not understand what EBR did or how it did it. He stated this not as a confession or an apology but as a matter of fact — almost as a point of professional identity. Understanding the system was not his job. His job was to manage the people who understood it.
I had never encountered this before. I have not encountered it since.
The problem with not understanding what you oversee is not philosophical. It is operational. Every decision that flows from a position of deliberate ignorance is made without the information needed to make it correctly.
When management does not understand where competence lives, competence becomes invisible. The people doing the most valuable work are indistinguishable from the people producing the most visible activity. The person who fixes things quietly is less visible than the team that generates tickets, meetings, documents, and process. The offshore development team that was copying existing code, introducing bugs at scale, and degrading the codebase with every release was generating enormous visible activity. The Toronto developers who were fixing what the offshore team broke were generating results.
When the cuts came, management terminated the Toronto developers — myself included — and kept the offshore team.
The director who did not understand his own system could not have known what he was doing. That is precisely the point.
The Numbers That Tell The Story
The document I produced after three months broke the dysfunction into precise components.
Business analysis was consuming hundreds of thousands of dollars annually. Not a single analyst had a thorough, accurate understanding of what EBR should do and what it actually did. In one notable case, analysts produced a multi-page document describing EBR’s functionality and proposed changes. The developer assigned to implement the changes found that ninety percent of the functionality the document described simply did not exist — no code, no data, no implementation of any kind. The analysts had been documenting a system they had never fully understood.
Testing was consuming comparable resources. Code tested and signed off by the testing team was failing in production regularly. Testing was consistently behind schedule. I completed performance improvement code in November and was still waiting for it to be tested in January. I offered to test it myself. I was told that was the testing team’s job.
The offshore development team was operating as, in my words at the time, text editors rather than developers. Eighty to ninety percent of their work was copying existing code and making the minimum changes required. A request to remove a single field from a data source — the Margin Lending SRF_ID — generated a three-day estimate from the offshore team. I did it in less than an hour and included it in the testing release the same day.
The code base had accumulated thousands of lines of dead code — scripts, tables, and procedures for two entire data streams that were no longer processed, still sitting in production years after they had been decommissioned. Nobody had cleaned them up because nobody had a complete enough understanding of the system to know what was safe to remove.
All three databases were running out of storage space. EBR was pushing storage costs onto unrelated business groups at RBC — networking, storage infrastructure — because the data object count had grown so far beyond what the actual processing volume required.
The Comparison That Makes It Concrete
FundSERV processed eight billion dollars in regulated mutual fund trades every day. One business analyst. Four testers. The system worked. The trades were accurate. The regulatory requirements were met.
EBR processed three to seven thousand record changes daily. Fifteen to twenty people. Hundreds of scripts. Close to one thousand database objects. Testing that failed regularly and ran perpetually behind. Business analysis that documented functionality that did not exist.
The ratio of infrastructure to output was not a minor inefficiency. It was a systems-level failure — the accumulated cost of two decades of decisions made by people who did not fully understand what they were deciding about, managed by a director who considered that ignorance an acceptable condition of leadership.
What I Proposed
The fix was not complicated. It required clarity about what EBR actually was.
EBR was a small ETL system. It loaded data, applied business rules, and published results. Eighty to ninety percent of the development effort was consumed by the data loading component — a generic technical task that commercial ETL tools handle routinely and inexpensively.
The path forward was straightforward: acquire a commercial ETL tool for the generic data loading work, freeing developer capacity for the business logic that actually required custom development. Consolidate the codebase. Eliminate the dead code. Enforce the architectural separation of concerns that the system had never respected. Replace the offshore team’s copy-paste development cycle with something that did not multiply every bug across a dozen scripts simultaneously.
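The consolidation idea can be sketched in a few lines: a single table-driven loader in which each data stream is a configuration entry rather than its own near-duplicate script, so a bug fix lands in one place instead of a dozen. This is a minimal illustration of the pattern, not EBR’s actual design — the stream names, file paths, and fields below are hypothetical.

```python
# Hypothetical sketch: one parameterized loader replacing N copy-pasted scripts.
# Stream names, file paths, and field lists are illustrative, not EBR's schema.

from dataclasses import dataclass


@dataclass(frozen=True)
class StreamConfig:
    name: str            # data stream identifier
    source_file: str     # incoming delta file
    target_table: str    # destination table
    key_fields: tuple    # primary-key columns for the upsert


# All five streams declared as data, not as five near-identical scripts.
STREAMS = [
    StreamConfig("equity_daily_a", "in/daily_a.csv", "ref_equity_a", ("isin",)),
    StreamConfig("equity_daily_b", "in/daily_b.csv", "ref_equity_b", ("isin",)),
    # ... monthly and weekly streams declared the same way
]


def load_stream(cfg: StreamConfig, rows):
    """Generic delta load: one code path shared by every stream."""
    statements = []
    for row in rows:
        keys = {k: row[k] for k in cfg.key_fields}
        statements.append((cfg.target_table, keys, row))
    return statements  # handed to a single, shared upsert executor
```

Under this structure, removing a field from a data source is a one-line configuration change rather than a multi-day edit across duplicated scripts — which is the gap the Margin Lending SRF_ID estimate exposed.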
The savings estimate was conservative: eighty percent reduction in development effort for data loading alone, with additional savings in testing and maintenance that could reduce total costs by ninety to ninety-five percent over the system’s lifetime.
I also proposed an alternative — a dynamic EBR redesign that I could have delivered in six months in parallel with ongoing development, improving performance tenfold and eliminating the need for custom coding in ninety-five percent of future data loading scenarios.
Neither proposal was acted on.
The Toronto developers who understood the system were terminated. The offshore team that was degrading it was retained. The director who did not understand his own system continued to oversee it.
What This Story Is About
I want to be precise about the target of this story because it is not the offshore developers, who were doing what inadequately supervised developers do, or the business analysts, who were doing what analysts do when nobody holds the work to a standard of accuracy, or even the testing team, who were doing what testers do when they do not understand the system they are testing.
The target is the management philosophy that treats understanding as optional.
A director who publicly states that he does not know what his system does or how it does it has made a decision — consciously or not — that his judgment about that system will be permanently uninformed. Every resource allocation, every personnel decision, every priority call flows from that uninformed judgment. The fifteen to twenty people consuming resources disproportionate to a three to seven thousand record daily processing volume were there because nobody with the authority to ask the fundamental question — is this proportionate to what the system actually does? — had ever asked it.
I asked it. I documented the answer precisely. The document exists.
The system, as far as I know, continues as it was.
Russ Profant is a solutions architect and independent consultant with 30 years of experience across financial services, investment banking, healthcare, and government. He runs PC4IT, offering cloud cost diagnostics and architecture advisory to mid-market organizations. pc4it.com