“Yes, More or Less”

A large Canadian insurance company had a problem with its operations team.

Every morning, seven days a week, from seven to ten AM, the team worked through three checklists before the segregated funds were released for trading. Three hundred pages of manual verification. Daily. Without exception.

Management had been watching this for years. The conclusion was obvious: this was inefficiency. Bad process. Legacy behavior that a modern renovation project should eliminate.

I was brought in as enterprise architect on that renovation project. Eliminating the manual work was one of the top objectives.


What I Found Instead

The operations team was too busy to talk to me. Every morning, seven to ten, they were working through the checklists. That busyness was exactly what management was pointing to as evidence of the problem.

I could not investigate directly. So I built the picture from the margins.

Conversations with adjacent teams. System documentation. Process artifacts. Data flow analysis across the ten upstream systems feeding into the fund administration platform.

The picture that emerged was not what management saw.

The ten upstream systems had a data quality problem. A chronic one. Data errors had been flowing into the fund administration system for years. The operations team had flagged the issues repeatedly. Fixes had been applied, supposedly. The errors kept coming.

At some point the operations team had made a rational decision. They stopped trusting the incoming data. They built their own verification layer — three hundred pages of daily manual checks designed to catch every category of error before the funds were released for trading. They were not being inefficient. They were being rigorous. The manual process was not the problem. It was the solution to a problem that nobody above them fully understood.

Management had been misreading this for years. They saw people doing manual work and concluded the people were the issue. They never asked why the manual work existed.


The Question

I had one opportunity to speak directly with the operations manager. A brief window between the morning process and the rest of her day.

I asked one question.

“You are doing all this to prevent trading errors — because you do not trust the incoming data — because no matter how many times you flagged the issues and fixes were supposedly applied, the data errors never stopped?”

She looked at me for a moment.

“Yes. More or less.”

Three words. Everything confirmed.


What It Meant

The renovation project had been designed to automate away the manual checks. If it had proceeded as planned, it would have eliminated the only functioning quality control mechanism in the fund administration process.

The errors the operations team was catching every morning — errors in a regulated financial environment where fund miscalculations have serious legal and financial consequences — would have re-emerged. At scale. Automatically. Without the human verification layer that had been quietly preventing them for years.

My supervisor, the AVP architect who had hired me for the project, did not fully grasp this when I explained it. The diagnosis was that counterintuitive. Years of observing the manual process had not produced the question that three weeks of peripheral investigation had answered.

The renovation objective needed to change. Not eliminate the manual work. Fix the upstream data quality that made the manual work necessary. That is a different project with a different scope and a different solution.


The Pattern

Every organization has a version of this.

Somewhere in the building, a team is doing something that looks inefficient from above. Extra steps. Manual processes. Workarounds that have become load-bearing without anyone above a certain level understanding why they exist.

The people doing the work know. Management does not.

The diagnostic question is never “how do we eliminate the manual work?” It is always “why does the manual work exist?” The answer to the second question changes everything about the answer to the first.

In this case the operations team had built something genuinely sophisticated under constraint, without recognition, while being misread as the source of the problem they were actually solving.

The question took one sentence. The answer took three words. Everything else followed from there.


Russ Profant is a solutions architect and independent consultant with 30 years of experience across HP, Morgan Stanley, CIBC, and RBC. He runs PC4IT, offering cloud cost diagnostics and architecture advisory to mid-market organizations. pc4it.com