What's Really Driving the Cost of Insurance Migration?
The Economic Drivers That Shape Every Decision
In our last post, we looked at the transaction costs of insurance migration and how they distort strategic decision-making across the industry. The numbers were stark: 83% of data migration projects failing or overrunning, cost blowouts averaging 30%, and the UK littered with transformation programmes that consumed hundreds of millions before being quietly shelved. Expensive lessons in not knowing what you were sitting on.
But stating that migration is expensive doesn't help anyone. The more useful question is: why is it expensive? What are the specific economic drivers that turn moving a book of business from one platform to another into the kind of programme that gives CTOs night sweats?
Understanding these drivers isn't academic. It's the difference between walking into a migration with a realistic plan and walking into one with a spreadsheet full of optimism. So let's pull them apart.
1: Data Complexity and Quality
This is the one that bites hardest, and usually earliest.
Insurance policy data isn't a clean dataset sitting in well-structured tables waiting to be moved. It's decades of accumulated business decisions, encoded in formats that made perfect sense to the system that created them and make very little sense to anything else. Endorsements layered on endorsements. Claims linkages that cross product lines. Rating factors stored in ways that reflected the technology of the time, not the needs of today. A few decades of "we'll fix it later," except later is now, and it's expensive.
Research consistently shows that bad data costs organisations around $15 million per year on average. In insurance specifically, poor data quality has been estimated to cost carriers up to 20% of their revenue. That's not a rounding error. That's a structural problem wearing a very convincing disguise. When you attempt to migrate that data to a new platform, every inconsistency, every duplicate, every mystery code becomes a problem that has to be resolved before the migration can proceed. The industry talks about "Phase Zero," a dedicated stage for data profiling, auditing, and cleansing before the actual migration begins. Good practice. Also where budgets start to stretch, timelines start to slip, and someone asks "did we know about this?" The answer is usually no.
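The profiling half of Phase Zero is conceptually simple, even if the scale isn't: count what's duplicated, what's missing, and which codes don't appear in any reference data. A minimal sketch, with invented field names and status codes standing in for a real schema:

```python
from collections import Counter

# Hypothetical policy records; field names are illustrative, not a real schema.
policies = [
    {"policy_id": "P001", "product": "MOTOR", "status_code": "A"},
    {"policy_id": "P002", "product": "MOTOR", "status_code": "ZZ"},  # mystery code
    {"policy_id": "P002", "product": "HOME",  "status_code": "A"},   # duplicate id
    {"policy_id": "P003", "product": None,    "status_code": "L"},   # missing field
]

KNOWN_STATUS_CODES = {"A", "L", "C"}  # assumed reference data

def profile(records):
    """Flag the three issues Phase Zero typically hunts for first:
    duplicate keys, missing values, and codes absent from reference data."""
    id_counts = Counter(r["policy_id"] for r in records)
    return {
        "duplicates": sorted(pid for pid, n in id_counts.items() if n > 1),
        "missing_fields": [r["policy_id"] for r in records
                           if any(v is None for v in r.values())],
        "unknown_codes": sorted({r["status_code"] for r in records
                                 if r["status_code"] not in KNOWN_STATUS_CODES}),
    }

report = profile(policies)
```

The hard part isn't the checks themselves; it's deciding, record by record, what each anomaly means and who owns the fix.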
When 10 million or more records are involved, 61% of projects overrun on timelines alone, and that's before anyone has actually migrated anything.
There's a deeper issue here too. Insurers report that 60 to 80% of the data they collect is effectively inaccessible, trapped in formats, systems, or structures that prevent it from being used. This "dark data" doesn't just represent wasted storage. It represents a minefield for migration teams who have to decide what to bring forward, what to archive, and what to leave behind, often without clear visibility of what's actually there. You can't make good decisions about data you can't see. And yet that's exactly what most migration programmes are asked to do.
2: Regulatory and Compliance Overhead
In the UK, this is where the cost compounds. And where the stakes get personal.
The FCA requires firms to maintain appropriate controls around data accuracy, access, storage, and use. That's not a suggestion. Providing information that is false, misleading, or deceptive is a criminal offence under FSMA. For migration projects, this means every policy record carried across must be demonstrably complete and accurate, with full traceability maintained throughout the process. No shortcuts. No "close enough."
In practice, that means additional validation cycles, compliance sign-off gates, specialist resource for data reconciliation, and audit trails that document every transformation applied to every record. For life insurers, where policy terms can span decades and contractual obligations are long-tailed, the regulatory bar is even higher. You're not just moving data. You're moving promises.
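What "full traceability" means in practice: every transformation applied to a record leaves an audit entry showing what changed, when, and what the record looked like afterwards. A simplified sketch, where the transforms and field names are invented for illustration:

```python
import hashlib
import json
from datetime import datetime, timezone

def apply_with_audit(record, transforms):
    """Apply each named transformation and log what changed, so every
    record carried across can be traced back to its source state."""
    trail = []
    for name, fn in transforms:
        before = dict(record)
        record = fn(record)
        trail.append({
            "step": name,
            "changed": {k: (before.get(k), record.get(k))
                        for k in record if before.get(k) != record.get(k)},
            "at": datetime.now(timezone.utc).isoformat(),
            # Checksum of the record after this step, for reconciliation.
            "checksum": hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest(),
        })
    return record, trail

# Illustrative transforms: map a legacy status code, reformat a UK date to ISO.
transforms = [
    ("map_status", lambda r: {**r, "status": {"A": "ACTIVE", "L": "LAPSED"}[r["status"]]}),
    ("iso_date",   lambda r: {**r, "start": "-".join(reversed(r["start"].split("/")))}),
]

record, trail = apply_with_audit(
    {"policy_id": "P001", "status": "A", "start": "01/04/1998"}, transforms)
```

Multiply this by every record in the book and every step in the pipeline, and the storage, tooling, and review effort behind "demonstrably complete and accurate" starts to become visible.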
The FCA's own evaluation of its General Insurance Pricing Practices remedies found that compliance costs were higher than initially assessed, with implementation costs disproportionately impacting smaller firms. Migration carries a similar dynamic. The regulatory overhead doesn't scale linearly with book size. A small specialist insurer migrating 50,000 policies faces many of the same compliance requirements as a composite carrier migrating five million. Same gates, same evidence, fraction of the budget.
3: Platform and Vendor Lock-In
The major policy administration vendors (Guidewire, Sapiens, Duck Creek and others) have built powerful ecosystems. Guidewire alone serves over 570 insurers. Duck Creek was acquired by Vista Equity Partners for $2.6 billion in 2023. These are significant platforms, and they do significant things.
But that power comes with a structural consequence. Data structures optimised for fast daily processing and efficient storage within these platforms are, as industry analysts put it, radically different from the structures needed for extraction, analysis, or migration. Proprietary data models mean that what should be a straightforward data export frequently becomes an engineering exercise. And sometimes a negotiation.
Pricing models compound this. Guidewire charges annual fees as a percentage of Direct Written Premium, with incremental fees as volumes grow. The economics of staying can feel punitive, but the economics of leaving (extracting data from a proprietary ecosystem, transforming it, and loading it into an alternative) can feel worse. It's the hotel you can't afford to stay in and can't afford to leave.
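To see why premium-linked pricing feels like gravity, here's a rough sketch of how a tiered fee on Direct Written Premium might be computed. The bands and rates are invented for illustration; actual vendor pricing is negotiated and confidential:

```python
def annual_platform_fee(dwp, tiers):
    """Tiered fee on Direct Written Premium (DWP): each band of premium
    is charged at that band's rate. Bands and rates here are invented."""
    fee, lower = 0.0, 0.0
    for upper, rate in tiers:
        if dwp <= lower:
            break
        fee += (min(dwp, upper) - lower) * rate
        lower = upper
    return fee

# Hypothetical bands: 1.0% on the first £100m of DWP, 0.6% thereafter.
tiers = [(100e6, 0.010), (float("inf"), 0.006)]
fee = annual_platform_fee(250e6, tiers)  # £100m at 1.0% plus £150m at 0.6%
```

The structural point: the fee grows with the book, whether or not the platform's marginal cost does, so a growing insurer's incentive to leave strengthens at exactly the moment its data gravity makes leaving hardest.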
In the UK market, where many carriers have built years of customisation on top of these platforms, the escape velocity required is considerable. The global policy administration market is projected to reach $31.6 billion by 2032. That's a lot of gravity.
4: Undocumented Business Logic and Tribal Knowledge
This is the economic driver that nobody puts in the business case, but everyone discovers mid-programme. Usually on a Thursday afternoon when something breaks.
Legacy insurance systems don't just hold data. They hold decisions. Rating algorithms, underwriting rules, exception handling, pricing logic, all encoded in code paths that were written years or decades ago, often by people who have long since moved on. Documentation, where it exists, is frequently outdated or incomplete. The real understanding of how these systems behave lives in the heads of a shrinking pool of specialists. Ask Janet. She knows. Except Janet retired in March.
PwC research indicates that insurers spend on average 70% of their annual IT budget maintaining legacy systems. A significant portion of that spend is, in effect, paying to keep the lights on in systems that nobody fully understands anymore. When modernisation programmes encounter business-owned applications or near-defunct systems that handle extreme outlier conditions, the project scope expands in ways that weren't anticipated, because they couldn't have been, without proper discovery.
The retirement risk is real and accelerating. Legacy system experts familiar with COBOL, RPG, and decades-old architectures are leaving the workforce, taking critical knowledge with them. Every year that passes without capturing that knowledge makes eventual migration harder and more expensive. The tribal knowledge problem isn't theoretical. It's a ticking clock.
5: Organisational Readiness (The Human Cost)
Finally, there's the driver that's hardest to quantify but easiest to feel.
Migrations aren't just technical projects. They require business stakeholders to make decisions about data mapping, product rationalisation, and acceptable tolerances between what the old system held and what the new system can represent. They require underwriters to validate that their rating logic has been faithfully reproduced. They require claims teams to confirm that linkages are intact. They require compliance to sign off that nothing has been lost. And they require all of this on top of the day job.
Actuaries and underwriters already spend up to 50% of their time on manual data work rather than the high-value analysis they were hired for. Adding migration validation to that workload doesn't just slow things down. It creates fatigue, errors, and a growing temptation to cut corners that will cost more to fix later. The human cost of migration rarely makes it into the business case, but it's often the thing that determines whether the programme actually lands.
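The validation work being pushed onto those teams is, at its core, reconciliation: comparing the migrated book against the source, field by field, within agreed tolerances. A toy sketch, with illustrative field names and an assumed £0.01 money tolerance:

```python
def reconcile(source, target, money_tolerance=0.01):
    """Compare migrated records against the source book and report
    anything outside agreed tolerances. Field names are illustrative."""
    issues = []
    if set(source) != set(target):
        issues.append(("missing_or_extra", set(source) ^ set(target)))
    for pid in set(source) & set(target):
        s, t = source[pid], target[pid]
        if abs(s["premium"] - t["premium"]) > money_tolerance:
            issues.append(("premium_mismatch", pid))
        if s["claims"] != t["claims"]:
            issues.append(("claims_linkage_broken", pid))
    return issues

source = {"P001": {"premium": 412.50, "claims": ["C9"]},
          "P002": {"premium": 88.00,  "claims": []}}
target = {"P001": {"premium": 412.50, "claims": ["C9"]},
          "P002": {"premium": 88.005, "claims": []}}  # rounding drift, in tolerance

issues = reconcile(source, target)
```

The script is trivial. What isn't trivial is agreeing the tolerances in the first place, and finding an underwriter with the time to investigate each exception the script raises.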
Where This Leaves Us
These five drivers (data complexity, regulatory overhead, vendor lock-in, undocumented logic, and organisational readiness) don't operate in isolation. They compound. A data quality issue triggers a compliance question, which exposes an undocumented business rule, which requires a specialist who retired two years ago, which stalls a decision that blocks the entire programme. It's not one big problem. It's a lot of connected problems wearing a trench coat.
That compounding effect is why migration costs are so consistently underestimated, and why the 83% failure rate isn't as surprising as it first appears.
In our next post, we'll explore what can actually be done about this. Specifically, how targeted discovery across data, process, code, and design documentation can address each of these economic drivers before a migration begins, reducing cost, reducing risk, and reducing the likelihood of joining the majority of projects that don't go to plan.