Building Your AI Dream Team in Insurance: The 70–80% In-House Rule
- Shen Pandi
- Jan 12
- 5 min read
AI has quietly crossed a line in insurance. It is no longer something teams experiment with on the side. It now shapes who gets covered, how risk is priced, and how quickly customers receive outcomes. As insurers move beyond pilots, a more uncomfortable question shows up in leadership discussions: who actually owns the intelligence behind these decisions?
The insurers making real progress are learning the same lesson. Buying tools is easy. Building capability is hard. And without that capability living inside the organisation, AI never becomes a lasting advantage.
From Experimentation to Real Capability
In Part 1 of this series, we talked about why AI has moved from “nice to have” to mission critical for insurers. We also introduced the Six Signature Moves that separate leaders from laggards and surfaced a truth that keeps repeating itself in real programmes: technology does not create advantage on its own. Capability does.
That brings us to the second signature move:
Build an AI organisation where 70–80% of critical capability is in-house.
For insurers, this is not about culture or control for its own sake. It is about managing risk, meeting regulatory expectations, and staying competitive in a market where decision quality is everything.
Why You Cannot Outsource Your AI Brain

Insurance is different from most industries using AI. Models here are not just supporting analysis. They are actively making or shaping decisions that affect pricing, eligibility, claims outcomes, and customer trust.
When too much of this intelligence sits outside the organisation, three predictable problems show up.
You Lose Underwriting and Claims Intelligence
Vendors understand models. Your teams understand insurance.
Things like loss development behaviour, policy wording nuances, jurisdictional claims patterns, and broker incentives do not live in code repositories. They live in underwriting rules, claims handling habits, and years of institutional learning.
When AI capability is largely external, that knowledge never fully feeds back into the system. Over time, the insurer becomes dependent on outputs without owning the understanding behind them. That is not a technology risk. It is a strategic one.
You Inherit Regulatory and Explainability Risk
Regulators are asking harder questions about AI. They want to know how decisions are made, how bias is controlled, and who is accountable when something goes wrong.
If models are built and run by external parties, those answers get fuzzy. Who owns the decision? Who can explain it in plain language? Who takes responsibility when a claim is challenged or a premium increase is questioned?
In insurance, blurred ownership quickly turns into governance exposure.
You Get Stuck in Pilot Mode
Most insurers have no shortage of pilots. Fraud tools here. Document automation there. A chatbot somewhere in between.
What’s missing is the ability to turn these into core systems. That only happens when teams inside the organisation know how to integrate, monitor, govern, and evolve AI over time. Without that muscle, pilots stay pilots and never deliver enterprise-level impact.
What the 70–80% In-House Rule Actually Means
This rule does not mean building everything yourself or cutting vendors out entirely.
It means something simpler and stricter: the intelligence that drives insurance decisions must live inside your organisation. Partners should help you move faster, not become the owners of your decision-making engine.
Once you look at it this way, the lines become much clearer.
What Needs to Stay In-House
Let’s get into it …
AI Product Owners Close to the Business
These are not generic product managers. They sit inside underwriting, claims, fraud and SIU, pricing, and customer operations.
Their job is to turn real insurance problems into AI use cases, define where automation stops and human judgment takes over, and stay accountable for outcomes like loss ratio, leakage, and turnaround time.
Without them, AI teams optimise models. With them, AI improves insurance performance.
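To make that concrete, here is a rough sketch of the kind of automation boundary an AI product owner might define. The thresholds, field names, and routing labels are illustrative assumptions, not a prescription for any real claims workflow.

```python
# A minimal sketch of the "where automation stops" boundary.
# All thresholds and field names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ClaimDecision:
    claim_id: str
    fraud_score: float   # model output in [0, 1]
    amount: float        # claimed amount

def route_claim(decision: ClaimDecision,
                auto_approve_below: float = 0.10,
                refer_to_siu_above: float = 0.85,
                straight_through_limit: float = 5_000) -> str:
    """Decide whether a claim is settled automatically or routed to a human."""
    if decision.fraud_score >= refer_to_siu_above:
        return "refer_to_SIU"            # high risk: human investigation
    if (decision.fraud_score <= auto_approve_below
            and decision.amount <= straight_through_limit):
        return "auto_settle"             # low risk, low value: automate
    return "adjuster_review"             # everything else: human judgment

print(route_claim(ClaimDecision("CLM-001", fraud_score=0.04, amount=1_200)))
# -> auto_settle
```

The point of the sketch is who sets those numbers. An AI product owner who owns loss ratio and leakage will keep adjusting them; a vendor shipping a generic tool will not.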
Data Engineering Built Around Insurance Reality
Insurance data is messy by nature. Legacy systems, scanned documents, adjuster notes, and third-party data all collide in daily workflows.
Internal data engineers understand which data fields actually matter, how information flows across systems, and how to trace decisions for audits and regulators. This is not plug-and-play work. It is core infrastructure that only makes sense when it sits close to the business.
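As a small illustration of what decision traceability can look like, here is a sketch of a lineage record that captures which fields fed a decision and which system each came from. The record format, field names, and system identifiers are assumptions for the example; in practice this would be generated by your data platform, not hand-written.

```python
# A minimal sketch of a decision lineage record. Field names and
# source-system identifiers are hypothetical.

import json
from datetime import datetime, timezone

def build_lineage_record(decision_id: str, inputs: dict, sources: dict) -> str:
    """Capture what data fed a decision and where each field came from."""
    record = {
        "decision_id": decision_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,      # the feature values the model actually saw
        "sources": sources,    # system-of-record for each field
    }
    return json.dumps(record, indent=2)

print(build_lineage_record(
    decision_id="UW-2024-0042",
    inputs={"sum_insured": 250_000, "prior_claims_3y": 1},
    sources={"sum_insured": "policy_admin.v7", "prior_claims_3y": "claims_dw.nightly"},
))
```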
Machine Learning Engineers Who Know Insurance
Claims severity does not behave the same way across products. Fraud patterns change with economic cycles. Catastrophe risk blends traditional actuarial thinking with modern machine learning.
Engineers who work inside the insurer learn these patterns over time. They know when accuracy must be traded for explainability and when automation introduces more risk than value. This is where real differentiation is built.
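To illustrate the explainability side of that trade-off, here is a minimal sketch using synthetic data: a plain logistic regression whose coefficients an underwriter can read directly. The feature names and data are invented for the example; the point is the readability of the model, not its accuracy.

```python
# A minimal sketch of an interpretable baseline. Data and feature
# names are synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["vehicle_age", "prior_claims", "annual_mileage_k"]
X = rng.normal(size=(500, 3))
# Synthetic "claim occurred" labels, driven mostly by prior claims.
y = (0.2 * X[:, 0] + 1.5 * X[:, 1] + 0.4 * X[:, 2]
     + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    # Each coefficient has a plain-language reading a regulator can follow.
    print(f"{name:>18}: {coef:+.2f}")
```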
MLOps and Model Ownership
Insurance models need to be monitored, retrained, and auditable long after they go live. That responsibility cannot be delegated away.
Internal MLOps teams make sure models stay stable in production, changes are traceable, and decisions can be reconstructed months or years later. Without this, AI remains fragile and risky to scale.
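As one example of what production monitoring involves, here is a sketch of a population stability index (PSI) check comparing the scores a model produced at validation time with the scores it produces live. The distributions shown are simulated, and the commonly used 0.25 retraining trigger is an illustrative convention, not a rule.

```python
# A minimal sketch of one drift check: PSI between validation-time
# scores and live scores. Data and thresholds are illustrative.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_counts, _ = np.histogram(expected, edges)
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
validation_scores = rng.beta(2, 5, 10_000)   # scores at model sign-off
live_scores = rng.beta(2.6, 5, 10_000)       # scores drifting in production

value = psi(validation_scores, live_scores)
print(f"PSI = {value:.3f}")                  # > 0.25 is a common retraining trigger
```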
Governance, Risk, and Compliance for AI
Fairness, transparency, and regulatory alignment are not side concerns in insurance. They are central to trust.
Accountability for AI decisions must sit with the insurer. No vendor can carry that responsibility on your behalf. Keeping this capability in-house is non-negotiable.
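For a flavour of what this looks like day to day, here is a small sketch of one fairness check a governance team might run: comparing automated approval rates across a protected group. The data, column names, and the four-fifths threshold are assumptions for illustration; real reviews go far deeper than a single ratio.

```python
# A minimal sketch of a disparate-impact check on automated approvals.
# Data, column names, and the 0.8 threshold are illustrative.

import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()     # disparate-impact ratio

print(rates)
print(f"Approval-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: approval rates differ materially across groups.")
```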
Where Partners (Like Us) Actually Help
External partners are most valuable when they shorten learning curves.
They help with document automation, specialised GenAI use cases, migrations, and tooling. They add risk when they own underwriting logic, run production models without oversight, or become the only people who understand how things work.
The goal should always be learning and transfer, not dependency.
What a Practical AI Team Looks Like
In practice, insurers that get this right usually combine a central AI platform and governance team with embedded AI squads inside claims, underwriting, and fraud. Strategic partners support specific needs, but core intelligence stays internal.
This structure gives you control without slowing progress.
How to Start Without Overloading the Organisation
You do not need a massive hiring wave on day one.
Start by putting clear AI ownership into claims and underwriting. Build a small internal foundation team for data and machine learning. Use partners, but make sure your teams are learning alongside them.
Over time, bring the most valuable capabilities inside. This is how insurers build strength without creating chaos.
Why This Pays Off

Insurers that follow the 70–80% in-house rule move faster, retain institutional knowledge, and reduce regulatory risk. Over time, every new model and every new use case compounds the advantage they already have.
Those that ignore it stay dependent on vendors, fragmented across pilots, and permanently stuck trying to “catch up.”
What Comes Next
In Part 3, we will look at the next signature move: embedding AI into insurance decision-making rather than treating it as automation. That is where underwriting, claims, and fraud workflows truly start to change, and where AI begins to amplify human judgment instead of replacing it.