Product & Technology

Why GenAI in insurance is risky, and why it’s worth getting right

Today, we’re introducing the AI Compliance Risk Index™, a new benchmark for measuring the safety of consumer-facing GenAI tools in insurance.

GenAI has the potential to transform how consumers buy and manage insurance

GenAI has already changed the way consumers shop, learn, and make decisions. Every day, more people are turning to ChatGPT and other generative AI tools to get help with purchasing decisions for everything from clothes to holidays.

That same behavior is now extending to insurance. Consumers are drawn to conversational AI because it feels natural and human, and because it can distill complex information into simple, understandable terms. It’s exactly what insurance needs. Today, buying a policy often means sifting through hundreds of pages of disclosure documents just to make an informed decision. GenAI promises to make that process as easy as asking a question.

Deploying GenAI comes with significant risks

Insurance isn’t like shopping for shoes or electronics. It’s a regulated financial service designed to protect people when things go wrong. That means accuracy, clarity, and compliance are critical.

That’s why deploying GenAI directly to consumers is risky. One misplaced word can cause serious customer harm and expose the licence holder to severe penalties. Imagine a GenAI tool telling someone their daughter is covered under their car insurance when she isn’t, or that their home policy includes flood cover when it doesn’t. If errors like that occur at scale, the consequences for customers and insurers alike can be significant.

But with the right foundations, controls, and training, AI can become one of the most powerful tools for improving insurance outcomes.

At Open, we believe that when done right, GenAI can help people make smarter, safer choices about their cover. Insurance is complex. Customers want and need help navigating it.

The four core challenges for safe GenAI in insurance

To make GenAI safe and effective for insurance, we’ve identified four core challenges that every model must address.

1 - Compliance and Responsibility

Every response must meet regulatory and ethical expectations. The model needs to stay within the boundaries of general information and avoid offering personal or product-specific advice unless it is properly licensed to do so. The model also needs to understand appropriate escalation patterns for handling vulnerable customers and customers who wish to make a complaint. In all cases, it must ensure that its language is fair, balanced, and not misleading.
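
In engineering terms, this implies a gate that runs before any answer is generated. The sketch below is illustrative only: the `classify` intent classifier is a hypothetical stand-in for whatever model or rules do the detection, not a description of our production stack.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    GENERAL_INFO = auto()     # safe to answer with general information only
    LICENSED_ADVICE = auto()  # personal/product advice: decline or hand off
    VULNERABLE = auto()       # vulnerability signals: escalate to a human
    COMPLAINT = auto()        # dissatisfaction: route to complaints handling

@dataclass
class GateDecision:
    route: Route
    reason: str

def compliance_gate(message: str, classify) -> GateDecision:
    """Route a consumer message before the LLM is allowed to answer.

    `classify` is a hypothetical intent classifier (a small fine-tuned
    model, rules, or both) returning one of: "complaint", "vulnerability",
    "personal_advice", "general".
    """
    label = classify(message)
    if label == "complaint":
        return GateDecision(Route.COMPLAINT, "customer expresses dissatisfaction")
    if label == "vulnerability":
        return GateDecision(Route.VULNERABLE, "possible vulnerable-customer signal")
    if label == "personal_advice":
        return GateDecision(
            Route.LICENSED_ADVICE,
            "question needs personal advice, outside general-information scope",
        )
    return GateDecision(Route.GENERAL_INFO, "answerable as general information")
```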


2 - Explainability

The reasoning behind a model’s response should never be a mystery. Insurers and regulators should be able to see how the AI arrived at an answer, what data sources it relied on, and why it framed information the way it did. Transparent systems build trust and make oversight possible.
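
As an illustration of what such a trace could look like (a simplified sketch, not a description of Insurance Companion’s internals), every answer can carry its own audit record:

```python
from dataclasses import dataclass, field

@dataclass
class SourceCitation:
    document_id: str  # e.g. the exact PDS version the passage came from
    section: str      # clause or section reference within that document
    excerpt: str      # the retrieved text the answer relied on

@dataclass
class ExplainableAnswer:
    question: str
    answer: str
    model_version: str
    retrieval_query: str  # what was actually searched for
    citations: list[SourceCitation] = field(default_factory=list)

    def audit_record(self) -> dict:
        """Flatten the trace into a log entry a reviewer or regulator can inspect."""
        return {
            "question": self.question,
            "answer": self.answer,
            "model_version": self.model_version,
            "retrieval_query": self.retrieval_query,
            "sources": [f"{c.document_id}, {c.section}" for c in self.citations],
        }
```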


3 - Reliability and Retrieval Accuracy

Reliable information is more than just correct wording. It means the model consistently provides answers that are factually accurate and drawn from the right source. Even if an AI gets a detail right, pulling it from the wrong policy, region, or product class introduces risk. In insurance, reliability means the information must not only be true, but contextually precise every time.
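
Here’s a simplified sketch of what “drawn from the right source” can mean in practice: retrieval filtered on policy metadata before any similarity ranking. The `embed` function is a hypothetical embedding model, not a specific API, and the field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    policy_id: str
    region: str         # e.g. "AU", "NZ", "UK"
    product_class: str  # e.g. "motor", "home"

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, corpus: list[Passage], *, policy_id: str, region: str,
             product_class: str, embed, top_k: int = 5) -> list[Passage]:
    """Rank passages by similarity, but only within the customer's exact context.

    The point of the sketch is ordering: metadata filters run *before*
    semantic ranking, so a correct-sounding clause from the wrong policy,
    region, or product class can never be retrieved at all.
    """
    candidates = [
        p for p in corpus
        if p.policy_id == policy_id
        and p.region == region
        and p.product_class == product_class
    ]
    qv = embed(query)
    ranked = sorted(candidates, key=lambda p: dot(qv, embed(p.text)), reverse=True)
    return ranked[:top_k]
```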


4 - Hallucination Prevention

Generative AI can sometimes “fill in the blanks” when it doesn’t know the answer. In insurance, that is unacceptable. Effective guardrails must stop the model from inventing details or coverage terms. A single fabricated statement can mislead a customer and cause real financial harm.
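
One illustrative shape for such a guardrail is a groundedness check that refuses rather than guesses. In this sketch, `split_claims` and `is_supported` are hypothetical helpers standing in for a claim extractor and an entailment check; they are assumptions, not our implementation.

```python
def grounded_or_refuse(draft: str, passages: list[str], *, split_claims,
                       is_supported) -> str:
    """Only release an answer whose every claim is backed by retrieved policy text.

    `split_claims` breaks a draft answer into atomic factual statements;
    `is_supported` checks one statement against one evidence passage (for
    example, with an entailment model). If any claim lacks support, the safe
    behaviour is to say so rather than to guess.
    """
    for claim in split_claims(draft):
        if not any(is_supported(claim, p) for p in passages):
            return ("I can't confirm that from your policy documents. "
                    "Please check your Product Disclosure Statement or "
                    "speak with our support team.")
    return draft
```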

Insurance Companion: bringing safe GenAI to consumers

To solve these challenges, we’ve built region-specific LLMs trained to trade and manage insurance products safely. These models underpin Insurance Companion, our conversational AI designed to help customers explore cover options confidently, while staying fully compliant with financial regulations.

Insurance Companion distills years of Open’s insurance trading expertise, our proprietary compliance framework, and our commitment to safety.

Learn more about Insurance Companion and how you can bring GenAI to your customers here.

Measuring safety: introducing the AI Compliance Risk Index™


Starting this month, the AI Compliance Risk Index™ will track the compliance performance of AI models when deployed in real-world insurance scenarios. The index compares Insurance Companion against leading foundation and off-the-shelf AI systems, setting a new benchmark for compliance transparency in customer-facing financial services.

The Index measures how each model performs against the four core challenges of deploying GenAI safely. Models are evaluated on a basket of realistic consumer questions drawn from everyday insurance conversations. Each response is assessed across a set of detailed criteria that test for:

  1. Evidence and Retrieval – Whether the model identifies and uses the correct policy evidence before generating a response.
  2. Policy Logic and Accuracy – Whether it applies the correct policy rules, terms, and numeric values without mixing or inventing information.
  3. Language and Communication Standards – Whether the answer stays within “information only” boundaries when required, remains clear, neutral, and empathetic, and avoids any misleading phrasing.
  4. Vulnerable Customer and Complaint Detection – Whether the model recognises language that signals vulnerability or dissatisfaction, ensuring reliable detection for potential escalation.

Each dimension is scored, and the aggregated results form the model’s overall Compliance Risk Index™. A lower index reflects a lower incidence of regulatory issues per 1,000 conversations, representing a safer, more trustworthy AI.
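
For illustration, the arithmetic behind that headline number might look like the sketch below. The exact weighting the Index applies across the four dimensions isn’t spelled out above, so the unweighted sum here is an assumption, and the numbers are invented for the example.

```python
def compliance_risk_index(issue_counts: list[int], conversations: int) -> float:
    """Aggregate per-dimension issue counts into issues per 1,000 conversations.

    `issue_counts` holds the number of flagged responses for each of the four
    dimensions (evidence/retrieval, policy logic, language standards,
    vulnerability/complaint detection) over the same evaluation run.
    """
    total_issues = sum(issue_counts)
    return 1000 * total_issues / conversations

# Illustrative numbers only: 3 + 1 + 4 + 2 = 10 flagged responses across
# 2,000 test conversations gives an index of 5.0 issues per 1,000.
print(compliance_risk_index([3, 1, 4, 2], conversations=2000))  # 5.0
```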

We’re doing this because we believe GenAI in insurance shouldn’t just be powerful. It should be measurable, transparent, and built for customer protection from the ground up.


