
We're building Open General Insurance Intelligence – and not using closed-source models

AI can bring clarity to complex insurance products

When comparing insurance products, consumers are forced to navigate hundreds of pages of complicated policy documents. This makes buying the right cover a difficult and time-consuming task. Can generative AI help consumers make sense of all this complexity? At Open, we believe it can, but only if it is applied in a safe and thoughtful way.

But using GenAI carries risk: one wrong word is all it takes to cause harm

In insurance, one misplaced phrase can have serious consequences. A small wording error about who is covered, or when an excess applies, can lead to a denied claim for the customer and a regulatory breach for the insurer.

Imagine an AI assistant telling a homeowner that a particular policy includes flood cover when it does not. The customer, reassured by that response, buys the policy. Months later, their home is damaged in a storm and their claim is rejected. What began as a failed document lookup by the AI becomes financial loss, emotional distress, and a breach of the insurer’s duty to provide fair and accurate information.

Deploying general-purpose AI in this context is risky. We tested leading foundation models from OpenAI (GPT), Anthropic (Claude) and Google (Gemini). We ran thousands of simulated customer conversations and analysed every output against safety and compliance standards. The results were clear: these models cannot trade safely in insurance.

In the simulations, we saw models hallucinate cover options, personally recommend products, invent discount codes and explain why products were the “best deal” for customers. When dealing with regulated financial products, these sorts of failures can cause serious customer harm. As Lloyds Banking Group discovered when fined £90 million for misleading communications, one wrong word can have major consequences.
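
To make that testing approach concrete, here is a minimal sketch of the kind of simulate-and-check loop we are describing. The model call and the compliance rules below are hypothetical stand-ins for illustration, not our production harness.

```python
# Sketch of a conversation-simulation harness (illustrative only).
# `ask_model` and the rules below are hypothetical stand-ins.
import re

COMPLIANCE_RULES = {
    # rule name -> pattern that flags potentially non-compliant wording
    "personal_advice": re.compile(r"\b(i|we) recommend\b", re.IGNORECASE),
    "superlative_claims": re.compile(r"\bbest (deal|policy|price)\b", re.IGNORECASE),
    "invented_discounts": re.compile(r"\bdiscount code\b", re.IGNORECASE),
}

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the foundation model under test."""
    return "We recommend this policy - it's the best deal for you!"

def run_simulation(prompts: list[str]) -> list[dict]:
    """Run simulated customer questions and flag any rule violations."""
    results = []
    for prompt in prompts:
        answer = ask_model(prompt)
        violations = [name for name, rule in COMPLIANCE_RULES.items()
                      if rule.search(answer)]
        results.append({"prompt": prompt, "answer": answer, "violations": violations})
    return results

if __name__ == "__main__":
    for row in run_simulation(["Does this home policy include flood cover?"]):
        print(row["violations"] or "clean", "-", row["answer"])
```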

We need domain experts, not generalist models

GPT-5 with a prompt isn’t going to give you a robust system, even if you say you’ll give it a £2k tip if it stays compliant. Solving this problem is not about making bigger models or writing cleverer prompts. It is about building systems that truly understand the rules of the domain they operate in. 

Across industries, leaders are discovering that general-purpose AI isn't enough for regulated domains. Early on, Bloomberg built BloombergGPT, a model trained specifically for the financial markets. In healthcare, Google's MedGemma models are trained on medical data and clinician reasoning to support accuracy and safety in diagnosis. In law, firms like Harvey AI have built models that specialise in legal drafting and case interpretation.

Each of these efforts recognises the same truth: in high-stakes, regulated environments, precision matters more than breadth. Language fluency alone does not make a model trustworthy. Domain expertise, grounded in real data, conduct standards and the logic of professional decision-making, is what separates a safe model from a dangerous one.

Insurance is no different. Its products are dense documents in which every clause, exclusion and definition has consequences. Building safe AI for insurance requires a model that understands not just the words, but how those words interact with cover, claims and conduct.

We need a domain-specific model for general insurance, one that deeply understands how to safely discuss and trade regulated insurance products.

Introducing Open General Insurance Intelligence (OGII)

That is why we built Open General Insurance Intelligence (OGII), the world’s first model trained specifically for general insurance trading. It draws on Open’s ten years of real-world experience safely trading insurance across multiple markets.

OGII inherently understands the boundaries between information, guidance and personal advice. It knows how to navigate policy documents efficiently and extract the information relevant to a customer’s query. And it communicates in ways that are balanced and not misleading.

OGII is not a wrapper around a foundation model. It is a purpose-built, independently trained system designed to operate safely in production, customer-facing environments.

Explainable, safer and faster

Because the model has been developed from the ground up, we can build in new levels of interpretability. Every response OGII produces can be traced to its source material, giving a clear line of sight from model output to policy document. This traceability means we can show why a particular clause was referenced and how the model reached its conclusion, something that black-box, closed-source systems cannot offer.
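
To illustrate the idea, a traceable answer can be modelled as an output paired with pointers back to the exact policy wording it relies on. The field names here are a hypothetical sketch, not OGII's actual schema.

```python
# Illustrative shape of a traceable answer (hypothetical schema).
from dataclasses import dataclass, field

@dataclass
class SourceSpan:
    """A pointer from an answer back to the policy wording it relies on."""
    document: str  # e.g. a PDS filename or identifier
    clause: str    # clause or section reference
    text: str      # the exact wording that was cited

@dataclass
class TracedAnswer:
    answer: str
    sources: list[SourceSpan] = field(default_factory=list)

    def audit_trail(self) -> str:
        """Render the line of sight from model output to policy document."""
        lines = [f"Answer: {self.answer}"]
        for s in self.sources:
            lines.append(f'  cites {s.document}, clause {s.clause}: "{s.text}"')
        return "\n".join(lines)

if __name__ == "__main__":
    traced = TracedAnswer(
        answer="Flood cover is optional on this policy.",
        sources=[SourceSpan("home-pds.pdf", "4.2",
                            "Flood is an optional cover you may add.")],
    )
    print(traced.audit_trail())
```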

We continuously benchmark OGII using the Compliance Risk Index, a metric that measures model safety across four dimensions: evidence and retrieval accuracy, policy logic, communication standards, and detection of vulnerability or complaints. The index lets us compare the model against other providers and improve it continuously.
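
For intuition only, a composite score across those four dimensions might be combined along the following lines. The weights, scale and example scores are assumptions for illustration, not the published definition of the index.

```python
# Illustrative composite of the four dimensions. The weights and the
# 0-1 risk scale are assumptions, not the published index definition.
DIMENSIONS = ("evidence_accuracy", "policy_logic",
              "communication_standards", "vulnerability_detection")

def composite_risk(scores: dict[str, float],
                   weights: dict[str, float] | None = None) -> float:
    """Combine per-dimension risk scores (0 = no risk, 1 = maximum risk)."""
    weights = weights or {d: 1 / len(DIMENSIONS) for d in DIMENSIONS}
    return sum(scores[d] * weights[d] for d in DIMENSIONS)

print(composite_risk({
    "evidence_accuracy": 0.04,        # retrieval/grounding errors
    "policy_logic": 0.06,             # cover/excess reasoning errors
    "communication_standards": 0.02,  # misleading or unbalanced wording
    "vulnerability_detection": 0.08,  # missed vulnerability/complaint cues
}))
```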

Because OGII is domain-specific, we can make it smaller while still outperforming other models. That makes it not only cheaper to run but also much faster, and this speed is critical for smooth, fluid customer experiences. In previous research, we showed that it is 8x faster than GPT-5 while offering better performance.

Bringing OGII to life with Insurance Companion

OGII powers Insurance Companion, our conversational AI agent designed to help insurers and intermediaries improve the sales experience for their customers. It gives customers clear, accurate answers to complex product and policy questions in real time.

Insurance Companion is fast, safe and easy to embed. It can sit inside any digital ecosystem to help customers understand cover, compare policies and make confident decisions, all within regulatory boundaries.
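
As a flavour of what embedding can look like, here is a hypothetical server-side integration. The endpoint, parameters and response fields are placeholders, not the actual Insurance Companion API.

```python
# Hypothetical integration sketch - the URL and fields are placeholders,
# not the real Insurance Companion API.
import json
import urllib.request

def ask_companion(question: str, policy_id: str) -> dict:
    """Send a customer question to a (hypothetical) Companion endpoint."""
    payload = json.dumps({"question": question, "policy_id": policy_id}).encode()
    req = urllib.request.Request(
        "https://api.example.com/companion/ask",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (assuming the placeholder endpoint existed):
# reply = ask_companion("Is flood cover included?", "HOME-123")
# print(reply["answer"], reply["sources"])
```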

If you are exploring how to bring safe, auditable GenAI into your customer journeys, we would love to show you what OGII and Insurance Companion can do.
