An Expanded Role for AI in Life & Health Predictive Underwriting

As the use of AI matures beyond sporadic pilot projects, it demands a rethinking of established processes to realise new savings and sources of value. Customers are increasingly offered, and are taking advantage of, digital engagement channels that deliver convenient, fast and engaging experiences. AI enables insurers to weave underwriting more seamlessly into a markedly improved customer experience when buying Life & Health (L&H) insurance. The use of AI is expected to soon become a key capability for insurers to remain competitive.

In this fourth article of our AI series, we share some of the ways Swiss Re is collaborating with clients, using "traditional" AI to transform underwriting. Despite the significant excitement around generative AI, there is still plenty of value to be realised with traditional AI. Newer generative AI models, including large language models, will be discussed in the next article in this series.

Traditional AI in L&H Insurance: Evolution and impact since the 80s

Traditional AI covers a class of models that can be developed and deployed in a variety of ways. In the L&H context, the term spans what are often referred to as machine learning models and predictive underwriting models.

The use of machine learning models in L&H insurance represents some of the earliest recorded work involving traditional AI. It dates back to the early 1980s in the US market, when greater access to socioeconomic data, structured medical underwriting data – such as BMI, blood pressure or lipids – and claims data supported more granular risk pricing. Machine learning enabled these models to process higher volumes of richer data, thereby accounting for more complex interactions between risk variables. Since then, we have observed a multitude of ways in which traditional AI projects can unfold; no two projects are identical.

BEGIN with the END in mind.
Stephen Covey, The 7 Habits of Highly Effective People

In L&H re/insurance, data science projects often start off as an exploration of the potential usefulness of a data source; however, this exercise can meander aimlessly – and expensively – in the absence of clear purpose and goals. In our experience, the most effective projects have been the ones able to reach a clear consensus early on, defining exactly how value is created, what the measures of success will be and what the new underwriting process will "look like". This way the simplest and most effective solution can be identified, which, we should not forget, may not always involve AI.

Maximising business performance through deep cross-functional expertise and collaboration

In process automation, there is usually a trade-off between a fast, frictionless underwriting onboarding process, acquisition costs (such as medical testing and underwriting time) and future claims experience. Underwriters are familiar with the limitations of traditional medical underwriting, such as false positive or false negative tests. It is valuable to keep this top of mind as predictive underwriting models are somewhat similar in this respect. The impact of such models on future claims will depend on the frequency and severity of prediction errors.

Figure 1 compares Straight Through Processing (STP) rates with model error rates. Underwriters must interpret and translate risk classification errors, which ultimately form the basis of actuarial pricing considerations and adjustments. The optimum balance between STP and error rates depends on the insurer's risk appetite, but it is best achieved when supported by deep technical expertise and cross-functional collaboration.
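
To make the trade-off concrete, the sketch below (illustrative only, with synthetic data and arbitrary threshold values) shows how raising the score threshold at which a case is auto-accepted lowers the error rate among straight-through cases while also lowering the STP rate, which is the tension pictured in Figure 1.

```python
# Illustrative only: synthetic data and arbitrary thresholds, showing how a
# model-score cut-off trades STP rate against the error rate among STP cases.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical "true" outcome: 1 = case genuinely acceptable without further evidence.
acceptable = rng.random(n) < 0.8
# Hypothetical model score: higher means the model believes the case is acceptable.
score = np.clip(rng.normal(0.35 + 0.3 * acceptable, 0.15), 0, 1)

for threshold in (0.5, 0.6, 0.7, 0.8):
    auto_accept = score >= threshold                 # cases passed straight through
    stp_rate = auto_accept.mean()                    # share of cases with no manual touch
    errors = auto_accept & ~acceptable               # auto-accepted cases that should have been referred
    error_rate = errors.sum() / max(auto_accept.sum(), 1)
    print(f"threshold={threshold:.1f}  STP rate={stp_rate:5.1%}  error rate among STP={error_rate:5.1%}")
```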

Underwriter expertise is also recommended in identifying and defining "guardrails" to the model such as for high sums assured, older (or very young) ages, particularly complex medical conditions and so on.
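
A minimal sketch of how such guardrails might sit in front of the model decision is shown below; the field names and limits are hypothetical and would be set by the insurer's own underwriting philosophy.

```python
# Illustrative only: hypothetical guardrails that route a case to a human
# underwriter regardless of the model score. Field names and limits are examples.
def apply_guardrails(case: dict) -> str:
    if case["sum_assured"] > 1_000_000:        # high sums assured
        return "refer_to_underwriter"
    if case["age"] < 18 or case["age"] > 60:   # very young or older ages
        return "refer_to_underwriter"
    if case["disclosed_conditions"]:           # complex medical disclosures
        return "refer_to_underwriter"
    return "eligible_for_model_decision"

print(apply_guardrails({"sum_assured": 250_000, "age": 34, "disclosed_conditions": []}))
print(apply_guardrails({"sum_assured": 2_500_000, "age": 34, "disclosed_conditions": []}))
```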

Figure 1: An analysis of STP vs Error rate

Proof points of AI being deployed for commercial benefits

Below we briefly describe two examples of traditional AI use in underwriting to illustrate some of the practical possibilities and learnings. Importantly, the techniques have been replicable in various settings, including with different insurers, across multiple countries and with variations from the original objective. However, it's equally important to note that the details and exact results can vary.

Non-smoker models

Non-disclosed smoking results in significantly increased claims costs, so underwriters randomly test one in three applicants who self-declare as non-smokers (e.g. with cotinine tests) to validate their declarations. Using an AI model that combines application data with authorised third-party data sources (in a US pilot), insurers can predict, with over 95% accuracy, whether a declared non-smoker is genuine. Cotinine tests can then be targeted at 1 in 15 declared non-smokers with more than double the rate of smoking detection compared to random testing – reducing medical testing costs and the need for awkward urine testing procedures.
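
The sketch below illustrates the targeting mechanic only, not the pilot model itself: a generic classifier trained on synthetic data, with hypothetical features standing in for application and authorised third-party data, scored so that cotinine tests can be concentrated on the riskiest 1 in 15 declared non-smokers.

```python
# Illustrative only: synthetic data and hypothetical features. Demonstrates the
# mechanic of targeting cotinine tests at the highest-risk declared non-smokers.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000

X = pd.DataFrame({
    "age": rng.integers(20, 65, n),
    "bmi": rng.normal(26, 4, n),
    "purchase_signal": rng.random(n),          # hypothetical third-party feature
    "occupation_risk": rng.integers(1, 5, n),
})
# Hypothetical ground truth: declared non-smoker who actually smokes.
undisclosed = (rng.random(n) < 0.02 + 0.2 * X["purchase_signal"] ** 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, undisclosed, test_size=0.5, random_state=0)
risk = GradientBoostingClassifier().fit(X_train, y_train).predict_proba(X_test)[:, 1]

# Test the riskiest ~1 in 15 declared non-smokers instead of 1 in 3 at random.
budget = len(X_test) // 15
targeted = np.argsort(risk)[-budget:]
print(f"smokers found per test (targeted): {y_test.to_numpy()[targeted].mean():.1%}")
print(f"smokers found per test (random):   {y_test.mean():.1%}")
```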

In the early stages, many insurers only considered deployment of these models into a live production environment much later in the journey; in some cases, this became an insurmountable hindrance. This highlights the value of considering end-to-end development early on.

This foundational capability can be applied in other settings, for example to predict the value of validating BMI or of ordering medical and laboratory requirements, and to build broader 'misrepresentation' risk models.

Underwriting prediction models for simplified issue or accelerated underwriting eligibility

Banks or other multi-line insurers may leverage their rich customer data to design and offer suitable life products. They can use transactional banking data, health claims records or other data sources as alternatives to traditional underwriting risk assessments. Third-party data vendors are commonly used as supplementary data sources in several markets, but it is important to consider any regulatory limitations.

Typically, reducing traditional underwriting assessments results in increased claims and anti-selection costs. However, by utilising alternative data sources (Figure 2) along with historical insurance records, underwriting decisions and/or death indicators, Swiss Re has helped clients achieve over 60% simplified issue offers to applicants with no or limited price increases.
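
A heavily simplified sketch of the idea follows: hypothetical pre-aggregated features from alternative data sources, historical underwriting decisions as the training label, and an eligibility cut-off set so that roughly 60% of applicants qualify for simplified issue. The feature names, label construction and figures are illustrative assumptions, not the models used in the engagements described above.

```python
# Illustrative only: hypothetical alternative-data features and labels.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 30_000

X = pd.DataFrame({
    "avg_monthly_income": rng.lognormal(8, 0.4, n),     # from transactional banking data
    "health_claims_count_3y": rng.poisson(0.8, n),       # from health claims records
    "age": rng.integers(25, 60, n),
})
# Hypothetical label: 1 = historically accepted at standard rates under full underwriting.
standard_accept = (rng.random(n) < 0.92 - 0.06 * X["health_claims_count_3y"]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, standard_accept, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
score = model.predict_proba(X_test)[:, 1]

# Set the cut-off so roughly 60% of applicants qualify for simplified issue,
# then check how many of those would in fact have been standard acceptances.
cutoff = np.quantile(score, 0.40)
simplified = score >= cutoff
print(f"simplified issue offer rate: {simplified.mean():.0%}")
print(f"standard-accept share within that group: {y_test.to_numpy()[simplified].mean():.1%}")
```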

This approach significantly saves on operational costs, speeds up client service and ultimately delivers a favourable customer experience, often measured through metrics such as net promoter scores.

This foundational capability can be used in other settings. For instance, risk segmentation can be used to target post-issue sampling more accurately or to review underwriting outcomes more extensively for internal risk reviews and audits.

Figure 2: Alternate sources of underwriting data

Lessons learned from two decades of predictive underwriting models

Every AI project engagement is also an opportunity to learn and to accumulate a unique set of experiences and expertise in insurance risk. Here are some key observations from our experiments with AI in predictive underwriting models:
 


A clear understanding of the specific customer or business need is essential

A clear understanding of the specific customer or business need is essential to avoid working on the wrong problem, testing the wrong hypotheses and failing to achieve the desired business improvement. In the 2000s, predictive underwriting models were developed with bancassurers to reduce or eliminate underwriting for a portion of their customers. This was expected to boost sales on the assumption that underwriting was a major roadblock to life insurance sales. However, there was often little change in direct sales. The redeeming quality was that advisors experienced significant efficiency gains in customer onboarding times. Not long after, models were developed to better predict the profiles of customers most likely to buy insurance ("propensity to buy" models), with success in the form of higher lead conversion rates.

Upskilling underwriters and data scientists is essential

Upskilling underwriters and data scientists is essential to improve collaboration and, ultimately, the quality of the output. This starts with building familiarity with each other's technical concepts and terminology, and with how each discipline evaluates different problems and solutions. This can take time, but teams reap the benefits of diverse knowledge bases and work more effectively, resulting in better overall project performance.

Robust data management is critical

"Garbage in, garbage out" – this common adage summarises the notion that robust data management is critical as it will directly impact the quality and/or performance of the output. This effort is often underappreciated. Inconsistently captured data, the existence of non-sensical outliers in the data or large chunks of missing data and other data quality issues can significantly affect the performance of an AI model.

Insurers must be able to articulate what drives model results

Insurers must be able to articulate what drives model results to a variety of stakeholders, including customers, sales channels, regulators and others. Explainability is crucial not only for overall model performance but also at the individual case level. An underwriter cannot justify a decision to a broker without understanding the reasoning behind the AI model's prediction.
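
One widely used way to provide case-level reasoning for tree-based models is SHAP. The sketch below is illustrative, with synthetic data and hypothetical features, and SHAP is only one of several explainability techniques an insurer might adopt.

```python
# Illustrative only: case-level explanation of a tree-based model's prediction
# using SHAP values. Features and data are synthetic.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 5_000
X = pd.DataFrame({
    "age": rng.integers(20, 65, n),
    "bmi": rng.normal(26, 4, n),
    "sum_assured": rng.integers(50, 2_000, n) * 1_000,
})
# Hypothetical label: 1 = case referred for manual underwriting.
refer = (rng.random(n) < 0.05 + 0.01 * np.maximum(X["bmi"] - 30, 0)).astype(int)

model = GradientBoostingClassifier().fit(X, refer)

# SHAP attributes each case's prediction to individual feature contributions,
# giving the underwriter a per-case reason rather than just an overall score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature:12s} contributes {contribution:+.3f} to the model score")
```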

Solutions must be implementable

Solutions must be implementable – in earlier years models were built but seldom implemented due to legacy systems or a lack of consideration for how the final results would be practically integrated into existing workflows and processes. By drafting the probable future state of a process early in the AI development phase, businesses can better articulate their needs and maintain focus through to deployment.

Strong project management & change management for successful implementation

Strong project management and change management skills are invaluable to successful implementation. Depending on the magnitude of the change, the impact to underwriters and sales channels can be substantial and may require significant support throughout the implementation journey. Some data analysis projects have quietly faded away due to lack of drive and proper project management. Conversely, some projects fail despite the development of potentially valuable AI models because of inadequate change management efforts, for instance, underwriters not being trained on how to interpret and use said model results.

Responsible AI in underwriting models

Underwriters have long been required to explain relative risk ratings using statistical or actuarial evidence. Explainability and transparency are foundational to trust in underwriting (Figure 3). As new sources of underwriting data develop, with AI as the natural analysis tool, underwriters will need to adapt and extend governance frameworks to include Responsible AI principles. The lack of a robust framework can result in irresponsible or ineffective use of AI. This includes due diligence on, and revisiting of, models already in production, where reassessment against these principles, updated documentation and possibly model refinements may be appropriate.
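
As a minimal sketch of what revisiting a production model can involve (with illustrative metrics, baselines and tolerances), a periodic check might compare current model behaviour against the figures documented when the model was approved:

```python
# Illustrative only: a periodic check on a model already in production,
# comparing current behaviour against the figures recorded at approval.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
approved_baseline = {"stp_rate": 0.62, "referral_rate": 0.38}

# Hypothetical decision log for the most recent quarter.
recent = pd.DataFrame({"decision": rng.choice(["auto_accept", "refer"], 50_000, p=[0.55, 0.45])})
current = {
    "stp_rate": (recent["decision"] == "auto_accept").mean(),
    "referral_rate": (recent["decision"] == "refer").mean(),
}

# Flag drift beyond a tolerance agreed in the governance framework, prompting
# reassessment, updated documentation or model refinement.
for metric, baseline in approved_baseline.items():
    drift = current[metric] - baseline
    status = "REVIEW" if abs(drift) > 0.05 else "ok"
    print(f"{metric}: baseline {baseline:.0%}, current {current[metric]:.0%}, drift {drift:+.0%} -> {status}")
```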

Figure 3: Responsible AI framework

The next evolution in underwriting: using multiple AI models at various stages in the process

A single AI model based on limited data will not be a silver bullet for all customer and underwriting pain points, nor will it realise the full potential of AI. Responsible use of broader customer risk and behavioural data, combined with AI, will optimise, accelerate and augment underwriting. Figure 4 illustrates how this might look.

Figure 4: The future of underwriting will likely require multiple AI models

We will write more on this later in this series.

Conclusion

Given the complexity of medical underwriting, the challenges of anti-selective behaviour, and the need to remain price-competitive while delivering a smooth customer experience, deep technical expertise and strong platform capabilities are key to effectively unlocking the potential of AI to evolve our industry. This can present significant investment costs for insurers. The time needed to develop new technologies and AI capabilities must be carefully weighed against improving go-to-market speed. We have seen many insurers choose to partner in developing AI services while modernising their legacy platforms.

AI models have been proven to: 

  • Boost sales through better customer experiences
  • Streamline application forms
  • Optimise medical requirements, reducing medical testing expenditure
  • Improve the accuracy of risk selection
  • Reduce the impact of anti-selection
  • Improve automation outcomes through higher STP and reduced manual referrals
  • Increase underwriter job satisfaction 

Swiss Re has pioneered underwriting innovation over many years, introducing automation technology solutions such as Magnum, Underwriting Ease and Digital Health Underwriting. We continue this journey by adding AI to the underwriting toolset, combining our strong technical expertise with cutting-edge data analytics platforms and capabilities, all the while focussing on generating real value for our client partners. 

In the next article in this series we will discuss newer generative AI models, including large language models. We will also write more about the future of underwriting and the growing need to integrate multiple AI models.
