Responsible AI: what it is and why it matters for Life & Health insurers

Amidst the recent spread of Generative AI-powered applications, organisations are ramping up their efforts to adopt AI technology and reap its expected benefits. In the Life and Health (L&H) insurance sector, the adoption of Traditional AI, and increasingly Generative AI, is inevitably accompanied by essential considerations regarding, for instance, an AI model's ability to avoid unfair discrimination and ensure accuracy. It is also crucial to consider how AI will integrate into employee workflows and how to design the human-AI relationship thoughtfully. Responsible AI (RAI) offers valuable guidance to build trust and help insurers fully realise the potential benefits of AI.

What is Responsible AI (RAI)?

RAI has gained significant traction in recent years. Although no universal definition exists at present, RAI is commonly understood as a set of principles and processes to guide the development and use of AI systems. Organisations adapt RAI principles according to their business needs and in compliance with relevant local regulations, but common themes often emerge. These are mainly centred around transparency, fairness, accountability and privacy. RAI is also about ensuring AI applications are fit for purpose and aligned with predefined business objectives.

What are the key principles of RAI?

RAI guidelines revolve around five key principles:

  • Fairness: Ensure that AI models do not unfairly discriminate. For example, consumers should be able to access insurance regardless of their ethnicity.
  • Transparency and explainability: Ensure clarity on the use and limitations of AI systems and be able to provide understandable explanations for the decisions made by such systems.
  • Robustness, security and safety: Ensure accurate and reliable AI systems that deliver robust outcomes and are protected from misuse and threats, in compliance with safety standards and regulations. For example, an AI model should have controls in place to prevent it from generating outputs when the system has been compromised.
  • Accountability: Establish clear responsibilities for AI outcomes and potential harms, such as biased decision making, by designating a business model owner with overall accountability for the model and a business technology owner for its technical deployment.
  • Privacy: Establish the right policies, procedures and controls for data handling within AI systems and ensure that any personal information is protected from unauthorised access and use. These include measures such as data anonymisation.

Establishing confidence and trust in AI models requires integrating all of the above RAI principles. Additionally, insurers that integrate a RAI framework early in the AI model development lifecycle are better positioned to earn trust and mitigate downside risks, like bias and discrimination, from the outset.

Is RAI more important now in the Generative AI era?

AI is not new to L&H insurance. Given the inherent risks of AI, it has always been important to use it responsibly within our industry. However, as newer technologies like Generative AI become mainstream and risks continue to evolve, we must adapt the application of RAI to mitigate these emerging risks. Key risks include:

  • Misinformation: the risk of AI generating incorrect or misleading output.
  • Hallucinations: the generation of nonsensical or spurious outputs.
  • Ethical concerns: the risk of biased outcomes.
  • Unknown unknowns: risks which are not known yet.

These challenges highlight the importance of RAI practices to navigate and mitigate potential risks effectively.

Interesting facts about Responsible AI

From a user's perspective, does RAI extend beyond the AI system itself?

In short, the answer is yes. RAI is about prioritising human needs in the human-AI relationship. As Generative AI increasingly spreads into the domains of creative and strategic decision-making, it becomes ever more important to design AI tools thoughtfully so that they reflect human points of view. For example, the typically conversational and versatile nature of Generative AI tools may tempt professionals to passively accept AI responses rather than actively inquire (Figure 1). This can lead to avoidable mistakes. Such over-reliance may also weaken a professional's decision-making muscle in strategic areas. 4 Behavioural scientists and user experience specialists should therefore design human-AI interactions to strike the right balance between leveraging AI and avoiding blind over-reliance on it.

Figure 1: Guiding users into the AI "goldilocks zone" to unlock the full benefits of AI

How to strike the right balance between trust and healthy scepticism?

One way to strike the right balance on the AI trust spectrum is to provide users with the expected accuracy of a Generative AI assistant, based on how often the assistant has been right on similar tasks. 5 Such ratings encourage users to review and validate responses before taking action, especially in high-stakes cases. 6
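A minimal sketch of how such an expected-accuracy rating might be surfaced alongside a response (the task types, accuracy figures, threshold and function names below are hypothetical assumptions for illustration, not data or code from any real assistant):

```python
from dataclasses import dataclass

# Hypothetical per-task accuracy statistics, e.g. derived from human review
# of past assistant outputs on similar tasks. Figures are illustrative only.
HISTORICAL_ACCURACY = {
    "medical_history_summary": 0.93,
    "financial_underwriting": 0.78,
}

@dataclass
class AssistantResponse:
    task_type: str
    text: str

def present_with_expected_accuracy(response: AssistantResponse,
                                   high_stakes: bool,
                                   review_threshold: float = 0.90) -> str:
    """Attach an expected-accuracy rating and, where warranted, a review nudge."""
    accuracy = HISTORICAL_ACCURACY.get(response.task_type, 0.0)
    lines = [
        response.text,
        f"Expected accuracy on similar tasks: {accuracy:.0%}",
    ]
    # Encourage verification when stakes are high or the track record is weak.
    if high_stakes or accuracy < review_threshold:
        lines.append("Please review and validate this response before acting on it.")
    return "\n".join(lines)

print(present_with_expected_accuracy(
    AssistantResponse("financial_underwriting", "Suggested rating: standard."),
    high_stakes=True,
))
```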

Another strategy is for the user interface to regularly prompt users to go through a task manually (e.g. every 10th time), enabling them to continue honing the relevant underlying skills while being supported by AI. 7

For example, underwriters who rely on a Generative AI assistant to handle most of their cases may be prompted from time to time to review the underlying information in detail before concluding their assessment.
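A minimal sketch of such a cognitive forcing function, assuming a simple case counter and an illustrative interval of ten (the interval and function name are assumptions, not taken from the cited study or any Swiss Re system):

```python
def manual_review_gate(case_ids, every_n: int = 10):
    """Yield (case_id, needs_manual_review) pairs from a stream of cases."""
    for i, case_id in enumerate(case_ids, start=1):
        # Every Nth case is flagged for a fully manual pass to keep skills sharp.
        yield case_id, i % every_n == 0

for case, manual in manual_review_gate([f"case-{n:02d}" for n in range(1, 21)]):
    print(case, "->", "manual review required" if manual else "AI-assisted")
```

In practice the interval could be tuned per task type or user experience level rather than fixed.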

How does RAI address human expectations, especially the AI trust gap?

Trust in AI is a determinant of user engagement with AI. 8 But earning trust depends on whether AI tools are understandable, reliable and able to meet our expectations, reflecting the RAI principles. 9 Without those ingredients, users of AI tools are prone to disengagement. 10 Indeed, workers are commonly reluctant to follow algorithmic advice, 11 and as many as three in five managers indicate they are wary of trusting AI systems. 12

How can we build trust?

Research shows that once a new technology disappoints, regaining trust and reengaging users is a challenge. 13 Thus a key to fostering long-lasting trust and engagement is to manage user expectations effectively. As an example, Swiss Re recently launched Life Guide Scout, a first-of-its-kind Generative AI-powered underwriting assistant for its underwriting manual, Life Guide. During the pilot phase, communication focused on transparency and expectation management, with particular emphasis on:

  • Data source: we informed users that all data was sourced solely from Life Guide. Accompanying each response, Life Guide Scout included links to the relevant sections of Life Guide, so users could enjoy full transparency on the data source (a sketch of this pattern follows below).
  • Development partner: we highlighted our partner, Microsoft OpenAI.
  • Phase of development: we made it clear that the assistant was in a pilot/beta phase.
  • Capabilities and limitations: we clearly highlighted Life Guide Scout's limitations.

A prominent disclaimer and feedback request made clear that AI played an advisory role in this human-AI interaction. 
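A minimal sketch of how such source-linked, clearly disclaimed responses might be structured (the data types, example text and placeholder link are illustrative assumptions, not Life Guide Scout's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    """An assistant answer plus links to the manual sections it draws on."""
    text: str
    sources: list[str] = field(default_factory=list)

def render(answer: GroundedAnswer, disclaimer: str) -> str:
    """Render the answer with its source links and an advisory disclaimer."""
    source_lines = [f"Source: {link}" for link in answer.sources] or ["Source: none found"]
    return "\n".join([answer.text, *source_lines, disclaimer])

print(render(
    GroundedAnswer(
        "Consider a standard rating, subject to the referenced criteria.",
        ["https://example.com/manual/section-12-3"],  # placeholder, not a real Life Guide URL
    ),
    disclaimer="AI-generated suggestion: the underwriter remains responsible for the final decision.",
))
```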

How can Swiss Re help you in building confidence in your AI-based model?

Conclusion

AI is rapidly transforming the L&H insurance sector, offering new opportunities for innovation and efficiency gains. However, as AI advances, new challenges and risks unfold, including bias, inaccuracy, misinformation and ethical dilemmas. To foster trustworthy and beneficial use of AI, insurers need to adopt a RAI framework to steer the development and use of AI systems according to the principles of fairness, transparency, accountability, robustness and privacy. RAI is not only about the technical aspects of AI, but also about the human-AI interaction and the alignment of AI with business objectives and societal values. By applying RAI principles and insights from behavioural science, insurers can help enhance users' confidence and trust in AI, and ultimately deliver better outcomes for their customers and society.

References

1 In line with the OECD's principles for responsible stewardship of trustworthy AI.

2 McKendrick, J. (2022). Everyone Wants Responsible Artificial Intelligence, Few Have It Yet. Forbes.

3 Singla, A.; Sukharevsky, A.; Yee, L.; Chui, M. and Hall, B. (2024). The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. McKinsey & Company.

4 Puranam, P. (2024). To Use AI Tools Smartly, Think Like a Strategist. INSEAD Knowledge. https://knowledge.insead.edu/strategy/use-ai-tools-smartly-think-strategist

5 E.g. rating the assistant on a scale from 0% to 100%, where 0% indicates "no certainty" and 100%, "full certainty".

6 Bucinca, Z.; Malaya, M.B. and Gajos, K.Z. (2021). To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 188.

7 Ibid.

8 Woodward, S.; Chatterjee, M. and O U, N. (2023). Digital trust II: A consumer perspective. Swiss Re Institute.

9 Spavound, S. and Kourentzes, N. (2022). Making Forecasts More Trustworthy. Foresight: The International Journal of Applied Forecasting, 2022:Q3, 66.

10 Goodwin, P.; Gonul, S. and Onkal, D. (2022). Commentary on "Making Forecasts More Trustworthy". Foresight: The International Journal of Applied Forecasting, 2022:Q3, 66.

11 Fildes, R.; Goodwin, P.; Lawrence, M. and Nikolopoulos, K. (2009). Effective Forecasting and Judgmental Adjustments: An Empirical Evaluation and Strategies for Improvement in Supply-Chain Planning. International Journal of Forecasting, 25, 3-23.

12 KPMG (2023). Trust in artificial intelligence: a 2023 global study on the shifting public perceptions of AI. https://kpmg.com/xx/en/home/insights/2023/09/trust-in-artificial-intelligence.html

13 Kim, T. and Song, H. (2021). How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair. Telematics and Informatics, 61: Aug 2021.

Disclaimer

Although all the information discussed in this article was taken from reliable sources, Swiss Re does not accept any responsibility for the accuracy or comprehensiveness of the information given or forward-looking statements made. The information provided and forward-looking statements made are for informational purposes only and in no way constitute or should be taken to reflect Swiss Re's position, in particular in relation to any ongoing or future dispute. In no event shall Swiss Re be liable for any financial or consequential loss or damage arising in connection with the use of this information and readers are cautioned not to place undue reliance on forward-looking statements. Swiss Re undertakes no obligation to publicly revise or update any forward-looking statements, whether as a result of new information, future events or otherwise.
