Generative AI
The rapid development of generative artificial intelligence (AI) systems opens up opportunities in underwriting and claims processes. It also raises questions of data ownership and associated risks.
Large data sets form the core of generative AI models: sets of algorithms that can create seemingly realistic content such as text, images or audio from the data on which they are trained.1
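To make this definition concrete, the toy sketch below "trains" a character-level model on a few sentences and samples new text from it. It is purely illustrative; the corpus and function names are invented for this example, and real generative AI systems rely on large neural networks and vastly bigger data sets.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Record, for each character, which characters follow it in the training data."""
    transitions = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions: dict, seed: str, length: int = 80) -> str:
    """Sample new text one character at a time from the learned transitions."""
    out = seed
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: no continuation was observed in training
            break
        out += random.choice(followers)
    return out

corpus = "insurers assess risk. insurers price risk. claims follow losses."
model = train_bigram_model(corpus)
print(generate(model, seed="i"))
```

Even this toy model shows the essential property: everything it can produce is recombined from its training data, which is why the quality and provenance of that data matter so much.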
Who owns what?
The collection of large data sets raises questions about who owns the data, as well as concerns about data quality and bias. Ownership in this context is still a relatively new legal field, but in the US and UK a number of lawsuits have already been filed challenging the data used in, and produced by, generative AI models.2 Recently, several artists whose images had been used to train generative AI tools filed a class action.3 In the meantime, several platforms have banned AI-generated art.4
In the US and in Europe, class actions against the technology sector have been on the rise.5 This poses a risk for the insurance industry, as class actions can be expensive. While legal disputes, together with existing and new legislation, will help establish guiding principles, the AI industry will likely adapt to tech-generated disruption much as other sectors disrupted by innovation have done.6 For example, 15 years ago the music and movie industries suffered losses and fought over rights related to illegal downloads. Innovation then brought platforms that allow consumers to listen to and view online content legally while compensating rights holders.
Concerns around data quality and biases
Another risk relevant to the insurance industry is that publicly available generative AI systems can discourage people from seeking professional medical, legal or financial advice. These systems make predictions using training data only; they do not access real-time data and may therefore generate misleading information. Based on such information, an individual may decide not to seek, for instance, medical help when a visit to the doctor is in fact the appropriate course of action. The outcome can be a worsening of health status, requiring more intensive (and expensive) treatment in the future and, in turn, higher claims on health insurance policies.
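As a minimal sketch of how this limitation might be surfaced to users, the code below wraps a placeholder model answer with notices about the training cutoff and the need for professional advice. The cutoff date, topic list and function names are assumptions for illustration only, not any vendor's actual safeguards.

```python
from datetime import date

TRAINING_CUTOFF = date(2021, 9, 1)  # assumed cutoff, for illustration only
SENSITIVE_TOPICS = ("medical", "legal", "financial")

def answer_with_guardrails(question: str, model_answer: str) -> str:
    """Append staleness and professional-advice notices to a raw model answer.

    `model_answer` stands in for the output of a real generative model;
    this wrapper only illustrates the guardrail logic."""
    notices = [
        f"Note: this answer reflects training data up to {TRAINING_CUTOFF:%B %Y} "
        "and does not use real-time information."
    ]
    if any(topic in question.lower() for topic in SENSITIVE_TOPICS):
        notices.append("For medical, legal or financial decisions, consult a professional.")
    return model_answer + "\n\n" + "\n".join(notices)

print(answer_with_guardrails(
    "Is this medical symptom serious?",
    "It is probably nothing to worry about."  # illustrative model output
))
```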
The question comes down to the quality and quantity of data used to develop generative AI systems. An example from Finland, predating the recent development of generative AI systems, illustrates how regulators may find the use of non-individual data discriminatory. The National Non-Discrimination and Equality Tribunal prohibited a credit institution from using a decision-making method based on criteria such as gender, first language, age and residential area, where those criteria were themselves derived from general statistical data and information on payment defaults. The Tribunal found that the applicant's creditworthiness was assessed as weaker than it would have been had other information been used.7 Further, several data protection authorities are investigating complaints relating to the use of personal data by generative AI systems, and to the invented (inaccurate) personal data such systems can produce.8 In the EU, the European Data Protection Board has launched a task force to support its members on data protection issues linked to generative AI.9
If the training data is too limited, biased or incorrect, the predictions and answers generated will likewise be biased or inappropriate. This could lead to medical malpractice claims or professional liability lawsuits. On the other hand, generative AI models can be trained on a company's proprietary knowledge base and processes. For example, insurers could deploy generative AI models to improve the efficiency of in-house manual underwriting and claims processes.
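As a rough sketch of that idea, the example below grounds a model's answer in an insurer's own documents before generation, a common retrieval-augmented pattern. Everything here, including embed(), call_llm() and the sample knowledge base, is a hypothetical placeholder rather than a description of any specific product.

```python
def embed(text: str) -> set:
    """Toy 'embedding': the set of lowercased words (real systems use vectors)."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the largest word overlap with the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: len(q & embed(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a real generative-model API."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

# Hypothetical in-house knowledge base of claims and underwriting rules.
claims_kb = [
    "Water damage claims require photos and a plumber's report.",
    "Motor claims above EUR 5,000 need an independent assessor.",
    "Flood-zone properties need elevation certificates before underwriting.",
]

question = "What documents are needed for a water damage claim?"
context = "\n".join(retrieve(question, claims_kb))
print(call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

Grounding answers in proprietary documents keeps the model's output tied to material the insurer controls, which limits the exposure to invented content described above.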
References
1 See “Generative AI”, BCG (accessed 19 April 2023).
2 “Getty Images Statement”, Getty Images, 2023; “AI Art Generator Copyright Litigation”, Joseph Saveri Law Firm (both accessed 16 February 2023); “AI Drake just set an impossible legal trap for Google”, The Verge, 19 April 2023.
3 “GitHub Copilot litigation”, Joseph Saveri Law Firm & Matthew Butterick, 3 November 2022; “Stable Diffusion litigation”, Joseph Saveri Law Firm & Matthew Butterick, 13 January 2023 (both accessed 16 February 2023).
4 “Getty Images Content License Agreement”, Getty Images, August 2022; “Newgrounds Wiki – Art Guidelines”, Newgrounds; “Artmageddon: The rise of the machines, and banning machine-generated images”, Purpleport, 14 September 2022 (all accessed 16 February 2023).
5 “European Class Actions Report”, CMS, 2021; “Class & Group Actions”, Global Legal Group, 2021.
6 “After inking its OpenAI deal, Shutterstock rolls out a generative AI toolkit to create images based on text prompts”, TechCrunch, 25 January 2023.
7 “Plenary session (voting)”, National Non-Discrimination and Equality Tribunal of Finland, 21 March 2018.
8 “Factbox: Governments race to regulate AI tools”, Reuters, 4 May 2023; “Federal privacy watchdog probing OpenAI, ChatGPT following complaint”, CBC, 4 April 2023.
9 “EDPB resolves dispute on transfers by Meta and creates task force on Chat GPT”, EDPB, 13 April 2023; “European privacy watchdog creates ChatGPT task force”, Reuters, 14 April 2023.