
A journey into Responsible AI: Veritas Fairness Assessment Methodology & Toolkit for the Financial Industry

28 Mar 2022

The use of Artificial Intelligence and Data Analytics (AIDA) systems enables financial services institutions (FSIs) to further automate their decision processes and offer more personalised, valuable services to their customers by harnessing the full power of the data they can access. However, without careful design and control, these AIDA systems can bring new risks and unintended harms, perpetuating or reinforcing existing disadvantages in society. With more decisions aided by AIDA systems, the fairness, ethics, accountability and transparency of these systems are becoming increasingly important to make those solutions scalable and resilient.

While principles, best practices, frameworks, data laws and regulations in this area are fast evolving across jurisdictions, it is the responsibility of each FSI to identify and address unfair outcomes from its AIDA systems and to ensure that AIDA-driven decisions do not systematically treat individuals or groups of individuals unfairly.

Regulatory authorities across the globe are evaluating the suitability of existing laws, regulations, and guidance for AIDA models, and are formulating frameworks to address the unique opportunities and challenges presented by AIDA systems. In 2018, the Monetary Authority of Singapore (MAS) released Principles to promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of Artificial Intelligence and Data Analytics. In 2020, MAS also created a consortium of 27 companies, in which Swiss Re participated, to develop the FEAT assessment methodology and apply it to selected industry use cases. The Veritas Phase 2 publications comprise the assessment methodology and use cases from financial services institutions around the FEAT principles of Fairness, Ethics, Accountability and Transparency. As part of the Veritas Phase 2 workstreams, Swiss Re was selected by MAS to work together with Accenture to refine the Fairness Assessment Methodology and apply it to a life insurance predictive underwriting use case - a simplified underwriting journey leveraging an AIDA solution to assess risk for existing customers in Singapore. MAS published the whitepapers on 4 February 2022; they are referenced here.

Proportional Fairness Assessment Methodology

The aspirational fairness assessment methodology proposes a process that enables fairness assessments across the end-to-end AIDA system development lifecycle, rather than focusing only on the algorithmic models.

The proposed methodology consists of a collection of fairness checkpoints in a typical AIDA system development lifecycle, featuring detailed considerations, additional guidance and examples for each development phase: system design, input data preparation, building and validation, and deployment and monitoring. It is also important to note that the level of FEAT fairness assessment is suggested to be proportional to the fairness risk of each use case, as these assessments incur additional costs, which ultimately get passed on to consumers. FSIs are encouraged to calibrate the assessment taking into consideration their existing corporate fairness principles, data governance frameworks and standards, the materiality of AIDA systems, the cost of FEAT assessment and potential mitigation.

In the fairness assessment methodology whitepaper, we also introduced the definitions of personal and group fairness, along with key fairness concepts referred to in the FEAT Fairness Principles, such as personal attributes, types of bias and mitigation methods, fairness objectives and fairness metrics.

Illustrating the Fairness Assessment with a Predictive Life Insurance Underwriting Use Case for the Singapore Market

Traditionally, underwriters rely on underwriting guides in the risk selection process, which are developed and built using actuarial analyses, medical journals, underwriters' experience and data collected over the years.

Predictive Underwriting enables life insurance companies to segment and underwrite risks using granular data and domain expertise from underwriters. With access to a greater variety and volumes of consented/approved data at the time of underwriting, if managed well, predictive underwriting could accelerate the decision-making process. This approach reduces costs and saves valuable time when processing an insurance application. These benefits result in an improved customer journey, increased customer satisfaction and enhanced affordability. For the insurer, the approach yields a better understanding (including financial projection) of the risk they are onboarding.

To illustrate the application of the fairness assessment methodology, we considered the scope of a predictive underwriting model to cross-sell life insurance products to existing customers in the Singapore market. The illustrative use case has been developed for a hypothetical insurer using a synthetically generated dataset to preserve data privacy. However, we have helped Life & Health insurers launch similar solutions across South East Asia, with four models currently in production.

The hypothetical insurer discussed in the use case has defined fairness standards that take into account its organisational principles and values as well as all relevant regulations, and these standards have been approved through its internal governance processes.

The fairness objective of the hypothetical insurer is: for the eligible population (those qualifying for simplified underwriting), the distribution of errors (eligible customers not offered simplified underwriting) does not differ by more than 20% among subgroups, where the 20% threshold is based on the Four-Fifths Rule. The relevant measurable fairness metric for this objective is the false negative rate (FNR) ratio (the selection process and decision points are detailed in the use case whitepaper). Gender and ethnicity were selected as the personal attributes for the fairness assessment in this illustrative example.
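The metric above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of computing per-group false negative rates and checking their ratio against a Four-Fifths-style 20% band; the function names and data are our own, not from the whitepaper, and the whitepaper's exact decision rules may differ.

```python
# Sketch (hypothetical data): per-subgroup false negative rates and a
# Four-Fifths-style check that their ratio stays within a 20% band.
from collections import defaultdict

def false_negative_rates(y_true, y_pred, groups):
    """FNR per subgroup: share of truly eligible members (y_true == 1)
    who are not offered simplified underwriting (y_pred == 0)."""
    fn = defaultdict(int)   # eligible but not offered
    pos = defaultdict(int)  # eligible
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:
            pos[g] += 1
            if pred == 0:
                fn[g] += 1
    return {g: fn[g] / pos[g] for g in pos}

def fnr_ratio_within_threshold(fnrs, threshold=0.20):
    """True if the smallest-to-largest FNR ratio across subgroups
    does not fall below 1 - threshold (the four-fifths line)."""
    lo, hi = min(fnrs.values()), max(fnrs.values())
    return hi == 0 or lo / hi >= 1 - threshold

# Hypothetical example: all eight applicants are eligible
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 0, 1, 1, 1, 0, 0, 1]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
fnrs = false_negative_rates(y_true, y_pred, groups)
# F: 1/4 = 0.25, M: 2/4 = 0.50 -> ratio 0.5, outside the 20% band
```

In this toy example the male subgroup's FNR is twice the female subgroup's, so the check fails, mirroring the kind of imbalance the assessment is designed to surface.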

From the illustrative fairness assessment, the result on ethnicity is satisfactory and thus no further action is required (i.e., there are no major differences in fairness metrics between ethnic groups). The fairness assessment result on gender is outside the limit of our acceptance threshold, suggesting a potential disadvantage for male customers relative to female customers at a similar level of actuarial risk. According to the fairness standards of this hypothetical insurer, the suggested action for this use case was to introduce mitigation methods to rectify the gender imbalance. With the fairness-performance trade-offs assessed and considered, the hypothetical insurer implemented the post-processing mitigation of gender-specific thresholds, thereby bringing the false negative rate ratio within the acceptable threshold band for the personal attribute of gender.
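The gender-specific threshold mitigation can be sketched as follows. This is a hypothetical post-processing illustration (the function names and data are our own, not from the whitepaper): for each group, pick the most selective score threshold whose FNR on a labelled validation set still meets a common target, so that FNRs, and hence their ratio, align across groups.

```python
# Sketch: post-processing mitigation via group-specific decision thresholds.
# An applicant is offered simplified underwriting when score >= threshold.
def fit_group_thresholds(scores, y_true, groups, target_fnr):
    """Per group, choose the highest threshold whose FNR on the
    eligible members (y_true == 1) does not exceed target_fnr."""
    thresholds = {}
    for g in set(groups):
        pos = [s for s, t, gg in zip(scores, y_true, groups) if gg == g and t == 1]
        candidates = sorted({s for s, gg in zip(scores, groups) if gg == g})
        best = candidates[0]
        for thr in candidates:
            fnr = sum(s < thr for s in pos) / len(pos)
            if fnr <= target_fnr:
                best = thr  # highest threshold still meeting the FNR target
        thresholds[g] = best
    return thresholds

def predict(scores, groups, thresholds):
    """Apply each applicant's group-specific threshold."""
    return [1 if s >= thresholds[g] else 0 for s, g in zip(scores, groups)]

# Hypothetical example: male scores skew lower at the same actuarial risk,
# so a single global threshold would produce a higher FNR for males.
scores = [0.4, 0.55, 0.7, 0.9, 0.6, 0.75, 0.85, 0.95]
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
thresholds = fit_group_thresholds(scores, y_true, groups, target_fnr=0.25)
# Both groups end up with FNR 0.25, so the FNR ratio is 1.0
```

Calibrating a lower threshold for the disadvantaged group is one simple form of the trade-off the whitepaper discusses: per-group FNRs equalise, at the cost of group-dependent decision rules that must themselves pass governance review.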

The Way Forward

It's still early days for the development of Responsible AI, with no defined standards and tools. During the development of the whitepapers, our workstream identified topics for future consideration where greater clarity would benefit the industry and regulators: the alignment between the data minimisation principle and fairness assessment, the alignment between AIDA and non-AIDA fairness requirements, and cross-border fairness assessments.

At Swiss Re, our Global Advanced Analytics Centre of Expertise, part of the Group Data Services unit, had delivered more than 1,300 advanced analytics projects across the world by January 2022. With strong expertise and experience in implementing data-driven solutions, we are well positioned to continue supporting our clients, the insurance industry, regulators and partners in building robust insights with responsible AIDA-enabled solutions. In parallel to integrating Responsible AI practices such as the FEAT methodology into our group data governance framework and continuing to develop new target standards and tools, Swiss Re will continue engaging with our clients, partners and regulators, participating in regulatory expert groups and contributing to studies and reports, such as project Veritas led by MAS, in alignment with our mission to make the world more resilient.

We are thankful to MAS for having selected Swiss Re to lead the Veritas Phase 2 fairness assessment workstream, and to Accenture for the strong collaboration on delivering this initiative. We believe that the fairness methodology whitepaper and this predictive underwriting use case fairness assessment will be valuable guidance for the industry.

For more details of the Veritas Phase 2 Fairness Assessment Methodology and its application to our predictive underwriting use case, please read the Veritas Document 3A - FEAT Fairness Principles Assessment Methodology and Veritas Document 4 - FEAT Principles Assessment Case Studies. You can contact us to help you integrate this into your AIDA solutions and data governance framework.
