'Hidden depths: The effects of extrinsic data collection on consumer insurance contracts' by Zofia Bednarz and Kayleen Manwaring in (2022) 45 Computer Law & Security Review comments
Commentators have predicted that the insurance industry will soon benefit from technological advancements, such as developments in Artificial Intelligence (‘AI’) and Big Data. The application of AI- and Big Data-powered tools promises cost reduction, the creation of innovative products, and the potential to offer more efficient and tailored services to consumers. However, these new opportunities are mirrored by new legal and regulatory challenges. This article discusses challenges facing Australian data protection law, focusing on the (potential) collection of consumers' data by insurers from non-traditional sources. In particular, we examine situations in which consumers may not be aware that the data collected could end up being used to price insurance. In our analysis, we discuss two useful examples of such non-traditional data sources: customer loyalty schemes and social media. These may give rise to several concerning data practices, including a significant increase in the collection of consumers' data by insurers. We argue that datafication of insurer processes may fuel excessive data collection in the context of insurance contracts, generating a substantial risk of harm to consumers, especially in terms of discrimination, exclusion, and unaffordability of insurance. We complement our analysis with a discussion of Australian insurance-specific provisions, asking if, and how, the harms examined could be adequately addressed.
The authors argue
Commentators have predicted that the insurance industry will soon benefit from technological advancements, particularly developments in Artificial Intelligence (‘AI’) and Big Data. AI- and Big Data-powered tools promise cost reduction, the creation of innovative products, and the potential to offer more efficient and tailored services to consumers. However, the new opportunities that enhanced data analytics creates for the insurance industry also trigger new legal and regulatory challenges. This article focuses on the challenges facing the regulation of data collection by insurers. We argue this regulation is problematic in light of the (increasingly common) automated processing of data and its use to underwrite consumer insurance contracts. We focus on individually underwritten policies, where collection of consumers’ data can directly translate into personalised contracts.
The increasing accessibility of consumers’ data for insurers has the potential to substantially affect the insurance industry. As detailed information about an individual insured becomes more readily available, it can be processed with tools allowing for a (theoretically) accurate prediction of the risk a particular individual presents. In consequence, the underlying paradigm of insurance may change, as risk becomes less and less uncertain and the element of unpredictability disappears.
Insurance, being a data-driven industry, has long been concerned with collecting data to predict and price risk, well before computers were invented. However, recent developments in AI- and Big Data-powered technological tools have the potential to initiate unprecedented change. These technologies allow for vast amounts of data to be generated, collected, stored and processed, and also provide tools to analyse and learn from that data. Inferences can be generated, and trends predicted, that would be otherwise unobservable to humans. Sophisticated information and insights of commercial value can be efficiently extracted from that data, encouraging data collection, sharing and aggregation.
It has been predicted that insurers will benefit from applying AI and Big Data tools to contract underwriting processes, and evidence is emerging that they are starting to do so. However, the ways in which insurance firms may collect potential insureds’ data remain relatively unexplored. There are several reasons for this, including competitive motivations for corporate secrecy linked to the commercial value of data, and consumers’ negative perception of invasive data collection by firms, which further incentivises firms to make their practices opaque. This lack of transparency about the collection of consumers’ data, and the digital profiling by insurers that results, may lead to consumer harm in the form of discrimination, exclusion, breach of privacy and unfair pricing.
In this article we explore potential data collection practices by insurers, analysing current market practices of data collection and sharing, as well as non-traditional sources of consumer data. These practices may bring with them privacy harms, as well as exclusionary conduct by insurers that may detrimentally affect already disadvantaged groups, thereby undermining the social function of insurance. We introduce the concept of ‘extrinsic data’: data from non-traditional sources that consumers may be unknowingly sharing with insurers. The use of this type of data has attracted significant attention from researchers in other contexts, such as online advertising. It is especially concerning in the context of insurance contracts for several reasons. Potential consumer harms stemming from the collection of data and its use for profiling clients and pricing insurance range from cybersecurity risks and the collection of intimate and sensitive information to inaccurate automated processing, discrimination, exclusion and digital consumer manipulation. Collection of ‘extrinsic data’ by insurers constitutes a particular challenge for privacy protection regimes. Furthermore, the fact that consumers do not expect their data to be collected and used for the purpose of pricing insurance raises important ethical questions.
We also explore whether the current legal and regulatory framework in Australia relating to data protection is adequate to address the potential changes in the industry. However, adequacy cannot be assessed by reference to consumer harm alone, as such changes will bring both benefits and disbenefits. The insurance industry is already highly regulated, and evidence is emerging that the existing framework may be hindering the use of emerging technologies by financial services firms. Therefore, any reform should balance consumer protection considerations against the identification of initiatives that would promote, not hinder, beneficial innovation.
This article proceeds as follows. Part 2.1 considers AI- and Big Data-related technologies and their application in insurance contracts. Part 2.2 outlines the research approach used in this paper to examine and respond to the use of new technologies in the insurance industry. Part 2.3 describes how the use of AI and Big Data tools may affect the underwriting of consumer insurance. Part 3 provides a brief overview of Australian privacy protection law, underpinning the subsequent analysis. Part 4 focuses on insurers’ potential access to consumers’ data. We discuss non-traditional sources of data, and outline the importance of ‘extrinsic data’ (4.1). We illustrate the potential access insurers may have to consumers’ extrinsic data (4.2) through empirical examples extracted from privacy policies of certain consumer insurance products with links to retail loyalty schemes (4.2.1), and an analysis of social media scraping practices (4.2.2). In Parts 4.2.3–4.2.6 we delineate data practices relevant to the insurance context in the light of access to extrinsic data. In Part 5 we turn to the harms which may result from insurers’ data collection and, ultimately, its use to compose or acquire digital consumer profiles. In Part 6 we focus on insurance-specific rules, asking whether sector-specific law and regulation could help address issues arising out of insurers’ potential access to consumers’ extrinsic data. We consider briefly the Privacy Act Review currently underway (6.1), and proceed to distinguish two approaches that policymakers and regulators could adopt to mitigate harm: restricting insurers’ access to external data (6.2), and mandating transparency regarding the use of machine learning models and data (6.3). Part 7 concludes this article.