30 July 2019

Pricing Algorithms and Data Trusts

'The Price Is (Not) Right: Data Protection and Discrimination in the Age of Pricing Algorithms' by Laura Drechsler and Juan Carlos Benito Sanchez in (2018) 9(3) European Journal of Law and Technology comments
In the age of the large-scale collection, aggregation, and analysis of personal data ('Big Data'), merchants can generate complex profiles of consumers. Based on those profiles, algorithms can then try to match customers with the highest price they are willing to pay. But this entails the risk that pricing algorithms rely on certain personal characteristics of individuals that are protected under both data protection and anti-discrimination law. For instance, relying on the user's ethnic origin to determine pricing may trigger the special protection foreseen for sensitive personal data and the prohibition of discrimination in access to goods and services. Focusing on European Union law, this article seeks to answer the following question: What protection do data protection law and anti-discrimination law provide for individuals against discriminatory pricing decisions taken by algorithms? Its originality resides in an analysis that combines the approaches of these two disciplines, presenting the commonalities, the advantages of an integrated approach, and the misalignments currently existing at the intersection of EU data protection and anti-discrimination law.
'Data Trusts as an AI Governance Mechanism' by Chris Reed and Irene Ng comments
This paper is a response to the Singapore Personal Data Protection Commission consultation on a draft AI Governance Framework. It analyses the five data trust models proposed by the UK Open Data Institute and identifies that only the contractual and corporate models are likely to be legally suitable for achieving the aims of a data trust. 
The paper further explains how data trusts might be used in the governance of AI, and investigates the barriers which Singapore's data protection law presents to the use of data trusts and how those barriers might be overcome. Its conclusion is that a mixed contractual/corporate model, with an element of regulatory oversight and audit to ensure consumer confidence that data is being used appropriately, could produce a useful AI governance tool.