This paper takes the perspective of law and philosophy, integrating insights from computer science. First, I will argue that in the era of big data analytics we need an understanding of privacy that is capable of protecting what is uncountable, incalculable or incomputable about individual persons. To instigate this new dimension of the right to privacy, I expand previous work on the relational and ecological nature of privacy and on the productive indeterminacy of human identity. Second, drawing on a more in-depth study of the assumptions, operations and implications of machine learning as a practice, I will explain that this does not imply a rejection of machine learning, highlighting its alignment with purpose limitation as core to its methodological integrity. Rather than rejecting machine learning, I advocate a practice of 'agonistic machine learning' as core to a scientifically viable integration of data-driven applications into our environments, while simultaneously bringing them under the Rule of Law. This should also provide the best means to achieve effective protection against the overdetermination of individuals by machine inferences.
12 December 2017
'Privacy As Protection of the Incomputable Self: Agonistic Machine Learning' by Mireille Hildebrandt