10 August 2019

Algorithmic Bias and Tax Profiling

'The Missteps of the FIRST STEP Act: Algorithmic Bias in Criminal Justice Reform' by Raghav Kohli in (2019) 1 Journal of the Oxford Centre for Socio-Legal Studies comments
Contrary to his tough-on-crime rhetoric, Donald Trump in December 2018 signed the FIRST STEP Act (the ‘Act’) into law, criminal justice reform legislation aimed at reducing recidivism and reforming prison and sentencing laws. With an 87-12 vote in the Senate and a 358-36 vote in the House, a bitterly divided Congress approved the Act in a rare display of bipartisanship earlier that month. Apart from triggering an awakening within Congress about the dire need to decarcerate, the Act unified an unusual coterie of proponents, including tycoons such as the Koch Brothers and celebrities such as Kim Kardashian. Whilst hailed as historic and sweeping in some quarters, the Act only affects the federal system, which houses a small fraction of the United States prison population: of approximately 2.1 million people imprisoned, only 180,413 are federal inmates. Nonetheless, the Act aims to introduce several reforms. It mandates that the Department of Justice establish a ‘risk and needs assessment system’ to classify the recidivism risk of prisoners and to incentivise participation in productive activities. For instance, it allows prisoners to earn ‘time credits’ through their participation and to apply them towards early release to pre-release custody. Other changes include retrospective modification of ‘good time credit’ computation, reduced sentences for drug-related offences, and a ban on the shackling of pregnant women.
However, inmates do not benefit equally from these reforms. The risk and needs assessment system employs algorithms to classify each prisoner as having a minimum, low, medium, or high risk of recidivism. The Act permits only prisoners falling within the minimum and low risk brackets to apply their time credits towards pre-release custody.
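That two-step mechanism, an algorithmic risk classification followed by an eligibility gate on earned credits, can be made concrete with a minimal sketch. Everything below (class names, the accrual logic, the example data) is invented for illustration and does not reflect the Department of Justice's actual system:

```python
# Hypothetical sketch of the Act's gating logic, not the real DOJ tool.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    MINIMUM = "minimum"
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# Only these brackets may apply credits towards pre-release custody.
ELIGIBLE_LEVELS = {RiskLevel.MINIMUM, RiskLevel.LOW}


@dataclass
class Prisoner:
    name: str
    risk_level: RiskLevel       # output of the risk and needs assessment
    time_credits_days: int = 0  # earned through productive activities


def earn_credits(prisoner: Prisoner, days_participated: int) -> None:
    """Accrue time credits for programme participation (placeholder rate)."""
    prisoner.time_credits_days += days_participated


def may_apply_credits(prisoner: Prisoner) -> bool:
    """The eligibility gate: medium- and high-risk prisoners still earn
    credits but cannot apply them towards pre-release custody."""
    return prisoner.risk_level in ELIGIBLE_LEVELS


if __name__ == "__main__":
    a = Prisoner("A", RiskLevel.LOW)
    b = Prisoner("B", RiskLevel.HIGH)
    for p in (a, b):
        earn_credits(p, 30)
        print(p.name, p.risk_level.value, may_apply_credits(p))
    # A low True   -- may apply 30 days towards pre-release custody
    # B high False -- credits accrue but cannot be applied
```

The point of the sketch is the asymmetry in may_apply_credits: any bias in the classifier that assigns risk levels propagates directly into who can be released early, which is the harm the article goes on to examine.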
This article seeks to critically examine the impact of such algorithmic decision-making in the criminal justice system. Analysing different instances of algorithmic bias and the recent Wisconsin Supreme Court decision in State v. Loomis, it argues that opaque algorithmic decisions violate due process safeguards. It concludes that the increasing use of such algorithms in the criminal justice system, including under the FIRST STEP Act, is undesirable unless tempered by solutions that meaningfully improve their accuracy and transparency.
'Profiling tax and financial behaviour with big data under the GDPR' by Eugenia Politou, Efthimios Alepis and Constantinos Patsakis in (2019) 35(3) Computer Law & Security Review 306-329 comments
Big data and machine learning algorithms have paved the way towards the bulk accumulation of tax and financial data, which are exploited either to provide novel financial services to consumers or to augment authorities with automated conformance checks. In this regard, international and EU policies toward collecting and exchanging large amounts of personal tax and financial data, to facilitate innovation and to promote transparency in the financial and tax domains, have expanded substantially over recent years. However, this vast collection and utilization of “big” tax and financial data also raises concerns around privacy and data protection, especially when these data are fed to clever algorithms to build detailed personal profiles or to take automated decisions which may significantly affect people's lives. Ultimately, these practices of profiling tax and financial behaviour provide fertile ground for the discriminatory processing of individuals and groups.
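A toy example may help fix ideas about what an ‘automated conformance check’ on pooled financial data can look like in practice. The sketch below is entirely hypothetical: the data fields, thresholds, and rule are invented, and no real tax authority's logic is implied:

```python
# Hypothetical illustration of a purely automated tax-conformance check:
# a profile assembled from aggregated financial data feeds a rule that
# flags taxpayers for audit with no human in the loop -- the kind of
# decision GDPR Article 22 is meant to constrain.
from dataclasses import dataclass


@dataclass
class TaxProfile:
    declared_income: float  # from tax filings
    card_spending: float    # aggregated payment data (PSD2-style access)
    foreign_accounts: int   # from automatic exchange of information (AEOI)


def flag_for_audit(p: TaxProfile) -> bool:
    """Automated decision: spending well above declared income, or any
    foreign account, triggers an audit flag. The thresholds are
    arbitrary; with skewed input data, rules like this can encode the
    discriminatory profiling the paper warns about."""
    return p.card_spending > 1.5 * p.declared_income or p.foreign_accounts > 0


if __name__ == "__main__":
    print(flag_for_audit(TaxProfile(30_000, 55_000, 0)))  # True
    print(flag_for_audit(TaxProfile(80_000, 40_000, 1)))  # True
    print(flag_for_audit(TaxProfile(50_000, 20_000, 0)))  # False
```

Even this crude rule shows the paper's concern in miniature: a consequential decision produced without human involvement, from data aggregated across regimes such as PSD2 and AEOI, whose arbitrary thresholds can fall more heavily on some groups than on others.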
In light of the above, this paper aims to shed light on four interdependent and highly disputed areas: first, it reviews the most well-known profiling and automated decision risks that have emerged from big data technology and machine learning algorithmic processing, and analyses the impact of these immense profiling practices on tax and financial privacy rights; second, it documents the current EU initiatives toward financial and tax transparency, namely the AEOI, PSD2, MiFID2, and data retention policies, along with their implications for personal data protection when used for profiling and automated decision purposes; third, it highlights the way forward for mitigating the risks of profiling and automated decisions in the big data era and investigates the protection of individuals against these practices in light of the new technical and legal frameworks; finally, it delves into the EU's regulatory efforts towards fairer and more accountable profiling and automated decision processes, and in particular examines the extent to which the GDPR's provisions establish a protection regime for individuals against advanced profiling techniques, thus enabling accountability and transparency.