22 December 2020

Learning Platforms and Datafication

'Automation, APIs and the distributed labour of platform pedagogies in Google Classroom' by Carlo Perrotta, Kalervo N. Gulson, Ben Williamson and Kevin Witzenberger in (2020) Critical Studies in Education comments 

Digital platforms have become central to interaction and participation in contemporary societies. New forms of ‘platformized education’ are rapidly proliferating across education systems, bringing logics of datafication, automation, surveillance, and interoperability into digitally mediated pedagogies. This article presents a conceptual framework and an original analysis of Google Classroom as an infrastructure for pedagogy. Its aim is to establish how Google configures new forms of pedagogic participation according to platform logics, concentrating on the cross-platform interoperability made possible by application programming interfaces (APIs). The analysis focuses on three components of the Google Classroom infrastructure and its configuration of pedagogic dynamics: Google as platform proprietor, setting the ‘rules’ of participation; the API which permits third-party integrations and data interoperability, thereby introducing automation and surveillance into pedagogic practices; and the emergence of new ‘divisions of labour’, as the working practices of school system administrators, teachers and guardians are shaped by the integrated infrastructure, while automated AI processes undertake the ‘reverse pedagogy’ of learning insights from the extraction of digital data. The article concludes with critical legal and practical ramifications of platform operators such as Google participating in education.

'The datafication of teaching in Higher Education: critical issues and perspectives' by Ben Williamson, Sian Bayne and Suellen Shay in (2020) 25(4) Teaching in Higher Education 351-365 comments 

Contemporary culture is increasingly defined by data, indicators and metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometric devices. In a ‘society of rankings’, simplified and standardized metrics act as key reference points for making sense of the world (Esposito and Stark 2019, 15). Beyond conventional statistical practices, the availability of ‘big data’ for large-scale analysis, the rise of data science as a discipline and profession, and the development of advanced technologies and practices such as machine learning, neural networks, deep learning and artificial intelligence (AI), have established new modes of quantitative knowledge production and decision-making (Kitchin 2014; Ruppert 2018). 

Although ‘datafication’ – the rendering of social and natural worlds in machine-readable digital format – has most clearly manifested in the commercial domain, such as in online commerce (e.g. Amazon), social media (Facebook, Twitter), and online advertising (Google), it has quickly spread outwards to encompass a much wider range of services and sectors. These include, controversially, the use of facial recognition and predictive analytics in policing, algorithmic forms of welfare allocation, automated medical diagnosis, and – the subject of this special issue – the datafication of education. 

Education is a particularly important site for the study of data and its consequences. The scale and diversity of education systems and practices means that datafication in education takes many forms, and has potential to exert significant effects on the lives of millions. That education is widely understood as a public good, rather than a commercial enterprise (with some exceptions) also means that the extraction of data from students, teachers, schools and universities cannot be straightforwardly analyzed as another instantiation of ‘surveillance capitalism’, that is, the gathering of the ‘raw material’ of human life en masse for analysis, sale and profit (Zuboff 2019). Instead, the datafication of education needs to be understood and analyzed for its distinctive forms, practices and consequences. Enhanced data collection during mass university closures and online teaching as a result of the 2020 COVID-19 crisis makes this all the more urgent. In this brief editorial introduction to the special issue on ‘The datafication of teaching in higher education’, we situate the papers in wider debates and scholarship, and outline some key cross-cutting themes. 

Measurement matters 

There is of course a very long history to practices, processes and technologies of datafication in which current developments in big data, AI and machine learning need to be situated (Beer 2016). The eighteenth and nineteenth centuries witnessed an outpouring of statistical knowledge production, as everything from industrial manufacturing to the natural world, and from the state of the human population to the workings of the human body itself, was subjected to quantification and increasing numerical management (Bowker 2008; Ambrose 2015). The work of modern government itself came to rely on statistics, as people, goods, territories, processes and problems were all made legible as numbers, and statistical knowledge came to ‘describe the reality of the state itself’ (Foucault 2007, 274) as part of the ‘machinery of government’ (Rose 1999, 213). 

The statistical machinery of the nineteenth- and twentieth-century state is now, in the twenty-first century, shadowed by a vast complex of data infrastructures, platforms, devices, and analytics organizations from across the public, charitable and private sectors, as big data has itself become a new source of knowledge, governance and control (Bigo, Isin, and Ruppert 2019). Social media platforms, web interactions, financial transactions, public surveillance networks, online commerce, business software, mobile phone location services, wearable devices, and even connected objects in the Internet of Things have become key sources of knowledge for those authorities with access to the data they produce (Marres 2017). Governments are increasingly turning to digital services in order to generate detailed information about the populations they govern, including controversial attempts to introduce public facial recognition systems for purposes of individual identification (Crawford and Paglen 2019). Through machine learning, neural nets and deep learning, so-called AI products and platforms can now ‘learn from experience’ in order to optimize their own functioning and adapt to their own use (Mackenzie 2018). Nineteenth- and twentieth-century ‘trust in numbers’ has metamorphosed into a ‘dataist’ trust in the ‘magic’ of digital quantification, algorithmic calculation, and machine learning (Elish and boyd 2018). 

Dataism is a style of thinking that is integrally connected to processes of neoliberalization, as competitive logics and the desire to compare the performance of entities against each other, as if they are competing in markets, have been incorporated into various forms and technologies of measurement. Beer (2016, 31) argues that this period of intensive quantification is governed under a particular neoliberal system of ‘metric power’, and that ‘understanding the intensification of measurement, the circulation of those measures and then how those circulations define what is seen to be possible, represents the most pressing challenge facing social theory and social research today’. He suggests a number of key themes for understanding metric power (Beer 2016, 173–77). Data and metrics set limits on what can be known and what can be knowable. They define what is rendered visible or left invisible, thereby shaping how certain practices, objects and behaviours gain value while others go unmeasured and unvalued. Measurement involves classification, sorting, ordering, and categorizing people and things, which defines how they are known and treated. It also prefigures judgment: by setting desired aims and outcomes, measurement brings into the present the future it is designed to help achieve. Data-based processes also expand into new tasks, functions and programmes, and intensify their influence. The intensification of measurement leads to forms of authorization and endorsement of certain outcomes, people, actions, systems, and practices, thus marking out what is claimed to be truthful. It also involves increasing automation, which shapes human agency and decision-making – automated systems of computation are taken as objective, legitimate, fair, neutral and impartial, and impinge on human judgement.
Finally, metrics induce affective reactions, such as anxiety or competitive motivation, and thereby promote or produce actions, behaviours, and pre-emptive responses by prompting people to perform in ways that can be valued, compared and judged in measurable terms. 

The power of metrics to affect how social and natural worlds are known and compared, and therefore to shape how they are treated and changed, means that measurement matters. Data and metrics do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place. Data practices materialize the competitive neoliberal impulse to ensure efficient market functioning and constant improvement through measurement, the hierarchization of winners and losers, and the attribution of quantitative value. This can be fairly mundane, as in the case of an online retailer recommending future goods to purchase based on past purchasing records and comparison against millions of other shoppers, where the measured market is the source of the recommendation. Media streaming services constantly capture data about consumption habits, and feed that back into recommended shows and playlists. The metrics in such cases include favoured genres, time spent listening or watching, artists or shows selected and so on. This may seem fairly banal, yet it is shaping cultural habits and individual tastes. But measurement also matters for even more consequential reasons. It has changed the ways economies function, serving hypercapitalist objectives of making data into a key source of market value (Fourcade and Healy 2017). Surveillance systems such as predictive policing and facial recognition disproportionately focus suspicion on ethnic minority groups, and reinforce longstanding structural inequalities in societies (Crawford and Paglen 2019), as judgments are made based on various forms of comparison and prediction.

Education has long been subject to historical forms of ‘datafication’ (Lawn 2013), but the quantification, measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is intensifying and expanding rapidly. Higher Education is itself implicated in neoliberalizing forms of metric power, as various technologies of data-based measurement and evaluation impose limits on what is made visible and known, sort people and outcomes into (sometimes hierarchical) categories, establish measurable aims, expand to new tasks, establish what is claimed to be true or valuable, impose automation on decision-making, and affect the ways people feel, act and behave.

In referring to data subjects, the authors argue 

 The datafication of human beings affects how they are understood, treated, and acted upon. The concept of the ‘data double’ usefully refers to how digital profiles can be created from the activities of individuals (Raley 2013). These profiles, or shadows, then become the basis for various forms of analysis and calculation, which circle back into individual experiences. To use the social media streaming example, the data double captured inside the database is used to make recommendations, which affects the consumer experience outside the database (Cheney-Lippold 2011). The individual becomes a data subject, defined and characterized algorithmically by being sorted into categories and predicted outcomes. 

The construction of data doubles in education is especially consequential, since anything that is modelled inside the database then affects the potentially life-changing experience of teaching and learning. A prediction of future progress based on past outcomes could radically affect the future prospects of the student by foreclosing curriculum opportunities. Forms of algorithmic education, in other words, deeply affect data subjects. In their paper, Harrison et al. (2020) draw attention to how datafication both affects teaching and learning and shapes subjectivities. They refer to ‘student data subjects’, which are assembled from digital traces of educational activity. Teachers, too, are increasingly known, evaluated and judged through data, and come to know themselves as datafied teacher subjects. 

This datafication of student and teacher subjects prefigures a potentially profound transformation in how students and teachers understand themselves and in how they are understood and managed as learners and professionals. As Marachi and Quill (2020) emphasize in this issue, where ‘frictionless’ data transitions are enabled between primary, secondary and tertiary education and even the employment contexts of individuals, the data subject risks becoming a lifelong ‘shadow’ with potential impact which may be far from benign. Marachi and Quill call for greater awareness, routine interrogation of data-sharing practices and critical distance between higher education institutions and ‘edtech’ platform partners promising ‘enhancement’ through data processing, the constitution of data subjects and the promises of ‘personalization’. Such changes may also demand that educators and students develop critical skills of using and evaluating data.