
24 September 2024

Datafication

'Monetising Digital Data in Higher Education: Analysing the Strategies and Struggles of EdTech Startups' by Janja Komljenovic, Kean Birch and Sam Sellar in (2024) Postdigital Science and Education comments  

Digital data are perceived to be valuable in contemporary economies and societies. Since the 2011 World Economic Forum described personal data as a ‘new asset class’ that underpins the development of new products and services (World Economic Forum 2011), policymakers, economic and social actors, and scholars have sought to understand how data create both commercial and social value. For example, digital markets and data have become so important for our economies that in 2022–2023, the European Union introduced the Digital Markets Act to bring order to the digital economy, the Digital Services Act to harmonise rules for online intermediary services and create a safe online environment, and the European Data Act to facilitate the use and exchange of digital data for economic and social benefit. 

However, digital data are neither inherently valuable nor simply ‘out there’ waiting to be collected and exploited. Instead, data and data products are constructs of political-economic and socio-technical arrangements, which also create conditions for data monetisation (Birch 2023). We are particularly interested in user data, i.e. digital data that are logged and collected as an outcome of an individual engaging with a digital platform. User data include, but are not limited to, personal data. Scholars have analysed how user data are imagined to be made valuable in various sectors, such as in healthcare via behavioural nudging (Prainsack 2020), in insurance via personalisation (McFall et al. 2020), or in the application of big data to food and agriculture (Bronson and Knezevic 2016). The literature also highlights the risks and adverse effects of datafication, including surveillance (Zuboff 2019) and various forms of population control and exploitation (Sadowski 2020). In each case, for digital user data to be made useful and valuable, data must be collected, analysed, and processed to produce various digital products and outputs, such as algorithms, analytics (e.g. scores, metrics), automated decisions, or dashboards (Mayer-Schönberger and Cukier 2013).

As the datafication of our economies and societies has expanded in general, so too has it impacted higher education (HE). Datafication refers to the ‘quantification of human life through digital information, very often for economic value’, with important social consequences (Mejias and Couldry 2019: 1). In education, datafication consists of data collection from all processes in educational institutions at all scales and levels, impacting stakeholder practices (Jarke and Breiter 2019). In HE, policymakers attempt to improve university quality, efficiency, and impact via datafication at the sectoral and institutional levels. For example, the UK Higher Education Statistics Agency (HESA) established a Data Futures programme as an infrastructure for datafying the sector and collecting and collating data from universities (Williamson 2018), with an alpha phase launched in 2021–2022. Moreover, Jisc, a HE sectoral agency providing network and IT services, supports universities with various initiatives, such as the Data Maturity Framework launched in 2024, which universities can use as a template to improve data capabilities and datafy their institutions. Digital data, then, is one of the foundational elements of postdigital education because digital technologies that staff and students use every day are increasingly data-based (Jandrić et al. 2024; Jandrić and Knox 2022). 

User data in HE are not only valuable for universities and policymakers but also for the EdTech industry. Scholars aligning themselves with the field of critical data and platform studies in education (Decuypere et al. 2021) have already conducted excellent research into various aspects of data practices related to economic value, such as EdTech’s commercial interests not always sitting well with user privacy (Hillman 2022) and the work needed to produce and manage school data (Selwyn 2020). Specifically in HE, emerging work has found that EdTech companies turn user data into assets they control (Hansen and Komljenovic 2023). EdTech incumbents such as Pearson have evolved into data organisations with intensive mobilisation of data analytics for impacting HE processes and governance (Williamson 2016, 2020). Research has also identified tensions and unintended consequences in relation to data work at universities (Selwyn et al. 2018), pedagogic, cultural, and social effects (Williamson et al. 2020), and the need for universities to pay greater attention to privacy issues and data standards in procurement processes (Ali et al. 2024). Thus, research in this field highlights (1) the relations between EdTech companies and universities as pivotal and (2) the dynamics of the EdTech industry as being highly relevant for the sector.

Data in HE are understood to be valuable in terms of their use, which is mostly the ambition of universities, and in economic terms, which is mostly the concern of the EdTech industry (Komljenovic et al. 2024a, b). In this article, we contribute to the literature by examining strategies employed by EdTech startups to make digital and personal data valuable in HE and the struggles that these startups confront. In other words, we examine the economic dimension of postdigital HE, which is co-constitutive of the socio-material assemblages of digital products and services (Knox 2019; Lupton 2018). Understanding how digital data can be made economically valuable is important because the monetisation of user data is consequential for university practices and the nature of postdigital HE, and because governments and organisations see digital data as the premise of contemporary economies in which HE is embedded. Moreover, we specifically focus on EdTech startups because of the promised transformation and disruption that they seek to achieve in HE (Decuypere et al. 2024; Ramiel 2020). As a result, we can reasonably expect these companies to be leaders of datafication processes. 

In what follows, we first elaborate on our conceptual and empirical approach. We then move to discuss the economic construction of data value by EdTech startups and the challenges they confront, before concluding with some reflections on the impact that data monetisation has in HE.

25 October 2023

EdTech

'When public policy ‘fails’ and venture capital ‘saves’ education: Edtech investors as economic and political actors' Janja Komljenovic, Ben Williams, Rebecca Eynon and Huw C Davies in (2023) Globalization, Societies and Education comments 

Educational technology (Edtech) investors have become increasingly influential in education; however, they remain under-researched. We address this deficit and introduce the grammar and landscape of Edtech investment into education research. We empirically examine venture capital Edtech investors and argue that they are economic and political actors. Investors construct the Edtech industry through their investment and advancing particular imaginaries. They legitimate their authority in education through narratives of expertise and measures of social impact. They consolidate the Edtech industry by constructing social networks to perform the political work of futuring. The analysis provides original insights into the power of Edtech investors in education and proposes a research agenda examining new relations between the education, technology, and finance industries. 

Educational technology (Edtech) increasingly structures teaching and learning processes, determines how education is governed, and reframes educational purposes and aims (Decuypere, Grimaldi, and Landri  2021). Since Edtech is so impactful for education, it matters what kind of Edtech is incubated, innovated, and rolled out into the sector. The nature of Edtech is determined by socio-techno-financial processes resulting from power struggles between various actors (Komljenovic  2021). We argue it is investors who increasingly influence the nature of Edtech. They can realise future visions by structuring the direction of entire industries through their funding priorities (Cooiman  2022). However, they do more than only invest financial resources; they conduct studies, issue reports, educate entrepreneurs and other actors, organise networking, work with policymakers, and more (Williamson and Komljenovic  2023). Hence, investment and consequent actions are as much political decisions about the future as they are financial decisions about funding startup companies. What can and cannot exist is determined by an investment decision (Feher  2018), and investors seek to materialise particular visions of futures through very laborious actions that accompany the investment itself (Muniesa et al.  2017). 

Historically, investors were hesitant to invest in the education sector due to low returns, long investment cycles, fragmented markets, heavy regulation, and public hesitancy towards privatisation. This has changed with the emergence and growth of Edtech, akin to other sectors in the digital economy, accelerated by the pandemic (Teräs et al. 2020). Among investors, education via Edtech is seen as an enormous growth opportunity, one of the last sectors yet to be digitalised. In other words, Edtech made education investable.

The Edtech industry is relatively young. While we can trace the use of the first computers for academic research back to the mid-1940s and their first use in university and school classrooms to the 1960s (Molnar 1997), the Edtech industry as we know it today developed in the early 2010s. Since 2010, the number of newly established Edtech companies has sharply increased (Komljenovic, Sellar, and Birch 2021). Venture capital (VC) investment in Edtech rose from $500 million in 2010 to more than $20 billion in 2021 (HolonIQ, as of 24 November 2022: https://www.holoniq.com/notes/global-Edtech-venture-capital-report-full-year-2021). The COVID-19 pandemic further accelerated investment in Edtech and its use in education (Williamson and Hogan 2020). The Edtech industry is now consolidating, as indicated by the rising value of individual investments into particular companies and an increasing number of acquisitions (Brighteye Ventures 2022), pointing to the emergence of ‘Big Edtech’ (Williamson 2022). The number of Edtech ‘unicorns’, companies valued at more than $1 billion, increased from 0 in 2014 to 62 in 2021 (Brighteye Ventures 2022). An important reason that the Edtech industry has grown and consolidated is capital investment.

Surprisingly, Edtech investors, particularly VC investors, remain under-researched in education research. In this article, we ask who Edtech investors are, how they operate, and what the consequences are. We argue that Edtech investors have become economic and political actors in the education ensemble of multisector influences on policy and practice (Robertson and Dale 2015) who need to be brought into research focus. We address the research gap by discussing Edtech investors’ operations, exploring the political and economic actions of two Edtech VC investors through an original empirical study, and proposing a research programme to investigate these key actors further.

We proceed as follows. First, we provide a brief overview of the practices of Edtech investors to illuminate the investment landscape and its grammar. We then explain our approach to the empirical study. We proceed by discussing three forms of VC investors’ economic and political labouring of making the Edtech industry, legitimating their role, and consolidating the industry. We conclude by reflecting on the implications for education.

08 February 2023

TechnoFixes and EdTech

'The Technological Fix as Social Cure-All: Origins and Implications' by Sean F Johnston in (2018) 37(1) IEEE Technology and Society Magazine 47-54 comments 

In 1966, a well-connected engineer posed a provocative question: will technology solve all our social problems? He seemed to imply that it would, and soon. Even more contentiously, he hinted that engineers could eventually supplant social scientists - and perhaps even policy-makers, lawmakers, and religious leaders - as the best trouble-shooters and problem-solvers for society [1]. The engineer was the Director of Tennessee's Oak Ridge National Laboratory, Dr. Alvin Weinberg. As an active networker, essayist, and contributor to government committees on science and technology, he reached wide audiences over the following four decades. Weinberg did not invent the idea of technology as a cure-all, but he gave it a memorable name: the “technological fix.” This article unwraps his package, identifies the origins of its claims and assumptions, and explores the implications for present-day technologists and society. I will argue that, despite its radical tone, Weinberg's message echoed and clarified the views of predecessors and contemporaries, and the expectations of growing audiences. His proselytizing embedded the idea in modern culture as an enduring and seldom-questioned article of faith: technological innovation could confidently resolve any social issue. ... 


Weinberg’s rhetorical question was a call-to-arms for engineers, technologists, and designers, particularly those who saw themselves as having a responsibility to improve society and human welfare. It was also aimed at institutions, offering goals and methods for government think-tanks and motivating corporate mission-statements (e.g., [3]). 

The notion of the technological fix also proved to be a good fit to consumer culture. Our attraction to technological solutions to improve daily life is a key feature of contemporary lifestyles. This allure carries with it a constellation of other beliefs and values, such as confidence in reliable innovation and progress, trust in the impact and effectiveness of new technologies, and reliance on technical experts as general problem-solvers.  

This faith can nevertheless be myopic. It may, for example, discourage adequate assessment of side-effects — both technical and social — and close examination of political and ethical implications of engineering solutions. Societal confidence in technological problem-solving consequently deserves critical and balanced attention. 

Adoption of technological approaches to solve social, political and cultural problems has been a longstanding human strategy, but is a particular feature of modern culture. The context of rapid innovation has generated widespread appreciation of the potential of technologies to improve modern life and society. The resonances in modern culture can be discerned in the ways that popular media depicted the future, and in how contemporary problems have increasingly been framed and addressed in narrow technological terms. 

While the notion of the technological fix is straightforward to explain, tracing its circulation in culture is more difficult. One way to track the currency of a concept is via phrase-usage statistics. The invention and popularity of new terms can reveal new topics and discourse. The Google N-Gram Viewer is a useful tool that analyzes a large range of published texts to determine frequency of usage over time for several languages and dialects [4], [5]. 
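The phrase-usage method described here can be sketched computationally. The function below computes what the N-Gram Viewer reports in essence: a phrase's occurrences in each year's texts divided by the total number of n-grams of the same length. The mini-corpus is invented purely for illustration and is not drawn from Google's data.

```python
from collections import defaultdict

def phrase_frequency_by_year(corpus, phrase):
    """Relative frequency of `phrase` per year: occurrences divided by
    the total number of n-grams of the same length in that year's texts,
    mirroring in miniature what the Google N-Gram Viewer reports."""
    n = len(phrase.split())
    hits = defaultdict(int)
    totals = defaultdict(int)
    for year, text in corpus:
        tokens = text.lower().split()
        ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        totals[year] += len(ngrams)
        hits[year] += sum(1 for g in ngrams if g == phrase.lower())
    return {year: hits[year] / totals[year] for year in totals if totals[year]}

# Invented mini-corpus: the phrase gains currency after the mid-1960s.
corpus = [
    (1950, "engineers sought a technical solution to every social problem"),
    (1966, "the technological fix was offered as a cure for social problems"),
    (1970, "critics asked whether a technological fix could replace a technological fix"),
]
freqs = phrase_frequency_by_year(corpus, "technological fix")
```

On real corpora the same calculation is done over billions of scanned book pages, with normalisation choices (smoothing, case sensitivity, corpus composition) that materially affect the curves.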

In American English, the phrase technological fix emerges during the 1960s and proves more enduring and popular than the less precise term technical fix. 

We can track this across languages. In German, the term technological fix has had limited usage as an untranslated English import, and is much less common than the generic phrase technische Lösung (“technical solution”), which gained ground from the 1840s. In French, too, there is no direct equivalent, but the phrase solution technique broadly parallels German and English usage over a similar time period. And in British English, the terms technological fix and technical fix appear at about the same time as American usage, but grow more slowly in popularity. Usage thus hints that there are distinct cultural contexts and meanings for these seemingly similar terms. Its varying currency suggests that the term technological fix became a cultural export popularized by Alvin Weinberg’s writings on the topic, but related to earlier discourse about technology-inspired solutions to human problems. 

Such data suggest rising precision in writing about technology as a generic solution-provider, particularly after the Second World War. But while the modern popularization and consolidation of the more specific notion of the “technological fix” can be traced substantially to the writings of Alvin Weinberg, the idea was promoted earlier in more radical form.

In 'Automating Learning Situations in EdTech: Techno-Commercial Logic of Assetisation' by Morten Hansen and Janja Komljenovic in (2023) 5 Postdigital Science and Education 100–116 the authors comment 

 Critical scholarship has already shown how automation processes may be problematic, for example, by reproducing social inequalities instead of removing them or requiring intense labour from education institutions’ staff instead of easing the workload. Despite these critiques, automated interventions in education are expanding fast and often with limited scrutiny of the technological and commercial specificities of such processes. We build on existing debates by asking: does automation of learning situations contribute to assetisation processes in EdTech, and if so, how? Drawing on document analysis and interviews with EdTech companies’ employees, we argue that automated interventions make assetisation possible. We trace their techno-commercial logic by analysing how learning situations are made tangible by constructing digital objects, and how they are automated through specific computational interventions. We identify three assetisation processes: First, the alienation of digital objects from students and staff deepens the companies’ control of digital services offering automated learning interventions. Second, engagement fetishism—i.e., treating engagement as both the goal and means of automated learning situations—valorises particular forms of automation. And finally, techno-deterministic beliefs drive investment and policy into identified forms of automation, making higher education and EdTech constituents act ‘as if’ the automation of learning is feasible. 

 Education technology (EdTech) companies are breathing new life into an old idea: education progress through automation (Watters 2021). EdTech companies are interested in portraying these processes as complex and bringing significant value to the learner and her educational institution, even when actual practices do not always reflect such imaginaries (Selwyn 2022). For example, EdTech companies may claim that artificial intelligence (AI) is a key part of their product, when in fact, actual computations are much simpler. It is therefore vital to disentangle EdTech companies’ imagined and actual automation practices. 

We propose the concept of ‘automated learning situations’ to disentangle automation imaginaries from actual practice. ‘Learning situations’ are the relationships between students, teachers, and learning artefacts in educational contexts. ‘Automated’ learning situations refer to automated interventions in one or more of these relationships. In practice, EdTech companies automate learning situations by capturing student actions on digital platforms, such as clicks, which they then use for computational intervention. For example, an EdTech platform may programmatically capture how a student engages with digital texts before computing various engagement scores or ‘nudges’ in order to affect her future behaviour. 
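The computation described above — capturing platform events and turning them into an engagement score that can trigger a 'nudge' — is often far simpler than the AI imaginaries suggest. The sketch below illustrates this minimally; the event names, weights, and threshold are invented for illustration and are not taken from any actual EdTech product.

```python
from collections import Counter

# Hypothetical weights for platform events; a real product would use its
# own proprietary event taxonomy and scoring model.
EVENT_WEIGHTS = {"page_view": 1, "highlight": 3, "note": 5, "quiz_attempt": 8}
NUDGE_THRESHOLD = 10  # below this score, the platform prompts the student

def engagement_score(events):
    """Sum weighted event counts into a single per-student score."""
    counts = Counter(events)
    return sum(EVENT_WEIGHTS.get(e, 0) * n for e, n in counts.items())

def should_nudge(events):
    """Automated intervention rule: nudge when engagement is low."""
    return engagement_score(events) < NUDGE_THRESHOLD

active = ["page_view", "highlight", "note", "quiz_attempt"]  # score 17
passive = ["page_view", "page_view", "page_view"]            # score 3
```

A weighted sum over click counts like this can nonetheless be marketed as 'AI-driven personalisation', which is precisely the gap between imagined and actual automation practices that the authors seek to disentangle.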

It is useful to conceptualise such automation as techno-material relations mapped along two dimensions: digital objects and computing approaches. While current literature on EdTech platforms has already uncovered how platformisation reconfigures pedagogical autonomy, educational governance, infrastructural control, multisided markets, and much more (e.g. Kerssens and Van Dijck 2022; Napier and Orrick 2022; Nichols and Garcia 2022; Williamson et al. 2022), the two dimensions bring more conceptual clarity to the technological possibilities and limitations of actually existing automation practices. Furthermore, they allow us to unpack techno-commercial relationships between emergent automation and assetisation processes. 

EdTech is embedded in the broader digital economy, which is increasingly rentier (Christophers 2020). This means that there is a move from creating value via production and selling commodities in the market, to extracting value through the control of access to assets (Mazzucato 2019). Assetisation is the process of turning things into assets (Muniesa et al. 2017). Depending on the situation, different things and processes can be assetised in different ways (Birch and Muniesa 2020). This includes taking products and services previously treated as commodities—something that can be owned through purchase and consequently fully controlled—and transforming them into something that can only be accessed through payment without change in ownership (Christophers 2020).

A useful example is accessing textbooks in a digital form by paying a subscription to a provider such as Pearson+, instead of purchasing and owning physical book copies. Assetising a medium of delivery changes the implications for the user. For example, when customers buy a book, they own the material object but not the intellectual property (IP) rights. With the ownership of the book itself, i.e., the physical object, comes a measure of control: they can read the textbook as many times and whenever they want, write in the book, highlight passages, sell it to someone else, use it for some other purpose entirely, or even destroy it. On the contrary, paying a fee for accessing the electronic book via a platform transforms how users can engage with the content because the platform owner holds the control and follow-through rights (cf. Birch 2018): they decide when books are added and removed, what users can do with the book and for how long, and—crucially—what happens to associated user data. Generating revenue from a thing while maintaining ownership, control, and follow-through rights is an indication that this thing has been turned into an asset for its owner.

We, therefore, ask: does the automation of learning situations contribute to assetisation processes in EdTech, and if so, how?

In what follows, we first present our conceptual and methodological approach. We then unpack the digital objects used to construct learning situations. Next, we discuss how interventions are automated differently depending on computing temporalities and complexities. We conclude by discussing three assetisation processes identified in the automation of learning situations: the alienation of digital objects from students and staff, the fetishisation of engagement, and techno-deterministic beliefs leading to acting ‘as if’ automation is feasible.

22 December 2020

Learning Platforms and Datafication

'Automation, APIs and the distributed labour of platform pedagogies in Google Classroom' by Carlo Perrotta, Kalervo N. Gulson, Ben Williamson and Kevin Witzenberger in (2020) Critical Studies in Education comments 

Digital platforms have become central to interaction and participation in contemporary societies. New forms of ‘platformized education’ are rapidly proliferating across education systems, bringing logics of datafication, automation, surveillance, and interoperability into digitally mediated pedagogies. This article presents a conceptual framework and an original analysis of Google Classroom as an infrastructure for pedagogy. Its aim is to establish how Google configures new forms of pedagogic participation according to platform logics, concentrating on the cross-platform interoperability made possible by application programming interfaces (APIs). The analysis focuses on three components of the Google Classroom infrastructure and its configuration of pedagogic dynamics: Google as platform proprietor, setting the ‘rules’ of participation; the API which permits third-party integrations and data interoperability, thereby introducing automation and surveillance into pedagogic practices; and the emergence of new ‘divisions of labour’, as the working practices of school system administrators, teachers and guardians are shaped by the integrated infrastructure, while automated AI processes undertake the ‘reverse pedagogy’ of learning insights from the extraction of digital data. The article concludes with critical legal and practical ramifications of platform operators such as Google participating in education.

'The datafication of teaching in Higher Education: critical issues and perspectives' by Ben Williamson, Sian Bayne and Suellen Shay in (2020) 25(4) Teaching in Higher Education 351-365 comments 

Contemporary culture is increasingly defined by data, indicators and metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometric devices. In a ‘society of rankings’, simplified and standardized metrics act as key reference points for making sense of the world (Esposito and Stark 2019, 15). Beyond conventional statistical practices, the availability of ‘big data’ for large-scale analysis, the rise of data science as a discipline and profession, and the development of advanced technologies and practices such as machine learning, neural networks, deep learning and artificial intelligence (AI), have established new modes of quantitative knowledge production and decision-making (Kitchin 2014; Ruppert 2018). 

Although ‘datafication’ – the rendering of social and natural worlds in machine-readable digital format – has most clearly manifested in the commercial domain, such as in online commerce (e.g. Amazon), social media (Facebook, Twitter), and online advertising (Google), it has quickly spread outwards to encompass a much wider range of services and sectors. These include, controversially, the use of facial recognition and predictive analytics in policing, algorithmic forms of welfare allocation, automated medical diagnosis, and – the subject of this special issue – the datafication of education. 

Education is a particularly important site for the study of data and its consequences. The scale and diversity of education systems and practices means that datafication in education takes many forms, and has potential to exert significant effects on the lives of millions. That education is widely understood as a public good, rather than a commercial enterprise (with some exceptions) also means that the extraction of data from students, teachers, schools and universities cannot be straightforwardly analyzed as another instantiation of ‘surveillance capitalism’, that is, the gathering of the ‘raw material’ of human life en masse for analysis, sale and profit (Zuboff 2019). Instead, the datafication of education needs to be understood and analyzed for its distinctive forms, practices and consequences. Enhanced data collection during mass university closures and online teaching as a result of the 2020 COVID-19 crisis makes this all the more urgent. In this brief editorial introduction to the special issue on ‘The datafication of teaching in higher education’, we situate the papers in wider debates and scholarship, and outline some key cross-cutting themes. 

Measurement matters 

There is of course a very long history to practices, processes and technologies of datafication in which current developments in big data, AI and machine learning need to be situated (Beer 2016). The eighteenth and nineteenth centuries witnessed an outpouring of statistical knowledge production, as everything from industrial manufacturing to the natural world, and from the state of the human population to the workings of the human body itself, was subjected to quantification and increasing numerical management (Bowker 2008; Ambrose 2015). The work of modern government itself came to rely on statistics, as people, goods, territories, processes and problems were all made legible as numbers, and statistical knowledge came to ‘describe the reality of the state itself’ (Foucault 2007, 274) as part of the ‘machinery of government’ (Rose 1999, 213). 

The statistical machinery of the nineteenth- and twentieth-century state is now, in the twenty-first century, shadowed by a vast complex of data infrastructures, platforms, devices, and analytics organizations from across the public, charitable and private sectors, as big data has itself become a new source of knowledge, governance and control (Bigo, Isin, and Ruppert 2019). Social media platforms, web interactions, financial transactions, public surveillance networks, online commerce, business software, mobile phone location services, wearable devices, and even connected objects in the Internet of Things have become key sources of knowledge for those authorities with access to the data they produce (Marres 2017). Governments are increasingly turning to digital services in order to generate detailed information about the populations they govern, including controversial attempts to introduce public facial recognition systems for purposes of individual identification (Crawford and Paglen 2019). Through machine learning, neural nets and deep learning, so-called AI products and platforms can now ‘learn from experience’ in order to optimize their own functioning and adapt to their own use (Mackenzie 2018). Nineteenth- and twentieth-century ‘trust in numbers’ has metamorphosed into a ‘dataist’ trust in the ‘magic’ of digital quantification, algorithmic calculation, and machine learning (Elish and boyd 2018). 

Dataism is a style of thinking that is integrally connected to processes of neoliberalization, as competitive logics and the desire to compare the performance of entities against each other, as if they are competing in markets, have been incorporated into various forms and technologies of measurement. Beer (2016, 31) argues that this period of intensive quantification is governed under a particular neoliberal system of ‘metric power’, and that ‘understanding the intensification of measurement, the circulation of those measures and then how those circulations define what is seen to be possible, represents the most pressing challenge facing social theory and social research today’.

He suggests a number of key themes for understanding metric power (Beer 2016, 173–77). Data and metrics set limits on what can be known and what can be knowable. They define what is rendered visible or left invisible, thereby impacting on how certain practices, objects, behaviours and so on gain value, while others are not measured or valued. Measurement involves classification, sorting, ordering, and categorizing people and things, which defines how they are known and treated. It prefigures judgment: by setting desired aims and outcomes, measurement brings into the present the future it is designed to help achieve. Data-based processes also expand into new tasks, functions and programmes, and intensify their influence. The intensification of measurement leads to forms of authorization and endorsement of certain outcomes, people, actions, systems, and practices, thus marking out what is claimed to be truthful. It also involves increasing automation, which shapes human agency and decision-making – automated systems of computation are taken as objective, legitimate, fair, neutral and impartial, and impact on human judgement.
Finally, metrics induce affective reactions, such as anxiety or competitive motivation, and thereby promote or produce actions, behaviours, and pre-emptive responses by prompting people to perform in ways that can be valued, compared and judged in measurable terms. 

The power of metrics to affect how social and natural worlds are known and compared, and therefore to shape how they are treated and changed, means that measurement matters. Data and metrics do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place. Data practices materialize the competitive neoliberal impulse to ensure efficient market functioning and constant improvement through measurement, the hierarchization of winners and losers, and the attribution of quantitative value. This can be fairly mundane, as when an online retailer recommends future goods to purchase based on past purchasing records and comparison against millions of other shoppers, where the measured market is the source of the recommendation. Media streaming services constantly capture data about consumption habits and feed that back into recommended shows and playlists. The metrics in such cases include favoured genres, time spent listening or watching, artists or shows selected and so on. This may seem fairly banal, yet it is shaping cultural habits and individual tastes. But measurement also matters for even more consequential reasons. It has changed the ways economies function, serving hypercapitalist objectives of making data into a key source of market value (Fourcade and Healy 2017). Surveillance systems such as predictive policing and facial recognition disproportionately focus suspicion on ethnic minority groups and reinforce longstanding structural inequalities in societies (Crawford and Paglen 2019), as judgments are made based on various forms of comparison and prediction.
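The retailer example above can be made concrete with a minimal sketch. This is an invented toy, not any actual retailer's system: the shoppers, baskets and similarity rule are all illustrative assumptions, but it shows how "comparison against millions of other shoppers" becomes a recommendation.

```python
# Illustrative sketch only: recommending goods from other shoppers'
# purchase histories (the "measured market" as the source of the
# recommendation). All data here are invented.

from collections import Counter

purchases = {
    "shopper1": {"kettle", "toaster", "teapot"},
    "shopper2": {"kettle", "toaster", "blender"},
    "shopper3": {"kettle", "teapot"},
}

def recommend(my_items: set, k: int = 2) -> list:
    """Suggest the items most often bought by shoppers whose baskets
    overlap with mine, excluding what I already own."""
    counts = Counter()
    for basket in purchases.values():
        if basket & my_items:                  # any overlap counts as similarity
            counts.update(basket - my_items)   # tally candidate recommendations
    return [item for item, _ in counts.most_common(k)]

print(recommend({"kettle"}))
```

The point the passage makes survives even at this toy scale: the recommendation is derived entirely from measuring and comparing other people's behaviour, and it loops back to shape what the shopper sees and buys next.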

Education has long been subject to historical forms of ‘datafication’ (Lawn 2013), but the quantification, measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is intensifying and expanding rapidly. Higher Education is itself implicated in neoliberalizing forms of metric power, as various technologies of data-based measurement and evaluation impose limits on what is made visible and known, sort people and outcomes into (sometimes hierarchical) categories, establish measurable aims, expand to new tasks, establish what is claimed to be true or valuable, impose automation on decision-making, and affect the ways people feel, act and behave.

In referring to data subjects, the authors argue

 The datafication of human beings affects how they are understood, treated, and acted upon. The concept of the ‘data double’ usefully refers to how digital profiles can be created from the activities of individuals (Raley 2013). These profiles, or shadows, then become the basis for various forms of analysis and calculation, which circle back into individual experiences. To use the social media streaming example, the data double captured inside the database is used to make recommendations, which affects the consumer experience outside the database (Cheney-Lippold 2011). The individual becomes a data subject, defined and characterized algorithmically by being sorted into categories and predicted outcomes. 

The construction of data doubles in education is especially consequential, since anything that is modelled inside the database then affects the potentially life-changing experience of teaching and learning. A prediction of future progress based on past outcomes could radically affect the future prospects of the student by foreclosing curriculum opportunities. Forms of algorithmic education, in other words, deeply affect data subjects. In their paper, Harrison et al. (2020) draw attention to how datafication both affects teaching and learning and shapes subjectivities. They refer to ‘student data subjects’, which are assembled from digital traces of educational activity. Teachers, too, are increasingly known, evaluated and judged through data, and come to know themselves as datafied teacher subjects.
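A minimal sketch can make the "data double" mechanism concrete. This is not the system Harrison et al. describe; the feature names, weights and threshold are all invented for illustration, standing in for whatever opaque model a real platform might use.

```python
# Illustrative sketch only: a toy "data double" assembled from digital
# traces and sorted into a predicted category. Features, weights and
# the cutoff are invented assumptions, not any real edtech model.

from dataclasses import dataclass

@dataclass
class DataDouble:
    """Digital traces logged about one student."""
    logins_per_week: float
    avg_quiz_score: float  # 0-100
    forum_posts: int

def predict_track(d: DataDouble) -> str:
    """Sort the data double into a category that may then shape what
    the real student is subsequently offered."""
    # A crude weighted score standing in for an opaque model.
    score = 0.3 * d.logins_per_week + 0.6 * d.avg_quiz_score + 0.1 * d.forum_posts
    return "advanced curriculum" if score >= 50 else "remedial curriculum"

student = DataDouble(logins_per_week=2, avg_quiz_score=55, forum_posts=1)
print(predict_track(student))  # the prediction, not the person, drives the decision
```

The category assigned inside the database (here, a single string) loops back to foreclose or open curriculum opportunities for the real student, however crude the underlying score.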

This datafication of student and teacher subjects prefigures a potentially profound transformation in how students and teachers understand themselves, and in how they are understood and managed as learners and professionals. As Marachi and Quill (2020) emphasize in this issue, where ‘frictionless’ data transitions are enabled between primary, secondary and tertiary education, and even the employment contexts of individuals, the data subject risks becoming a lifelong ‘shadow’ whose impact may be far from benign. Marachi and Quill call for greater awareness, routine interrogation of data-sharing practices, and critical distance between higher education institutions and ‘edtech’ platform partners that promise ‘enhancement’ through data processing, the constitution of data subjects and ‘personalization’. Such changes may also demand that educators and students develop critical skills in using and evaluating data.

23 April 2020

Analytics

'What’s the Problem with Learning Analytics?' by Neil Selwyn in (2019) 6(3) Journal of Learning Analytics 11–19 comments
This article summarizes some emerging concerns as learning analytics become implemented throughout education. The article takes a sociotechnical perspective — positioning learning analytics as shaped by a range of social, cultural, political, and economic factors. In this manner, various concerns are outlined regarding the propensity of learning analytics to entrench and deepen the status quo, disempower and disenfranchise vulnerable groups, and further subjugate public education to the profit-led machinations of the burgeoning “data economy.” In light of these charges, the article briefly considers some possible areas of change. These include the design of analytics applications that are more open and accessible, that offer genuine control and oversight to users, and that better reflect students’ lived reality. The article also considers ways of rethinking the political economy of the learning analytics industry. Above all, learning analytics researchers need to begin talking more openly about the values and politics of data-driven analytics technologies as they are implemented along mass lines throughout school and university contexts.
'It's My Data! Tensions Among Stakeholders of a Learning Analytics Dashboard' by Kaiwen Sun, Abraham H. Mhaidli, Sonakshi Watel, Christopher A. Brooks, and Florian Schaub in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019) 1-14 comments
Early warning dashboards in higher education analyze student data to enable early identification of underperforming students, allowing timely interventions by faculty and staff. To understand perceptions regarding the ethics and impact of such learning analytics applications, we conducted a multi-stakeholder analysis of an early-warning dashboard deployed at the University of Michigan through semi-structured interviews with the system's developers, academic advisors (the primary users), and students. We identify multiple tensions among and within the stakeholder groups, especially with regard to awareness, understanding, access, and use of the system. Furthermore, ambiguity in data provenance and data quality results in differing levels of reliance on, and concerns about, the system among academic advisors and students. While students see the system's benefits, they argue for more involvement, control, and informed consent regarding the use of student data. We discuss our findings' implications for the ethical design and deployment of learning analytics applications in higher education.
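The kind of early-warning flag the paper studies can be sketched in a few lines. This is a toy assumption, not the University of Michigan dashboard: the features, cutoffs and roster are invented, but the shape of the system is the same.

```python
# Illustrative sketch only: a toy early-warning flag of the kind the
# paper studies. Data, features and cutoffs are invented; this does
# not reproduce the University of Michigan system.

def at_risk(grade_pct: float, assignments_missed: int) -> bool:
    """Flag a student for advisor follow-up."""
    return grade_pct < 60 or assignments_missed >= 3

roster = [
    {"name": "A", "grade_pct": 72.0, "assignments_missed": 0},
    {"name": "B", "grade_pct": 55.0, "assignments_missed": 1},
    {"name": "C", "grade_pct": 81.0, "assignments_missed": 4},
]

flagged = [s["name"] for s in roster
           if at_risk(s["grade_pct"], s["assignments_missed"])]
print(flagged)  # advisors see the flags; the students here do not
```

Even this toy version surfaces the tensions the interviews identify: the cutoffs are invisible to students, and a data-quality error upstream in `grade_pct` would silently change who gets flagged.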
'Big Data in Education. A Bibliometric Review' by José-Antonio Marín-Marín, Jesús López-Belmonte, Juan-Miguel Fernández-Campoy and José-María Romero-Rodríguez in (2019) 8(8) Social Sciences 223 comments
The handling of a large amount of data to analyze certain behaviors has gained great popularity in the decade 2010–2020. This phenomenon has been called Big Data. In the field of education, the analysis of this large amount of data, generated to a greater extent by students, has begun to be introduced in order to improve the teaching–learning process. In this paper, the objective was to analyze the scientific production on Big Data in education in the databases Web of Science (WOS), Scopus, ERIC, and PsycINFO. A bibliometric study was carried out on a sample of 1491 scientific documents. Among the results, the increase in publications in 2017 and the emergence of certain journals, countries and authors as references in the subject matter stand out. Finally, potential explanations for the study findings and suggestions for future research are discussed.
'Ethical challenges of edtech, big data and personalized learning: twenty-first century student sorting and tracking' by Priscilla M. Regan and Jolene Jesse in (2019) 21(3) Ethics and Information Technology 167-179 comments
With the increase in the costs of providing education and concerns about financial responsibility, heightened consideration of accountability and results, elevated awareness of the range of teacher skills and student learning styles and needs, more focus is being placed on the promises offered by online software and educational technology. One of the most heavily marketed, exciting and controversial applications of edtech involves the varied educational programs to which different students are exposed based on how big data applications have evaluated their likely learning profiles. Characterized most often as ‘personalized learning,’ these programs raise a number of ethical concerns especially when used at the K-12 level. This paper analyzes the range of these ethical concerns arguing that characterizing them under the general rubric of ‘privacy’ oversimplifies the concerns and makes it too easy for advocates to dismiss or minimize them. Six distinct ethical concerns are identified: information privacy; anonymity; surveillance; autonomy; non-discrimination; and ownership of information. Particular attention is paid to whether personalized learning programs raise concerns similar to those raised about educational tracking in the 1950s. The paper closes with discussion of three themes that are important to consider in ethical and policy discussions. 
The last 10 years have witnessed an explosion of new educational technologies (edtech), some touting amazing potential to reach the next generation with new learning methods that will teach not only content, be it history, mathematics or engineering, but also intra- and inter-personal competencies, such as resilience and teamwork. The edtech sector is actively marketing these learning tools, especially to elementary and secondary schools, although the efficacy of technology enhanced learning is still under investigation. Edtech applications have appeared at a political, policy, and commercial moment favorable to the capabilities and advantages offered. The increase in the federal, state and local costs of providing K-12 education and government and voter concerns about financial responsibility generate interest in new techniques that promise to improve efficiency of educational operations. Focus on student achievement and the rankings of US schools with those of other countries has led to heightened consideration of accountability and results. Elevated awareness of the range of teacher skills, as well as variations in student learning styles and needs, has drawn attention to the value of understanding unique characteristics of students and teachers. As a result, the K-12 school environment is conducive to the promises offered by online software and edtech. Edtech companies recognize the huge market offered by K-12 education—an arena that has a vast and renewable population base, but also a particularly vulnerable population involving minor children who experience a range of developmental milestones during the K-12 years. 
This uptick in adoption of a variety of edtech applications at the K-12 level has also generated myriad policy debates, including proposed updates to existing federal laws and the introduction and adoption of numerous new state laws. Much of the policy debate is subsumed under the label of “privacy,” although there are a range of ethical issues associated with edtech applications that have not received the same amount of consideration as privacy, and some issues have been conflated with privacy. Privacy is certainly an issue, as the use of edtech entails collection of more, and more granular, information about students, teachers, and families, as well as administrative details regarding the functioning of educational institutions. Edtech applications enable sophisticated searching and analysis of collected information linking changes in the education arena to the larger debates about the challenges of big data generally. One of the most problematic aspects of edtech, and least addressed from a policy perspective however, involves the capability of edtech to deliver more personalized learning based on the needs and skill levels of individual students. 
Personalized learning applications are currently among the most heavily marketed, exciting and controversial applications of edtech. These applications evaluate students' likely learning profiles, using big data to categorize individual learning styles and then direct appropriate learning activities to those students. Known under several labels – personalized learning, student-centered learning, and adaptive learning – they are advocated by edtech companies and foundations, including the Bill and Melinda Gates Foundation and the Chan Zuckerberg Foundation. In 2016, 97% of school districts surveyed by the Education Week Research Center indicated they were investing in some form of personalized learning (Herold 2017). Although exactly what types of programs constitute personalized learning is not always clear, and whether and how much these programs incorporate edtech is hard to determine, RAND in the third of its reports on personalized learning cautions that the evidence for the effectiveness of personalized learning is currently weak and needs more research in a range of school settings (Pane et al. 2017). 
A critical ethical concern raised with personalized learning is whether such programs constitute tracking and sorting of students that might be considered discriminatory. The history of tracking in the United States is especially problematic, suggesting the need for caution when sorting children. Student tracking in the 1950s resulted in classrooms that were often divided by race, ethnicity, gender and class. Such tracking was glaringly obvious to parents, students, teachers and administrators—and thus the implications and wisdom of tracking became subjects of policy and social debate. In contrast, the student tracking that appears to be occurring in 2018 is hidden from the view of students, parents and even teachers as it takes place behind computer screens. The extent to which students might recognize they are being tracked through computer programs, and the impact that might have on learning outcomes is rarely discussed or researched. Similarly, the extent to which edtech software embeds subtle discrimination is also unclear, despite the current dialog about algorithmic bias. 
This article seeks to first analyze the range of ethical issues raised by the increased use of edtech and big data in school systems throughout the United States—how these issues are framed; whether the major concerns are receiving the appropriate level of attention and analysis; and what policy implications there are around how issues are being presented. Second, the paper briefly explores policy responses to big data educational innovations—what discourse has resulted; and what policy trends are emerging. Third, the paper is particularly interested in personalized learning systems and whether and how they might incorporate categories such as race, gender, ethnicity, and class, as well as their intersections, and whether discussions about these systems mirror the concerns of the policy and social debates in the 1950s about educational tracking. Finally, the paper closes with some themes that are important to consider in ethical and policy discussions addressing personalized learning systems.

01 November 2018

Law Lectures and Edutech

An article in a UK academic journal is a salutary reminder that Australian law schools have embraced new technologies ahead of overseas peers.

'Lecture recording: a new norm' by Michael J. Draper, Simon Gibbon and Jane Thomas in (2018) 52(3) The Law Teacher 316-334 comments
Classroom recording systems (systems that capture audio or video footage of a taught session) are being adopted in universities globally, encouraged in part by studies suggesting that use of recording technology is associated with enhanced student engagement and perceptions of a course. Notwithstanding increased adoption as a result of perceived benefits, only approximately 10% of higher education institutions have adopted comprehensive lecture recording systems. This study considered the benefits and advantages of classroom recording systems. Academic concerns over student attendance and use of recordings are discussed, with the implications for teachers, cognisant of the synergistic relationship between teachers, students and their learning. 
The authors go on
 Swansea University is a dual campus institution with 17,445 students (2015/16), with approximately 5000 students at a new Bay campus. Lecture recording was introduced at Swansea in September 2015 on the Bay campus in the Colleges of Engineering and Management, extending to full implementation in 2016/17. Implementation on the original Park campus was piloted as part of this study in September 2015 to explore the potential to extend recording to areas not related to STEM (science, technology, engineering and mathematics). The study sought to explore the potential for use by staff on this campus, student use, the value of recording for students, and the impact on attendance. 
The Law Trial ran within the College of Law and Criminology at Swansea University over the academic years 2015 to 2017. The Law School was selected on account of its size and lack of exposure to lecture recording previously. This research is focused on a University sanctioned trial of the use of lecture recording within a programme of the College of Law and Criminology, for which the usage results, student and staff feedback would inform the University Learning and Teaching Committee to recommend, or not, a wider rollout of lecture recording at the University. This piece offers a two-year perspective as part of the wider literature rather than a definitive perspective. That said, the size of the study does not detract from the potential value of exploring the wider pedagogic impact of learning support and how best to respond to it. ... 
Students may benefit from classroom recording systems in a number of ways. They are able to:
  • revisit concepts or topics and reinforce understanding in preparation for, or as part of, independent study and other class contact; 
  • review discussion and material prior to in-course or end-of-module assessments; 
  • participate in blended, flipped classroom and online course delivery, and accommodate different learning styles; 
  • manage the essential processing required to learn concepts – processing demands decrease when multimedia messages are presented in self-paced segments rather than as a continuous flow of information; 
  • view missed content due to illness, timetable clashes or other external factors. 
Students themselves also believe that having access to recorded lectures helps learning.
They conclude
A key issue in concluding this account is to consider what teachers can and should know about lecture recording from a pedagogic perspective in terms of the evidence, their own involvement and how to improve their practice. We may need to look beyond the flipped classroom approach to a wider pedagogic perspective on planning and pedagogic practice/methodology but still focused on learner-centred classroom activity and the value of recorded lectures.
For the more experienced teacher, while lecture recording has been available for some time, the sector has responded variably to enable teachers to keep pace with the pedagogic impact and potential issues within the process. For example, addressing course design to harness lecture recording to better effect and ensuring that learning outcomes reflect the common use of recording may be issues to explore. There may also be broader resource issues that institutions will have to address.
As our use of recording advances, we need to be developing other more sophisticated means of ensuring the integrity and dynamism of the teacher–student relationship, individually and collectively, and the discriminating use of policy.
The study demonstrates useful outcomes in relation to the non-STEM use of lecture recording, applicable more widely institutionally. This reinforces the positive conclusions of other recent studies and provides a context for flexible policy development.
Future research could include:
  • wider institutional perspective; 
  • proactive staff engagement to explore and deal with concerns which persist in the face of growing evidence to support and promote lecture recording; 
  • impact on attendance – still mixed perspectives; 
  • detailed comparison with STEM; 
  • investigation of how students can use note-taking, revision and language support; 
  • exploration of the visual components and the potential for wider contribution.
The key outcome is that institutions need to work to create conducive learning environments where staff and students can make best use of lecture recording across disciplines to enhance the student experience. The apparent trend towards substituting recordings for attendance and participation could be an artefact of students' lessening familiarity with regularised attendance and structured learning as previously delivered in higher education. Familiarity with technology may also contribute to a changed mode of learning in these times of wide and deep digital availability. These developments can be seen as part of a progressive shift towards using recordings in a surface way, and acknowledging this enables pedagogic adaptation. While this is a small study, this slight trend may prove to be an issue reflected in subsequent investigation.
This contribution extends the existing perspective and adds to the body of evidence supporting the pedagogic use of lecture recording for teachers and students in times when student expectations are high. Students value such resources and inclusivity necessitates their widespread adoption.

03 February 2017

InBloom

'The Legacy of inBloom' (Data &amp; Society Working Paper 02.02.2017) by Monica Bulger, Patrick McCormick and Mikaela Pitcan considers a 2014 educational trainwreck.

The authors ask
Why Do We Still Talk About inBloom?
Many people in the area of educational technology still discuss the story of inBloom. InBloom was an ambitious edtech initiative funded in 2011, launched in 2013, and closed in 2014. We asked ourselves why the story of inBloom is important, and conducted a year-long case study to find the answer. For some, inBloom's story is one of contradiction: the initiative began with unprecedented scope and resources, and yet its decline was swift and public. What caused a $100 million initiative with technical talent and political support to close in just one year? A key factor was the combination of the public's low tolerance for risk and uncertainty and the inBloom initiative's failure to communicate the benefits of its platform and achieve buy-in from key stakeholders. InBloom's public failure to achieve its ambitions catalyzed discussions of student data privacy across the education ecosystem, resulting in student data privacy legislation, an industry pledge, and improved analysis of the risks and opportunities of student data use. It also surfaced the public's low tolerance for risk and uncertainty, and the vulnerability of large-scale projects to public backlash. Any future U.S. edtech project will have to contend with the legacy of inBloom, and so this research begins to analyze exactly what that legacy is.
The inBloom Story
InBloom was a $100 million educational technology initiative primarily funded by the Bill and Melinda Gates Foundation that aimed to improve American schools by providing a centralized platform for data sharing, learning apps, and curricula. In a manner that has become a hallmark of the Gates Foundation’s large scale initiatives, inBloom was incredibly ambitious, well-funded, and expected to deliver high impact solutions in a short time frame. The initiative aimed to foster a multi-state consortium to co-develop the platform and share best practices. It intended to address the challenge of siloed data storage that prevented the interoperability of existing school datasets by introducing shared standards, an open source platform that would allow local iteration, and district-level user authentication to improve security. By providing a platform for learning applications, founders of inBloom set out to challenge the domination of major education publishers in the education software market and allow smaller vendors to enter the space. Ultimately, the initiative planned to organize existing data into meaningful reporting for teachers and school administrators to inform personalized instruction and improve learning outcomes.
The initiative was initially funded in 2011 and publicly launched in February, 2013. What followed was a public backlash over inBloom’s intended use of student data, surfacing concerns over privacy and protection. Barely a year later, inBloom announced its closure. Was this swift failure a result of flying too close to the sun, being too lofty in ambition, or were there deeper structural or external factors?
To examine the factors that contributed to inBloom’s closure, we interviewed 18 key actors who were involved in the inBloom initiative, the Shared Learning Infrastructure (SLI) and the Shared Learning Collaborative (SLC), the latter of which were elements under the broader inBloom umbrella. Interview participants included administrators from school districts and state-level departments of education, major technology companies, former Gates Foundation and inBloom employees, parent advocates, parents, student data privacy experts, programmers, and engineers.
Co-occurring Events
The inBloom initiative occurred during a historically tumultuous time for the public understanding of data use. It coincided with Edward Snowden’s revelations about the NSA collecting data on U.S. civilians sparking concerns about government overreach, the Occupy Wall Street protests surfacing anti-corporation sentiments, and data breaches reported by Target, Kmart, Staples, and other large retailers. The beginnings of a national awareness of the volume of personal data generated by everyday use of credit cards, digital devices, and the internet were coupled with emerging fears and uncertainty. The inBloom initiative also contended with a history of school data used as punitive measures of education reform rather than constructive resources for teachers and students. InBloom therefore served as an unfortunate test case for emerging concerns about data privacy coupled with entrenched suspicion of education data and reform.
What Went Wrong?
InBloom did not lack talent, resources, or great ideas, but throughout its brief history, the organization and the product seemed to embody contradictory business models, software development approaches, philosophies, and cultures. There was a clash between Silicon Valley-style agile software development methods and the slower moving, more risk-averse approaches of states and school districts. At times, it was as though a team of brilliant thinkers had harvested every “best practice” or innovative idea in technology, business, and education—but failed to whittle them down to a manageable and cohesive strategy. Despite the Gates Foundation’s ongoing national involvement with schools, the inBloom initiative seemed to not anticipate the multiple layers of politics and bureaucracy within the school system. Instead there were expectations that educational reform would be easily accomplished, with immediate results, or that – worst case – there would be an opportunity to simply fail fast and iterate.
However, the development of inBloom was large-scale and public, leaving little room to iterate or quietly build a base of case studies to communicate its value and vision. Thus, when vocal opposition raised concerns about student data use potentially harming children’s future prospects or being sold to third parties for targeted advertising, the initiative was caught without a strong counter-position. As opposition mounted, participating states succumbed to pressure from advocacy groups and parents and, one by one, dropped out of the consortium.
The Legacy of InBloom
Although inBloom closed in 2014, it ignited a public discussion of student data privacy that resulted in the introduction of over 400 pieces of state-level legislation. The fervor over inBloom showed that policies and procedures were not yet where they needed to be for schools to engage in data-informed instruction. Industry members responded with a student data privacy pledge that detailed responsible practice. A strengthened awareness of the need for transparent data practices among nearly all of the involved actors is one of inBloom’s most obvious legacies.
Instead of a large-scale, open source platform that was a multi-state collaboration, the trend in data-driven educational technologies since inBloom’s closure has been toward closed, proprietary systems, adopted piecemeal. To date, no large-scale educational technology initiative has succeeded in American K-12 schools. This study explores several factors that contributed to the demise of inBloom and a number of important questions: What were the values and plans that drove inBloom to be designed the way it was? What were the concerns and movements that caused inBloom to run into resistance? How has the entire inBloom development impacted the future of edtech and student data?

12 May 2012

e-fix

I'm reading Out of Hours ... e-Teaching leadership: planning and implementing a benefits-oriented costs model for technology enhanced learning [PDF], a report by Belinda Tynan, Yoni Ryan, Leone Hinton & Andrea Lamont Mills for the ALTC (now defunct as a result of short-sighted cost-cutting in tertiary education).

The authors question the notion of e-learning or e-teaching as a quick fix for the contemporary enterprise university, commenting that
Our main conclusion is, unsurprisingly, that workload associated with online and blended teaching is ill-defined and poorly understood. As more new technologies impact on the sector more generally, it is timely to reconsider and audit practices to ensure future innovation and sustainability of work practices.
The report provides several propositions and recommendations, as follows, on the basis that
If teaching online is to become sustainable, attention needs to be paid urgently to how staff workloads are constructed. It is no longer possible to work in ways that belong to a transmission era of university teaching. As access and connectivity penetrate deeply into our personal, transactional, work and learning lives, interactivity and constructivist pedagogies must be considered routine, not ‘add-ons’ in teaching, and must therefore be reflected in prospective workload models which recognise the higher quantum of teaching tasks associated with e-teaching, and students’ needs for a teacher to ‘be there’. ...
Our research has investigated a topic that is of concern to numerous stakeholders. The approach taken in this project is robust and the conclusions and recommendations fairly represent the voices of staff across four institutions who experience online learning as a key aspect of their work. While some may argue that these universities are not representative of the sector, the findings will no doubt ring true to many. As the higher education sector moves toward an increasingly competitive market place, the inclusion of more diverse students and the increasing use of technology to serve student learning, online workload needs to be reconsidered.
Proposition: Teaching online and in blended modes creates different types and numbers of work activities that require consideration when developing workload models.
Recommendation: Acknowledge that ‘flexibility’ costs, and will impact fixed, variable and opportunity costs.
Proposition: Staff are generally supportive, even enthusiastic, about teaching online. They have concerns about appropriate feedback to students, changing technologies, adequate infrastructure, professional development, access to support staff, large classes and assessment. At times they are not sure if what they do ‘online’, in the time that they allocate or over-allocate, is good enough to support quality learning outcomes. Some academics do not have the time to update materials, develop innovative approaches to learning, take up professional development opportunities, or attend to research demands.
Recommendation: Staff should be enabled to participate actively in their professional development and have their work recognised and valued within performance assessment, development and review. Institutions should ensure business processes and infrastructure are adequately resourced.
Proposition: Workload models are not well-understood by staff teaching online and not adequately broken into specific components, nor implemented transparently and consistently across school areas. Workload models do not reflect what staff perceive they do. Many staff do more than is required and are not prepared to compromise quality of materials or interaction.
Recommendation: Institutional management perceptions of teaching online should be more closely aligned with the reality of the workload as perceived by teaching staff within current workload models. Staff require more transparent participation and negotiation about appropriate workload models.
Proposition: Staff perceptions are that EFTSL is not a clear measure for allocating workload when teaching online. These workload formulas fail to take into account variable costs, for example, multimedia delivery formats; other support such as educational development, IT equipment (software and hardware); additional staff; staff development; opportunity costs (early adopters and innovation); diverse student cohorts; the advent of Work Integrated Learning; committee work; the plethora of additional ‘coordinator tasks’ such as ‘Study Abroad Convenor’.
Recommendation: DEEWR in tandem with Universities Australia and other agencies should initiate a multi-level audit of teaching time and WAMs. This would accurately identify the roles and responsibilities of teachers, and their actual time using various applications and their perceived cost-benefit, in order for universities to develop more appropriate yet efficient workload models.
Proposition: The appropriation and use of technology into curriculum requires a recasting of the role of academics within universities.
Recommendation: Since almost all staff are involved in teaching online, appropriate selection criteria, probation criteria, performance indicators and a commitment to professional development in e-teaching by institutions and their staff are imperative.
Proposition: Teaching online has numerous definitions and perceived understandings. There is an inconsistent terminology and staff cannot articulate or communicate the multitude of issues involved in their teaching.
Recommendation: Define clearly what it means in each program to teach online for staff, learn online for students and manage staff allocation within higher education institutions so that all stakeholders as well as Finance Officers can participate in workload model development.
Proposition: 2011 has seen a surge of concern about the impact of online purchasing (especially from overseas) on the Australian economy, with bricks and mortar businesses being threatened. Many see this as a precursor to online services supplanting physical service industries, including higher education; among these are some Vice Chancellors (Campus Review, 27 June 2011) and the majority of IT executives, including Bill Gates. Others are more sanguine, envisaging a future where the campus still attracts school leavers seeking a vestigial ‘university experience’, through a blended education of independent learning online plus some face-to-face interactions, but where the majority of adults transact their learning ‘at a distance’. For the moment at least, the blended model remains the predominant ‘delivery’ mode in higher education, despite an increasing number of fully online programs.
Recommendation: Develop Workload Allocation Models (WAMs) which acknowledge the greater number of tasks associated with a blended pedagogy, as indicated in table 1 in Part 2, reproduced below.
If and until a wholly disaggregated model of academic work (separating the discrete tasks of content expert, educational developer, multimedia designer, graphic designer, tutor and marker) is adopted (as is suggested by successful models such as in the OUUK), institutions must acknowledge in their workload models the greater number of tasks associated with online and blended development and delivery. Teaching workloads need to be adjusted to acknowledge the greater number of tasks associated with new technologies being incorporated into education systems.
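The report's central point here — that EFTSL-style formulas miss the extra tasks blended and online teaching creates — can be illustrated with a toy calculation. Everything below (the task names, the hour figures, the `weekly_load` helper) is hypothetical, a minimal sketch of a task-based workload model rather than anything taken from the report itself:

```python
# Hypothetical sketch of a task-based Workload Allocation Model (WAM):
# count the hours of each discrete teaching task rather than applying a
# single per-student (EFTSL-style) multiplier. All figures illustrative.

FACE_TO_FACE_TASKS = {
    "lecture delivery": 2.0,   # hours per teaching week
    "tutorial": 1.0,
    "marking": 1.5,
}

# Extra tasks the report associates with blended/online teaching,
# with invented hour figures for the sake of the comparison.
BLENDED_EXTRA_TASKS = {
    "discussion-forum moderation": 1.5,
    "online content updates": 1.0,
    "individual LMS/e-mail feedback": 2.0,
}

def weekly_load(tasks: dict[str, float]) -> float:
    """Total weekly teaching hours for a set of tasks."""
    return sum(tasks.values())

f2f = weekly_load(FACE_TO_FACE_TASKS)
blended = weekly_load({**FACE_TO_FACE_TASKS, **BLENDED_EXTRA_TASKS})
print(f"face-to-face: {f2f:.1f} h/week, blended: {blended:.1f} h/week")
```

Even with invented numbers, the structure makes the report's argument visible: a model that enumerates tasks registers the additional load of blended delivery, whereas a flat per-EFTSL allocation leaves it invisible.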

Greater use should be made of multimedia resources which have already demonstrated their efficacy for teaching complex/threshold/key concepts, so that individual teachers do not have to develop resources on core concepts in their discipline. However, the work involved in locating these resources, and then contextualising them to particular professional and institutional programs should not be under-estimated. A one size unit on Statistics 101 does not fit all programs. For example, of the universities involved in this study, ACU subjects must contain a specific community engagement or social justice component, so any ‘core unit’ curriculum must be adapted.