Showing posts with label Law Teaching. Show all posts

30 September 2024

Authenticity

'Authentic assessment: from panacea to criticality' by Tim Fawns, Margaret Bearman, Phillip Dawson, Juuso Henrik Nieminen, Kevin Ashford-Rowe and Keith Willey in (2024) Assessment and Evaluation in Higher Education comments 

 Authentic assessment contrasts with ‘traditional’ forms of assessment in ways that appear to be significant and, largely, positive. However, authentic assessment is often invested with superpowers, including the ability to: surmount academic integrity concerns (Sotiriadou et al. 2020); make assessment more inclusive (Nieminen 2024); and ensure relevancy to future personal, social and professional contexts (Ashford-Rowe, Herrington, and Brown 2014; Villarroel et al. 2018; Ajjawi et al. 2020; McArthur 2023). This view finds its way into assessment policies and teaching and learning resources that uncritically blend authenticity with inclusion, integrity or preparation of graduates for the future (e.g. University of Reading 2023; University of Oxford 2024). In this paper, we argue that the promotion of authentic assessment as an answer to a broad range of complex problems unhelpfully positions it as a panacea. 
 
Authentic assessment is often positioned as the ‘silver bullet’ solution for critical, urgent and widespread challenges in our current higher education environment (Ajjawi et al. 2023). While literature supports some potential benefits of authentic assessment in relation to challenges of cheating, inclusion, and the application and future relevance of learning, in principle (Ashford-Rowe, Herrington, and Brown 2014; Villarroel et al. 2018; Sokhanvar, Salehi, and Sokhanvar 2021), there is very limited evidence of how, and the extent to which, authentic assessment actually does this in practice, or the relationship between authenticity and these other concerns (Ajjawi et al. 2023). Indeed, the review by Villarroel et al. (2018) showed that the vast majority of authentic assessment studies do not feature a clear model or practical guidelines for authentic assessment. 
We have concerns that the label ‘authentic assessment’ is sometimes applied without sufficient interrogation of how aspirations of authenticity relate to broader contexts and purposes of assessment. This worry has a historical basis: higher education discourse has seen a range of panacea concepts come and go. Successive educational technologies – including print, radio, television, online, tablets, MOOCs, and now artificial intelligence (AI) – have been associated with ‘magical thinking’ around their capacity to solve complex educational problems (Cuban and Jandrić 2015), with promises of what should happen greatly outstripping the reality of what does happen (Selwyn 2013). Pedagogical innovations – such as student-centredness, constructive alignment, and active learning – have been suggested in response to problems of student learning, engagement and motivation (e.g. Freeman et al. 2014). Even higher education itself is commonly portrayed as a panacea for various global issues such as poverty and unemployment (see, e.g. Ostrowicka 2022). In each case, a concept or label covers over the need for nuanced negotiation and integration of new approaches into particular contexts. Similarly, where the label ‘authentic assessment’ is treated as sufficient explanation for what is actually a complex approach to assessment design and implementation, it can become a distraction (Arnold and Croxford 2024) or even a thought-terminating cliché (Lifton 1961) that hampers important conversations and considerations of the tensions between multiple purposes and practicalities. 
 
In this paper, we examine the relationship between authenticity in assessment and three ‘problems’ it is often purported to address: preparing graduates for their futures; cheating; and inclusion. We have chosen these challenges because of their complexity, their significance, and the primacy of authentic assessment as a solution within current educational discourse. In our discussions, we consider how authenticity can be used as a principle alongside or within more targeted approaches to different assessment purposes. We argue that, if we think beyond authentic assessment as a form of assessment to conceive of authenticity as just one aspect to thoughtfully and judiciously consider within the design, we can more clearly see and address real problems and purposes of assessment and higher education more broadly.

15 August 2024

Ethics

'“Just teach them the law!”: the ethics of value inculcation within legal education' by Alex Green in (2023) 57(3) The Law Teacher comments 

 To what extent should law teachers be permitted to advance controversial ethical, moral or political views as part of the LLB curriculum? This short paper grapples with that question by defending the ethical permissibility of such behaviour subject to the important proviso that it does not cause students “pedagogical harm”. In reaching this conclusion, three alternative views are considered and dismissed, each of which seeks either to eliminate value inculcation entirely or to restrict its scope to the moral-political values currently immanent within established law. The approach taken is argumentative, drawing upon analytical philosophy, with each contested and contestable view being presented in propositional form. Ultimately, it is concluded that value inculcation cannot be avoided within legal education and that, given this fact, the question becomes which values law teachers have a responsibility to advance. It is contended that this judgement, fraught though it might be for various reasons, is best left to individual teachers and that, for this reason among others, a permissive “no-harm” approach to value inculcation best justifies current pedagogical practices.

15 July 2024

Feedback

'What’s the use of being nice? Characteristics of feedback comments that students intend to use in improving their work' by David Playfoot, Ruth Horry and Aimee E Pink in (2024) Assessment and Evaluation in Higher Education comments 

Feedback is an integral part of the learning process for students (Hyland 2013). Providing feedback on student work is time-consuming for teaching staff (Gibbs and Simpson 2004) and a lot of effort is expended in trying to provide high-quality feedback (e.g. Pitt and Norton 2017; Brooks et al. 2021). In spite of this, students in the UK rate feedback as one of the aspects of their university experience with which they are the least satisfied on the National Student Survey (Bell and Brooks 2017). It should be noted that the fact that students are the least satisfied with this aspect of their course does not indicate that the majority of students are dissatisfied, just that satisfaction scores are lower than for other areas. Similar patterns are also seen in the Course Experience Questionnaire used to gauge student satisfaction in Australia (Quality Indicators for Learning and Teaching 2023). Research has also shown that students often do not act upon the feedback that they are given because they do not consider it to be useful (McGrath, Taylor, and Pychyl 2011). As a consequence, a key goal of research in recent years has been to examine what affects the likelihood that feedback comments are perceived positively by the students that receive them. The current paper outlines three studies which have attempted to determine the characteristics of comments that students believe that they would use to further develop their work. 

The existing literature considers a variety of factors that may contribute to feedback being useful. There are already several excellent reviews and meta-analyses related to feedback practices (e.g. Wiliam 2018; Wisniewski, Zierer, and Hattie  2019; Van der Kleij and Lipnevich 2021; Winstone and Nash 2023). Despite this, there is still no clear consensus as to the characteristics of effective feedback – the effect sizes revealed by the meta-analyses are widely variable from paper to paper (Wiliam 2018). Part of the problem is likely to be that assessment and feedback practices are heavily constrained by the policies of the university in which a study is conducted, by the principles of assessment and programme design, and by individual differences between the students who receive the feedback (Price, Handley, and Millar 2011; Evans 2013; Ajjawi et al. 2022). This has led to claims that there can be no overarching ‘gold standard’ of feedback because the contextual factors are so influential (Krause-Jensen 2010). Nevertheless, we argue that there are likely to be underlying principles that apply to effective feedback; the implementation of these principles may be moderated by institutional influences or specific student populations, but they will provide a good foundation on which to build. In what follows, we outline the key characteristics of effective feedback identified in the literature and investigated in our own studies. To simplify our discussions, we will consider potential characteristics of feedback under umbrella categories, offering examples of the way that these categories have been operationalised in previous studies. 

The first broad category can be referred to as ‘usability’, which subsumes feedback which is clear (Hattie and Timperley 2007; Ferguson 2011; Price, Handley, and Millar 2011; Li and De Luca 2012; Fong et al. 2018) and constructive (Lizzio and Wilson 2008; Dawson et al. 2019; Henderson et al. 2019). Ferguson’s (2011) study, for example, surveyed 566 students at an Australian university and reported that good feedback should be clear and unambiguous as well as having an explicit connection to the marking criteria for the assignment. Dawson et al.’s (2019) study identified that clarity and constructiveness (and indeed ‘usability’) were key themes among their 400 student participants. This is not surprising – for feedback to be effective, it must be acted upon (Boud and Molloy 2013); and in order to act upon it, the student must understand what they are supposed to do, and it must be actionable (Ryan et al. 2021). In a similar vein, adopting the Transparency in Learning and Teaching framework (TILT e.g. Winkelmes 2023) has been shown to result in improvements across a variety of metrics of the student experience. TILT aims to make communication between students and teachers clear, and can be applied to all aspects of teaching practice (see https://tilthighered.com/tiltexamplesandresources for examples). Transparency as to why students are undertaking tasks and how their work is to be graded has been shown to improve student perceptions of assignments and feedback, as well as improving the quality of the work (Winkelmes et al. 2015). 

Both clarity and constructiveness were therefore characteristics of effective feedback that we considered in the current study. In addition, we included ‘helpfulness’ in our examination of feedback. This was motivated by the fact that large-scale student satisfaction metrics used to rank universities in the UK (the National Student Survey) ask final-year undergraduates to indicate whether the feedback that they received during their courses was helpful. Helpfulness could be considered as conceptually similar to constructiveness or usability, but to our knowledge there is no empirical evidence to support or refute this interpretation as it pertains to student perceptions of assignment feedback. We sought to gain this evidence as part of the current study. 

A second umbrella category of feedback characteristics can be referred to as ‘niceness’. A large body of research has identified that students prefer to receive feedback that is supportive (Xu and Carless 2017; Carless and Winstone 2023), encouraging (Abramowitz, O’Leary, and Rosén 1987; Lizzio and Wilson 2008), motivating (Henderson et al. 2019) and has a positive tone (Winstone et al. 2016; Dawson et al. 2019). Tone, in this case, refers to whether the feedback is framed in a positive or a negative manner. Dawson et al. (2019) demonstrated that feedback that appears overly critical can demotivate students and is unlikely to be used, while Winstone et al. (2016) reported that positively framed feedback is more likely to be acted upon. All of these characteristics of feedback reflect the fact that receiving comments on assignments can have an emotional impact on students (Weaver 2006; Parker and Winstone 2016). Ultimately, students will be less likely to engage with feedback that makes them feel demotivated (Ball et al. 2009) and thus the feedback will not achieve the desired effect. 

In the current work, we asked participants to rate real feedback comments (received by other students in previous academic years) for clarity, constructiveness, helpfulness, encouragement, supportiveness, motivational value and tone to explore the interrelation of these characteristics. Previous studies have considered these factors but not all at the same time and hence it is possible that they are not distinct. Understanding the interrelations between these perceptions might help to understand some of the inconsistencies of previous findings and meta-analyses. Further, as outlined briefly above, there are multiple factors that may influence the effectiveness of a feedback comment. In all cases, though, the effectiveness of a feedback comment is contingent on the recipient engaging with the feedback and acting upon it. For this reason, we asked our participants to rate their ‘intention to use’ each of the feedback comments, imagining that they had received them on their own work. Previous studies have examined student preferences relating to feedback or changes in attainment following feedback (Winstone and Nash 2023). We argue that what students prefer is important information for instructors, but that preference does not guarantee that feedback will foster improvement in future assignments (Jonsson 2013). For example, feedback which is effusive is likely to be well-received but will not be likely to include the necessary information to allow the student to capitalise on what they did well or to correct what they did not (Holmes and Papageorgiou 2009). Thus, knowing the characteristics of feedback comments which are likely to be acted upon is key.

25 June 2024

edTech

'Edtech in Higher Education: Empirical Findings from the Project ‘Universities and Unicorns: Building Digital Assets in the Higher Education Industry’ by Janja Komljenovic, Morten Hansen, Sam Sellar and Kean Birch (published by the Centre for Global Higher Education, Department of Education, University of Oxford) comments 

Higher education (HE) is by now thoroughly digitalised. Universities use a variety of digital products and services to support their operations. The educational technology (EdTech) industry has been expanding in the past decade, while investors have become important actors in the field. This report offers findings from the ESRC-funded research project ‘Universities and Unicorns: Building Digital Assets in the Higher Education Industry’ (UU), which investigated new forms of value in digitalised HE as the sector engages with EdTech providers. ... 

The project was conducted between 1 January 2021 and 30 June 2023. It investigated new forms of value in digital and digitalised higher education (HE) as the sector engages with educational technology (EdTech) providers. The project was especially interested in digital user data and data operations. We followed three groups of actors: universities, EdTech start-up companies, and investors in EdTech. 

Our study of universities focused on understanding their: digitalisation strategies and practices; digital ecosystems and collaborations with EdTech companies; attitudes towards and experiences with EdTech companies; user data operations and data outputs; and key challenges with digitalisation. 

Our study of EdTech start-up companies focused on understanding: development of products and services; business models and strategies; how products are datafied and their data operations; how user data is made valuable; experiences and relations with universities; experiences and relations with investors; and challenges they are facing in their work and growth. 

Our study of investors focused on understanding: their views of HE and the future of the sector; the role that EdTech should play in this future; their beliefs about the value of user data; their investment theses, strategies and activities; and their experiences and relations with the EdTech and HE sectors. 

Understanding EdTech relationally, and bringing these groups together, allowed us to gain particular insights into the digitalisation of HE and its political economy. We aimed to trace the flow of ideas, strategies, and actions between these actors and to understand how and why the EdTech industry is developing as it is. 

Our conceptual approach centred on rentiership and assetisation. The global economy is increasingly characterised by rentiership: the move from creating value via producing and selling commodities in the market to extracting value via controlling access to assets. In the digital economy, rentiership is often exercised by controlling digital platforms and pursuing revenues associated with platforms, such as collecting and monetising digital data extracted via these platforms. Users become valuable through their engagement with the platform and are made visible through various user metrics. Emerging work on assetisation in education argues that this is a productive way to understand the impact of the privatisation, financialisation, and digitalisation of public education. However, the rise of assetisation does not mean that HE is no longer a public good or subject to commodification. Instead, it adds new complex forms of value creation and governance to the sector. 

We should note that this research project was conducted before the release of ChatGPT into public use. Therefore, this report does not make reference to the turbulent discussions about generative AI and its potential usage and impacts in HE. Finally, we note that this report offers an empirical description of key themes and dynamics identified in our study. More in-depth and theorised analyses of project findings are being published in journal articles and book chapters, all of which are openly accessible. The Appendix includes a list of publications.  ...

In this section, we briefly summarise key overall findings, which are analysed in more detail in academic publications, i.e. journal articles and book chapters (see Appendix). The following findings are relevant to our case studies and might be different in other contexts. 

Takeaway #1: Big Tech and legacy software are prominent in digitalising higher education 

Big Tech infrastructure and platforms, legacy software, and EdTech incumbents dominate university digital ecosystems. It is challenging for the EdTech start-up industry to enter HE markets. Digital products and services offered by new companies represent a small proportion of digitalisation work at universities. EdTech companies primarily target individuals as customers, enterprises for staff development and training, and lower levels of education (i.e. schooling rather than HE). 

Takeaway #2: EdTech in HE is less advanced than imagined 

There is a discrepancy between the promises of the EdTech industry regarding the quality and impact of digital products and services and the perception of university customers. Many university actors, as well as a few EdTech companies, argued that the current quality of EdTech products is generally low compared to other sectors. 

Takeaway #3: Making user data valuable is difficult 

Collecting, cleaning, sorting, processing, and analysing digital user data demands significant human, technological, and financial resources. It is difficult to make user data analysis useful and valuable, such that universities are willing to pay higher fees for data-driven products. Most EdTech companies that we analysed struggle with monetising user data. There is also less user data analysis currently in the sector than imagined by the EdTech industry in its public discourse. The omnipresent belief in the value of user data among all actors is at odds with the realities of data practices, which are mostly simple or non-existent. Most university users are sceptical about learning analytics. 

Takeaway #4: User data analytics in HE are not well-developed 

EdTech companies attempt to make their digital products valuable by incorporating user data analytics into their core products. However, currently, these analytics are simple and remain at the level of basic descriptive feedback loops for the user. Nevertheless, there is a clear trend in which EdTech companies are continuing their attempts to construct new metrics, scores, and analytics to monetise data, with efforts to convince customers of the value of these analytics.  

Takeaway #5: Datafication in HE happens at universities 

Universities are in the driving seat of their institutional datafication. Universities are establishing data warehouses, and many aim to collect all user data produced by external digital platforms in order to organise and analyse it for pedagogical and business purposes. However, universities currently lack the capacity to analyse, interpret and act on data. Universities need to establish frameworks for action based on data and acquire the requisite personnel and skills to do so. Universities should ensure that data outputs (e.g. analytics, metrics, scores) are truly representative of what is measured and build confidence in their communities regarding data-driven decision-making. 

Takeaway #6: Digitalisation and datafication create work and costs for universities 

Digitalisation and EdTech promise to bring efficiency and cost savings for universities, but in reality, university actors feel that digitalisation and data operations create more work and higher costs. In addition, new staff profiles and skills are needed, including data scientists, vendor managers, cloud engineers, as well as more learning technologists. 

Takeaway #7: Good EdTech does not challenge core university values and practices 

University actors find technology useful in general and are interested in technological innovation in relation to their work. However, there are two instances where university actors are sceptical towards EdTech. First, when companies' business models are exploitative and extractive. Second, when digital products interfere with the university's core values and practices, such as by challenging professional judgement or academic freedom. Intentions to automate the teaching process or provide behavioural nudges are often received with scepticism. Most university actors feel that user data collection should be limited, and data outputs, including analytics, should be restricted and carefully evaluated. 

Takeaway #8: The aims of EdTech require greater clarity 

The key aims of EdTech are understood to be personalisation, automation, enhanced student engagement, and greater institutional efficiency. However, there are discrepancies between university, EdTech, and investor actors in terms of how they understand these objectives and, consequently, how they will be achieved. Each of these aims needs clarification, including recognising the plurality of dimensions to each objective. 

Takeaway #9: Future imaginaries of tech companies and universities 

The future imaginaries of HE and EdTech are constructed by the EdTech industry and policy actors. There are discrepancies between investors, EdTech companies, and universities in relation to what EdTech should do and how it should shape the future of HE. Universities should drive these discussions and determine their futures and the role of technology in creating these futures. 

Takeaway #10: Democratic data governance 

Universities should do more to inform students and staff about the digital products and services they routinely use. Universities should also continuously provide transparent information to students and staff about user data collected from them and what is being done with this data within their universities and externally. Students and staff should have the choice to participate or not in user data collection and processing. Students and staff should be included in the governance of EdTech and user data at their institutions. 

Takeaway #11: There is a plurality of assetisation processes in EdTech 

EdTech companies establish a variety of processes to control and charge for access to their assets. These include mediating content, organising and mediating teaching interventions, and digitalising and mediating credentials. Typical moats that EdTech companies build are lock-in, network effects, and integration of products into everyday individual practices.

11 June 2024

Teaching

The 'Enhancing teaching quality to support student learning and success in Australian higher education: Eight options for reform' report by Sophie Arkoudis, Chi Baik, Wendy Larcombe, Gwilym Croucher, Raoul Mulder and Chris Ziguras 

presents the findings from research and analysis conducted by the Melbourne Centre for the Study of Higher Education (CSHE), with input from representatives of the Department of Education (DoE), over a six-week period from August to October 2023. The aim of the commissioned research was to identify and evaluate options to enhance teaching quality in Australian higher education. The findings and policy options identified in the Australian Universities Accord (AUA) Panel’s Interim Report formed the basis for the research and analysis provided by the CSHE. In particular, the Interim Report signalled that the Panel was keen to identify ways to encourage the sector to pursue systemic excellence in learning and teaching. The report further highlighted that learning and teaching for both domestic and international students is sometimes falling short of students’ expectations. 

The focus of this project was therefore on identifying sector-level reforms to strengthen the quality of teaching. 

The research questions addressed by the project were:

1. How should best practice in learning and teaching be identified and promoted across Australia’s expanding HE system? 

2. How can we ensure the higher education teaching workforce is able to deliver for the new system, in both size and capability? 

3. How can best practice, innovation and collaboration in teaching and learning be encouraged? 

4. How can learning and teaching quality be better measured?

The CSHE conducted an extensive review of the Australian and international literature on mechanisms that aim to promote and monitor effective learning and teaching in higher education. We worked in partnership with the DoE and in consultation with the Panel to develop options to support enhanced teaching quality in a rapidly changing higher education landscape. We also consulted with leading experts in higher education teaching and learning, digital education and quality measurement. This process of working with the DoE and consulting with the Panel has culminated in the eight options discussed in this report. We emphasise that these options are presented for consideration and do not represent the recommendations of the CSHE authors to the Panel. 

While the report discusses several options, the first two have the potential to facilitate systemic change within the sector. These are the establishment of a National Centre for Higher Education Advancement (Option 1) and a Professional Standards Framework to guide higher education teaching (Option 2). These two options provide a framework for institutional uplift in relation to the peer review of teaching and professional development initiatives (Options 3 and 4). Building on evidence for effective student learning in Australian HE and sharing best practice resources are the focus of Options 5 and 6. The last options (Options 7 and 8) discuss possible new measures of teaching quality for the sector, including an option to explore the development of an Australian Teaching Excellence Framework (TEF).

The summary overview of options is

Issue 1: How to coordinate and support initiatives to enhance the quality of teaching and learning across Australia’s expanding HE system 

Option 1. Establish a national Centre for Higher Education Advancement 

Since the closure of the Office for Learning and Teaching (OLT) in 2016, Australia has not had a national body coordinating and driving initiatives to improve quality in Higher Education (HE). This puts Australia out of step with international best practice. It also means that government and sector-led initiatives to advance quality teaching and learning in HE lack national coordination, amplification and impact. 

Johnson et al.’s (2023) submission in response to the AUA Discussion Paper, and earlier research by James et al. (2015), recommend development of a new national body in Australia – that we provisionally call the National Centre for Higher Education Advancement (NCHEA) – to address emerging challenges to quality in higher education and to build on current sector-wide strengths and opportunities. 

The existence of such a body – representing the diversity of HE teaching staff – is a key enabler for implementation of a number of the Options proposed in this work package. The NCHEA could be funded in part by institutional subscriptions/contributions. 

Issue 2: Addressing the job security, career advancement and professional esteem issues that inhibit development of teaching excellence and innovation in Australian HE. 

Option 2. Adopt a national Professional Standards Framework to guide HE teaching staff 

Currently, Australia does not have a national statement of the expected teaching-related knowledge, skills, experience and values of HE teaching staff at progressive levels of expertise and responsibility. This contributes to the under-valuing of teaching knowledge and skills and undermines the status of teaching-focussed roles in HE. International experience indicates that a voluntary Professional Standards Framework (PSF) for HE teaching benefits individual staff by enabling them to demonstrate expected teaching-specific expertise and plan professional development; it also enables HEIs to signal the value they accord to quality teaching and learning. A working group would need to be commissioned to consult and advise on implementation options for development and monitoring of a PSF in Australia. The NCHEA (proposed in Option 1) would be an ideal mechanism to foster engagement with the PSF and monitor its impacts. 

Issue 3: Maintaining minimum standards in teaching and learning in an expanding HE system 

Option 3. Initiatives to increase the quality and uptake of Peer Review of Teaching. 

In Australian HE, student evaluations of teaching (SETs) are the current prevailing measure of teaching quality. This is despite a wealth of evidence demonstrating that SETs are not an appropriate measure of either teaching effectiveness (student learning gains), or teaching competency (teacher knowledge and skills) (see, e.g. Boring, Ottoboni & Stark, 2016; Carpenter & Tauber, 2020; Uttl, White & Gonzalez, 2017). 

In place of SETs, Peer Review of Teaching (PRT) should be established as the preferred measure of HE teaching effectiveness and teacher capability in Australian HE. PRT typically involves review of a teacher’s ‘teaching portfolio’ (evidence of understanding and application of effective teaching and learning principles) alongside classroom observations (to evidence effective teaching practices) and, ideally, evidence of students’ learning gains (teaching effectiveness) (see Schweig, 2019). 

In Australia, while PRT is widely practised across the compulsory education sector, its adoption in HE policy and practice is unsupported nationally, meaning that its uptake is piecemeal and reliant on institutional policies and champions. Two initiatives are explored to improve the quality and uptake of PRT in Australian HE. They would achieve synergies if delivered in tandem. 

Initiative 1. Develop and pilot a scheme for national accreditation of Higher Education Institutions’ PRT programs. 

Initiative 2. Commission a national project to synthesise and disseminate research findings on effective, efficient and ethical means of evaluating HE teaching effectiveness and teacher competency. 

Issue 4: Enhancing the professional development of HE staff in teaching 

Option 4. Initiatives to improve the teaching-related professional development of existing and future HE teaching staff 

Induction, initial training, mentoring, supervision and professional development of the teaching-related capabilities of HE staff are currently a matter for institutions – often devolved to faculties or departments and addressed at varying levels of commitment, resourcing and expertise. This means that the quality of professional development and support for teaching staff varies widely within and across institutions. 

To achieve the aims of the Accord process and deliver on the government’s ambitions for equitable, inclusive and flexible (online, hybrid) learning across an integrated HE ecosystem, the sector will need to ensure that all current and newly-appointed HE staff have access to high-quality professional development that enables them to establish and continually improve their teaching-related knowledge, skills and competencies. 

We outline five potential initiatives to address the professional development needs of the HE workforce. 

• Initiative 1. Mandate minimum teaching qualifications for HE teaching staff, with an initial focus on newly-appointed academic staff (taking up ongoing roles). 

• Initiative 2. Establish a dedicated program to support PhD ‘teaching fellowship’ positions that offer doctoral candidates training, experience and certification in university teaching alongside their research training. 

• Initiative 3. Create a mechanism for certification (quality assurance) of institutional and sector-based professional development programs for HE teaching. 

• Initiative 4. Investigate creation of a portable professional development entitlement for sessional staff. 

• Initiative 5. Require all HEIs to report to TEQSA on the implementation, uptake and effectiveness of their strategies and programs designed to ensure that all teaching staff have access to relevant, high-quality teaching-related professional development opportunities. 

Issue 5: Facilitating dissemination and take-up of best practice in HE teaching and learning 

Option 5. Enable identification and uptake of ‘what works’ to improve student learning in Australian HE. 

Available research into best practice teaching and learning approaches in Australian HE needs to be updated to take account of the rapid changes currently taking place in HE, including advances in educational technology and generative artificial intelligence, wider participation of students from all walks of life, and changing patterns of student engagement. That new research also needs to be translated into policy and practice via accessible implementation guides and tools that enable strategies to be readily adapted for different institutional contexts and missions. We propose two initiatives that have an uptake strategy hard-wired into the project design to ensure that research findings on evidence-based best practice are actually translated into practice and benefits for students. 

Initiative 1. Commission a repository of ‘what works’ evidence for effective student learning in Australian HE, curated by a panel of experts and embedded within teaching networks and communities of practice. A model for the proposed repository and network is the Best Practices Repository initiative of the US-based Healthy Minds Network (https://healthymindsnetwork.org/best-practices-repository/). 

Initiative 2. Pilot a ‘Student Success Project’ that: a) analyses available data to identify institutions with better- and poorer-than-expected outcomes for equity-bearing students, b) appoints a Panel of Experts (POE) to explore the factors driving student success in the high-performing institutions, and c) enables the POE to mentor leaders and staff from ‘under-performing’ institutions to take up the lessons from more successful HEIs. Participation in the mentoring program could be monitored by TEQSA, given the HESF (Threshold Standards) requirement that HEIs’ learning and teaching programs ‘create equivalent opportunities for academic success regardless of students’ backgrounds’ (HESF, 2021, 2.2.1). This initiative is based on the work of the US Foundation for Student Success (FSS) Project. 
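The data-analysis step in a) amounts to flagging residuals from an expected-outcomes model. A minimal sketch, assuming a simple one-variable regression and entirely hypothetical institutional data (the FSS project's actual methodology is not reproduced here):

```python
# Illustrative sketch only: flag institutions whose completion rates for
# equity-bearing students sit well above or below the level "expected" from
# a one-variable least-squares fit. All figures are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def flag_outcomes(institutions, threshold=3.0):
    """Return {name: residual} for institutions whose observed completion rate
    departs from the regression-expected rate by more than `threshold` points."""
    xs = [i["equity_share"] for i in institutions]
    ys = [i["completion_rate"] for i in institutions]
    a, b = fit_line(xs, ys)
    return {
        i["name"]: round(i["completion_rate"] - (a + b * i["equity_share"]), 1)
        for i in institutions
        if abs(i["completion_rate"] - (a + b * i["equity_share"])) > threshold
    }

if __name__ == "__main__":
    data = [  # hypothetical institutions: % equity-bearing students, % completing
        {"name": "Uni A", "equity_share": 10, "completion_rate": 82},
        {"name": "Uni B", "equity_share": 20, "completion_rate": 78},
        {"name": "Uni C", "equity_share": 30, "completion_rate": 80},  # above expected
        {"name": "Uni D", "equity_share": 40, "completion_rate": 66},  # below expected
        {"name": "Uni E", "equity_share": 50, "completion_rate": 70},
    ]
    print(flag_outcomes(data))
```

A real analysis would of course adjust for many more cohort characteristics than a single equity-share variable; the sketch only shows the "observed minus expected" logic.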

Option 6. Share best practice educational resources through discipline-based learning and teaching repositories, housed in Centres of Excellence for learning and teaching. 

We currently lack the infrastructure, protocols, conventions and rewards that are needed to facilitate and encourage sharing and reuse of educational content materials in HE. This results in sector-wide inefficiencies and inconsistency in the quality of students’ educational experiences. 

Internationally, sharing of educational resources through digital repositories has become a widespread practice over the past decade, aimed at advancing student learning and promoting global access to higher education. Missing from that landscape of open access resources are quality-assured, research-informed and student-centred learning materials designed in and for Australian HE institutions, aligned with AQF standards and course-specific intended learning outcomes, and reflecting Australian social, geographic, environmental and economic contexts. 

To meet that need, we endorse Austin’s (2023) proposal to establish collaborative, discipline-specific Centres of Excellence (COEs) for creating and sharing educational resources through purpose-built digital repositories (p. 4). Each COE would have a home institution that hosts the learning repository and acts as a ‘hub’ for cross-institutional collaboration. 

In addition, the NCHEA would be tasked with co-ordinating and supporting the COEs and distilling lessons from the early trial phase of the project to inform subsequent roll-out of further COEs. 

Issue 6: Improving metrics and data which measure learning and teaching quality 

Option 7. Consider an Australian Higher Education Teaching Quality Framework. 

Australia does not currently have a national measure of learning and teaching quality in Higher Education (HE), notwithstanding the fact that Higher Education Institutions (HEIs) are required to report a wealth of data about students to the Department of Education. Is it possible to develop a comprehensive measure of learning and teaching quality in Australian HE using available data? International experience suggests any attempt to develop a national indicator of learning and teaching quality in HEIs needs to carefully consider the intended policy aims, availability of appropriate indicators and the potential for unforeseen consequences. 

With that caution in mind, we suggest that an Australian Framework’s aim could be to make transparent to government, students and the public who contribute to the funding of the HE system how and whether those funds are effectively expended in the advancement of student learning and attainment for students from all walks of life. With that purpose in mind, a Learning and Teaching Quality Framework could draw on data about institutional decision-making that reveal the value HEIs place on student learning, and whether HEIs’ learning and teaching programs ‘create equivalent opportunities for academic success regardless of students’ backgrounds’ (HESF, 2021, 2.2.1). 

We outline seven dimensions of such a framework: 

1. Institutional investment in learning and teaching programs 
2. Diversity of the student cohort 
3. Student academic attainment and attainment gaps for equity-bearing students 
4. Employment outcomes, fee costs and education value gaps for equity-bearing students [optional] 
5. Institutional expenditure on staffing of teaching mission 
6. Teaching staff skills, experience and diversity 
7. Teaching staff professional development 

This Framework would impose a minimal additional administrative burden on HEIs, beyond the routine data collection and reporting they currently do. 

Option 8. Consider new metrics for measuring learning and teaching quality in Australian HE 

Are there new measures that could usefully be implemented at a national level to inform and drive quality improvement in HE learning and teaching? This paper considers options for new metrics within and beyond the current Student Experience Survey (SES). 

New indicators within the SES: 

Education research identifies various student-side factors that influence learning and are modifiable by institutions (see, e.g. Yorke, 2016; Zimmerman & Kitsantas, 2007; Pintrich, 2004; Pintrich et al., 1993; Kuh, 2009). Among those, the three that we would identify for potential inclusion in the SES are: 

• Commitment to learning (learning behaviours self-assessment) – e.g. How often did you skip classes this semester? 
• Confidence as a learner (academic self-efficacy) – e.g. rates of agreement with statements such as: I believe I am a capable student. 
• Learning and teaching climate (perceived climate) – e.g. rates of agreement with statements such as: My institution … cares about students and their learning. 

New indicators beyond the SES: 

Initiative 1. A Survey of HE Teaching Staff. 

Other industries’ efforts to drive quality improvement at a system level commonly include staff surveys – e.g. Your Voice in Health – WA Health (https://www.health.wa.gov.au/Reports-and-publications/Your-Voice-in-Health-survey). The fact that the voice of teaching staff is currently absent from measures of educational quality in Australian HE is a sign of the endemic under-valuing of the knowledge, skills and expertise of teaching staff. To address that gap, we propose development of a national survey of HE teaching staff (sessional and continuing) asking them to reflect on factors impacting teaching and learning in their unit/course – including the quality of: 

• Learning environments, curriculum and teaching resources; 
• Teacher induction, skills development and mentoring programs and opportunities; 
• The support they receive from colleagues and supervisors; 
• Students’ preparedness and engagement, and their academic and wellbeing needs; and 
• The climate for learning and teaching at their institution – including the extent to which teaching staff feel valued, recognised and rewarded. 

Such a survey would assist the sector to identify the extent to which teaching staff feel equipped, supported, rewarded, trusted, and able to work flexibly alongside experienced colleagues. That is, it would identify opportunities to improve the working conditions of staff, which inform the learning conditions of students. 

Initiative 2. Expert peer evaluations of educational quality. 

A second initiative to improve program and teaching quality is to make expert, external evaluations of learning programs and institutional learning strategies (expert benchmarking) more widely available. While external benchmarking of student attainment and course quality is often practised within disciplines to assure and enhance quality, it is possible to conduct elements of an external quality review at the institutional level with a ‘lighter touch’, as is current practice in Scotland (see the Productivity Commission Inquiry Report, 2023, p. 110). External peer review of institutional teaching policies and programming would need to be undertaken by appropriately qualified, skilled and knowledgeable HE educators. Such a group could be recruited, trained and certified by the new National Centre for HE Advancement (Option 1).

13 November 2023

Teaching

'Beyond emergency remote teaching: did the pandemic lead to lasting change in university courses?' by Broadbent, Ajjawi, Bearman, Boud and Dawson in (2023) 20(58) International Journal of Educational Technology in Higher Education comments 

The COVID-19 pandemic significantly disrupted traditional methods of teaching and learning within higher education. But what remained when the pandemic passed? While the majority of the literature explores the shifts during the pandemic, with much speculation about post-pandemic futures, a clear understanding of lasting implications remains elusive. To illuminate this knowledge gap, our study contrasts pedagogical practices in matched courses from the pre-pandemic year (2019) to the post-pandemic phase (2022/2023). We also investigate the factors influencing these changes and the perceptions of academics on these shifts. Data were gathered from academics in a large comprehensive Australian university of varying disciplines through a mixed-methods approach, collecting 67 survey responses and conducting 21 interviews. Findings indicate a notable increase in online learning activities, authentic and scaffolded assessments, and online unsupervised exams post-pandemic. These changes were primarily driven by university-guided adaptations, time and workload pressures, continued COVID-19 challenges, local leadership, an individual desire to innovate, and concerns about academic integrity. While most changes were seen as favourable by academics, perceptions were less positive concerning online examinations. These findings illuminate the enduring effects of the pandemic on higher education, suggesting longer-term implications than previous studies conducted during the acute phase of the pandemic. 

In 2020 and 2021, higher education institutions globally had to modify curricula and pedagogy due to the COVID-19 pandemic (UNESCO, n.d.). This rapid shift became commonly known as ‘emergency remote teaching’ (Hodges et al., 2020). Emergency remote teaching (ERT) involves unplanned, quick adaptation, often using existing technology and resources, and with emphasis on preserving instruction rather than enhancing learning quality (Watermeyer et al., 2021). This type of teaching is distinct from online learning, which is a thought-out approach designed for online delivery and is considerate of learners’ needs and preferences (Hodges et al., 2020). During this emergency phase, face-to-face classes were stopped or transferred online to lessen COVID-19 risks (Crawford et al., 2020; Johnson et al., 2020). Class assessments moved online, activities requiring specific locations or equipment were disrupted, and students had to work more independently, regardless of their self-regulation skills (Bartolic et al., 2022a; Slade et al., 2022). Many academics felt ill-prepared for the changes that transpired (Sum & Oancea, 2022) and reported concerns that teaching quality suffered during this time (Weidlich & Kalz, 2021). 

COVID changed teaching and learning practices in a profound manner. For example, a consortium comprising nine institutions from around the world investigated changes to teaching and learning during the early stages of the pandemic, collecting data from 4243 students, 281 instructors, 15 senior administrators, and 43 instructional designers (see Bartolic et al., 2022a, 2022b; Guppy et al., 2022a, 2022b). This body of work showed challenges faced in ERT (Guppy et al., 2022a), including the modifications in assessment approaches corresponding to the digital shift (Bartolic et al., 2022a) and student vulnerabilities and confidence in an online learning environment (Bartolic et al., 2022b). However, are these findings a question of a momentary disruption and a return to the previous status quo? Or does the pandemic represent the kind of external shock that fundamentally changes the landscape? Funding bodies report substantial challenges for teaching and learning innovations to have long lasting impacts (Kottmann et al., 2020). What is interesting about the pandemic is that it forced change across all levels of the university all at once, and this may prove to be a useful lesson for understanding how educational change itself can unfold in different circumstances. Therefore, it is important to ask what, if anything, has been retained and why. 

In a systematic review from 2023, Imran et al. analysed 68 studies on blended and online teaching modes, aiming to identify emerging themes in learning modes from the post-pandemic era. Notably, of the studies they examined, only a handful were conducted in the recent post-pandemic years of 2022 and 2023. Among the few that were, none compared pre-pandemic conditions to the post-pandemic environment nor centred their analysis on data highlighting shifts in teaching practices from the viewpoint of educators. Instead, a significant portion of these studies offered mere speculations about the future in the aftermath of the pandemic. This led the authors to emphasise a noteworthy gap in the literature, concluding that “future research should focus on the long-term effects of COVID-19 on teaching modes and the resulting changes in curriculum development” (p. 8). Echoing this sentiment, Kerres and Buchner (2022) noted that much of the current research predominantly centres around the pre- and mid-pandemic phases, with scant attention paid to post-pandemic impacts. They argue that despite the plethora of available research, “it is still difficult to grasp a clear picture of the effects of the pandemic on education in the various sectors of education worldwide” (p. 6). This ambiguity primarily stems from the dearth of data concerning educational transformations in the post-pandemic world. 

The current research intends to delve deeper into the post-pandemic aftermath than previous studies. We explore which elements from the pre-pandemic era were retained and which adjustments made during the pandemic persisted, if any. Further, using ecological systems theory as a framework, we explore which factors at the individual, faculty/discipline, university and extra-university levels contributed to retaining changes. This will enhance understanding of how educational change occurs and may allow universities, faculties, and academics to tackle the challenging problem of sustaining change to teaching and learning practices.

25 October 2023

EdTech

'When public policy ‘fails’ and venture capital ‘saves’ education: Edtech investors as economic and political actors' Janja Komljenovic, Ben Williams, Rebecca Eynon and Huw C Davies in (2023) Globalization, Societies and Education comments 

Educational technology (Edtech) investors have become increasingly influential in education; however, they remain under-researched. We address this deficit and introduce the grammar and landscape of Edtech investment into education research. We empirically examine venture capital Edtech investors and argue that they are economic and political actors. Investors construct the Edtech industry through their investment and advancing particular imaginaries. They legitimate their authority in education through narratives of expertise and measures of social impact. They consolidate the Edtech industry by constructing social networks to perform the political work of futuring. The analysis provides original insights into the power of Edtech investors in education and proposes a research agenda examining new relations between the education, technology, and finance industries. 

Educational technology (Edtech) increasingly structures teaching and learning processes, determines how education is governed, and reframes educational purposes and aims (Decuypere, Grimaldi, and Landri  2021). Since Edtech is so impactful for education, it matters what kind of Edtech is incubated, innovated, and rolled out into the sector. The nature of Edtech is determined by socio-techno-financial processes resulting from power struggles between various actors (Komljenovic  2021). We argue it is investors who increasingly influence the nature of Edtech. They can realise future visions by structuring the direction of entire industries through their funding priorities (Cooiman  2022). However, they do more than only invest financial resources; they conduct studies, issue reports, educate entrepreneurs and other actors, organise networking, work with policymakers, and more (Williamson and Komljenovic  2023). Hence, investment and consequent actions are as much political decisions about the future as they are financial decisions about funding startup companies. What can and cannot exist is determined by an investment decision (Feher  2018), and investors seek to materialise particular visions of futures through very laborious actions that accompany the investment itself (Muniesa et al.  2017). 

Historically, investors were hesitant to invest in the education sector due to low returns, long investment cycles, fragmented markets, heavy regulation, and public hesitancy towards privatisation. This has changed with the emergence and growth of Edtech, akin to other sectors in the digital economy, accelerated by the pandemic (Teräs et al.  2020). Education via Edtech is seen to have an enormous opportunity for growth among investors as one of the last sectors that have not yet been digitalised. In other words, Edtech made education investable. 

The Edtech industry is relatively young. While we can trace the use of the first computers for academic research back to the mid-1940s and their first use in university and school classrooms to the 1960s (Molnar 1997), the Edtech industry as we know it today developed in the early 2010s. Since 2010, the number of newly established Edtech companies has sharply increased (Komljenovic, Sellar, and Birch 2021). Venture capital (VC) investment in Edtech rose from $500 million in 2010 to more than $20 billion in 2021 (figures listed by HolonIQ as of 24 November 2022: https://www.holoniq.com/notes/global-Edtech-venture-capital-report-full-year-2021). The COVID-19 pandemic further accelerated investment in Edtech and its use in education (Williamson and Hogan 2020). The Edtech industry is now consolidating, as indicated by the rising value of individual investments into particular companies and an increasing number of acquisitions (Brighteye Ventures 2022), indicating the emergence of ‘Big Edtech’ (Williamson 2022). The number of Edtech ‘unicorns’, companies valued at more than $1 billion, increased from 0 in 2014 to 62 in 2021 (Brighteye Ventures 2022). An important reason that the Edtech industry has grown and consolidated is capital investment. 

Surprisingly, Edtech investors, particularly VC investors, remain under-researched in education research. In this article, we ask who Edtech investors are, how they operate, and what are the consequences. We argue that Edtech investors became economic and political actors in the education ensemble of multisector influences on policy and practice (Robertson and Dale  2015) who need to be brought into research focus. We address the research gap by discussing Edtech investors’ operations, exploring the political and economic actions of two Edtech VC investors through an original empirical study, and proposing a research programme to investigate these key actors further. 

We proceed as follows. First, we provide a brief overview of the practices of Edtech investors to illuminate the investment landscape and its grammar. We then explain our approach to the empirical study. We proceed by discussing three forms of VC investors’ economic and political labouring of making the Edtech industry, legitimating their role, and consolidating the industry. We conclude by reflecting on the implications for education.

08 February 2023

TechnoFixes and EdTech

'The Technological Fix as Social Cure-All: Origins and Implications' by Sean F Johnston in (2018) 37(1) IEEE Technology and Society Magazine 47-54 comments 

In 1966, a well-connected engineer posed a provocative question: will technology solve all our social problems? He seemed to imply that it would, and soon. Even more contentiously, he hinted that engineers could eventually supplant social scientists - and perhaps even policy-makers, lawmakers, and religious leaders - as the best trouble-shooters and problem-solvers for society [1]. The engineer was the Director of Tennessee's Oak Ridge National Laboratory, Dr. Alvin Weinberg. As an active networker, essayist, and contributor to government committees on science and technology, he reached wide audiences over the following four decades. Weinberg did not invent the idea of technology as a cure-all, but he gave it a memorable name: the “technological fix.” This article unwraps his package, identifies the origins of its claims and assumptions, and explores the implications for present-day technologists and society. I will argue that, despite its radical tone, Weinberg's message echoed and clarified the views of predecessors and contemporaries, and the expectations of growing audiences. His proselytizing embedded the idea in modern culture as an enduring and seldom-questioned article of faith: technological innovation could confidently resolve any social issue. ... 


Weinberg’s rhetorical question was a call-to-arms for engineers, technologists, and designers, particularly those who saw themselves as having a responsibility to improve society and human welfare. It was also aimed at institutions, offering goals and methods for government think-tanks and motivating corporate mission-statements (e.g., [3]). 

The notion of the technological fix also proved to be a good fit for consumer culture. Our attraction to technological solutions to improve daily life is a key feature of contemporary lifestyles. This allure carries with it a constellation of other beliefs and values, such as confidence in reliable innovation and progress, trust in the impact and effectiveness of new technologies, and reliance on technical experts as general problem-solvers.  

This faith can nevertheless be myopic. It may, for example, discourage adequate assessment of side-effects — both technical and social — and close examination of political and ethical implications of engineering solutions. Societal confidence in technological problem-solving consequently deserves critical and balanced attention. 

Adoption of technological approaches to solve social, political and cultural problems has been a longstanding human strategy, but is a particular feature of modern culture. The context of rapid innovation has generated widespread appreciation of the potential of technologies to improve modern life and society. The resonances in modern culture can be discerned in the ways that popular media depicted the future, and in how contemporary problems have increasingly been framed and addressed in narrow technological terms. 

While the notion of the technological fix is straightforward to explain, tracing its circulation in culture is more difficult. One way to track the currency of a concept is via phrase-usage statistics. The invention and popularity of new terms can reveal new topics and discourse. The Google N-Gram Viewer is a useful tool that analyzes a large range of published texts to determine frequency of usage over time for several languages and dialects [4], [5]. 

In American English, the phrase technological fix emerges during the 1960s and proves more enduring and popular than the less precise term technical fix. 

We can track this across languages. In German, the term technological fix has had limited usage as an untranslated English import, and is much less common than the generic phrase technische Lösung (“technical solution”), which gained ground from the 1840s. In French, too, there is no direct equivalent, but the phrase solution technique broadly parallels German and English usage over a similar time period. And in British English, the terms technological fix and technical fix appear at about the same time as American usage, but grow more slowly in popularity. Usage thus hints that there are distinct cultural contexts and meanings for these seemingly similar terms. Its varying currency suggests that the term technological fix became a cultural export popularized by Alvin Weinberg’s writings on the topic, but related to earlier discourse about technology-inspired solutions to human problems. 
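The phrase-usage comparisons described above rest on a simple normalisation: a phrase's raw yearly count divided by the size of the corpus for that year. A minimal sketch with hypothetical counts (the Ngram Viewer's actual corpora and smoothing are not reproduced here):

```python
# Sketch of the normalisation behind phrase-usage statistics: raw yearly
# counts of a phrase expressed per million words of the year's corpus.
# All counts below are hypothetical, for illustration only.

def per_million(phrase_counts, total_words):
    """Map year -> frequency of the phrase per million words published."""
    return {
        year: round(1_000_000 * count / total_words[year], 3)
        for year, count in phrase_counts.items()
    }

if __name__ == "__main__":
    totals = {1960: 2_000_000_000, 1980: 3_000_000_000}  # hypothetical corpus sizes
    tech_fix = {1960: 40, 1980: 2_400}                   # hypothetical raw counts
    print(per_million(tech_fix, totals))
```

Normalising in this way is what makes counts comparable across years and languages, since the volume of published text itself grows over time.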

Such data suggest rising precision in writing about technology as a generic solution-provider, particularly after the Second World War. But while the modern popularization and consolidation of the more specific notion of the “technological fix” can be traced substantially to the writings of Alvin Weinberg, the idea was promoted earlier in more radical form.

In 'Automating Learning Situations in EdTech: Techno-Commercial Logic of Assetisation' by Morten Hansen and Janja Komljenovic in (2023) 5 Postdigital Science and Education 100–116 the authors comment 

 Critical scholarship has already shown how automation processes may be problematic, for example, by reproducing social inequalities instead of removing them or requiring intense labour from education institutions’ staff instead of easing the workload. Despite these critiques, automated interventions in education are expanding fast and often with limited scrutiny of the technological and commercial specificities of such processes. We build on existing debates by asking: does automation of learning situations contribute to assetisation processes in EdTech, and if so, how? Drawing on document analysis and interviews with EdTech companies’ employees, we argue that automated interventions make assetisation possible. We trace their techno-commercial logic by analysing how learning situations are made tangible by constructing digital objects, and how they are automated through specific computational interventions. We identify three assetisation processes: First, the alienation of digital objects from students and staff deepens the companies’ control of digital services offering automated learning interventions. Second, engagement fetishism—i.e., treating engagement as both the goal and means of automated learning situations—valorises particular forms of automation. And finally, techno-deterministic beliefs drive investment and policy into identified forms of automation, making higher education and EdTech constituents act ‘as if’ the automation of learning is feasible. 

 Education technology (EdTech) companies are breathing new life into an old idea: education progress through automation (Watters 2021). EdTech companies are interested in portraying these processes as complex and bringing significant value to the learner and her educational institution, even when actual practices do not always reflect such imaginaries (Selwyn 2022). For example, EdTech companies may claim that artificial intelligence (AI) is a key part of their product, when in fact, actual computations are much simpler. It is therefore vital to disentangle EdTech companies’ imagined and actual automation practices. 

We propose the concept of ‘automated learning situations’ to disentangle automation imaginaries from actual practice. ‘Learning situations’ are the relationships between students, teachers, and learning artefacts in educational contexts. ‘Automated’ learning situations refer to automated interventions in one or more of these relationships. In practice, EdTech companies automate learning situations by capturing student actions on digital platforms, such as clicks, which they then use for computational intervention. For example, an EdTech platform may programmatically capture how a student engages with digital texts before computing various engagement scores or ‘nudges’ in order to affect her future behaviour. 
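The capture-and-compute loop described above can be sketched in a few lines. The event types, weights and cutoff below are invented for illustration and do not describe any actual product:

```python
# Hedged illustration of an automated learning intervention: platform click
# events are aggregated into a crude "engagement score", and students below
# a cutoff are queued for an automated nudge. Schema and weights are assumed.

EVENT_WEIGHTS = {"page_view": 1, "highlight": 3, "note": 5}  # hypothetical weighting

def engagement_score(events):
    """Sum weighted events; unknown event types contribute nothing."""
    return sum(EVENT_WEIGHTS.get(e, 0) for e in events)

def nudge_list(students, cutoff=10):
    """Return names of students whose score falls below the cutoff."""
    return [name for name, events in students.items()
            if engagement_score(events) < cutoff]

if __name__ == "__main__":
    cohort = {
        "alice": ["page_view"] * 4 + ["highlight", "note"],  # score 12
        "bob":   ["page_view"] * 3,                          # score 3
    }
    print(nudge_list(cohort))
```

The point of the sketch is how thin such "engagement" computations can be: a weighted click count stands in for learning, which is precisely the gap between imaginary and practice that the article interrogates.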

It is useful to conceptualise such automation as techno-material relations mapped along two dimensions: digital objects and computing approaches. While current literature on EdTech platforms has already uncovered how platformisation reconfigures pedagogical autonomy, educational governance, infrastructural control, multisided markets, and much more (e.g. Kerssens and Van Dijck 2022; Napier and Orrick 2022; Nichols and Garcia 2022; Williamson et al. 2022), the two dimensions bring more conceptual clarity to the technological possibilities and limitations of actually existing automation practices. Furthermore, they allow us to unpack techno-commercial relationships between emergent automation and assetisation processes. 

EdTech is embedded in the broader digital economy, which is increasingly rentier (Christophers 2020). This means that there is a move from creating value via production and selling commodities in the market, to extracting value through the control of access to assets (Mazzucato 2019). Assetisation is the process of turning things into assets (Muniesa et al. 2017). Depending on the situation, different things and processes can be assetised in different ways (Birch and Muniesa 2020). This includes taking products and services previously treated as commodities—something that can be owned through purchase and consequently fully controlled—and transforming them into something that can only be accessed through payment without change in ownership (Christophers 2020). A useful example is accessing textbooks in digital form by paying a subscription to a provider such as Pearson+, instead of purchasing and owning physical copies. Assetising the medium of delivery changes the implications for the user. For example, when customers buy a book, they own the material object but not the intellectual property (IP) rights. With the ownership of the book itself, i.e., the physical object, comes a measure of control: they can read the textbook as many times and whenever they want, write in the book, highlight passages, sell it to someone else, use it for some other purpose entirely, or even destroy it. By contrast, paying a fee for accessing the electronic book via a platform transforms how users can engage with the content because the platform owner holds the control and follow-through rights (cf. Birch 2018): they decide when books are added and removed, what users can do with the book and for how long, and—crucially—what happens to associated user data. Generating revenue from a thing while maintaining ownership, control, and follow-through rights is an indication that this thing has been turned into an asset for its owner. 
We, therefore, ask: does the automation of learning situations contribute to assetisation processes in EdTech, and if so, how? 

In what follows, we first present our conceptual and methodological approach. We then unpack the digital objects used to construct learning situations. Next, we discuss how interventions are automated differently depending on computing temporalities and complexities. We conclude by discussing three assetisation processes identified in the automation of learning situations: the alienation of digital objects from students and staff, the fetishisation of engagement, and techno-deterministic beliefs leading to acting ‘as if’ automation is feasible.
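
The ‘automated learning situation’ the authors describe — capturing click events, aggregating them into an engagement score, and triggering a ‘nudge’ — can be illustrated with a minimal, entirely hypothetical sketch. Nothing here is drawn from any actual EdTech platform; the event fields, weightings, and threshold are all illustrative assumptions (real platforms use proprietary and far more opaque formulas):

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    """A captured student interaction (hypothetical schema)."""
    student_id: str
    seconds_on_page: float
    pages_viewed: int

def engagement_score(events):
    """Aggregate click events into a 0..1 engagement score.

    Toy weighting: 70% time-on-page (capped at 10 minutes),
    30% pages viewed (capped at 10 pages).
    """
    total_time = sum(e.seconds_on_page for e in events)
    total_pages = sum(e.pages_viewed for e in events)
    return 0.7 * min(total_time / 600, 1.0) + 0.3 * min(total_pages / 10, 1.0)

def nudge(score, threshold=0.5):
    """Automated intervention: message students below the threshold."""
    if score < threshold:
        return "Keep going — revisit this week's reading!"
    return None
```

Even this toy version exhibits the dynamic the paper analyses: the score is both the measure of engagement and the trigger for intervention aimed at producing more measurable engagement.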

11 January 2023

GPT

'GPT Takes the Bar Exam' by Michael James Bommarito and Daniel Martin Katz comments

 Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as “the Bar Exam,” as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant complete at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score below the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in “AI?” 

In this research, we document our experimental evaluation of the performance of OpenAI’s text-davinci-003 model, often referred to as GPT-3.5, on the multistate multiple-choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5’s zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5’s zero-shot performance. For the best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5’s ranking of responses is also highly correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. 

While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future. 

22 December 2020

Learning Platforms and Datafication

'Automation, APIs and the distributed labour of platform pedagogies in Google Classroom' by Carlo Perrotta, Kalervo N. Gulson, Ben Williamson and Kevin Witzenberger in (2020) Critical Studies in Education comments 

Digital platforms have become central to interaction and participation in contemporary societies. New forms of ‘platformized education’ are rapidly proliferating across education systems, bringing logics of datafication, automation, surveillance, and interoperability into digitally mediated pedagogies. This article presents a conceptual framework and an original analysis of Google Classroom as an infrastructure for pedagogy. Its aim is to establish how Google configures new forms of pedagogic participation according to platform logics, concentrating on the cross-platform interoperability made possible by application programming interfaces (APIs). The analysis focuses on three components of the Google Classroom infrastructure and its configuration of pedagogic dynamics: Google as platform proprietor, setting the ‘rules’ of participation; the API which permits third-party integrations and data interoperability, thereby introducing automation and surveillance into pedagogic practices; and the emergence of new ‘divisions of labour’, as the working practices of school system administrators, teachers and guardians are shaped by the integrated infrastructure, while automated AI processes undertake the ‘reverse pedagogy’ of learning insights from the extraction of digital data. The article concludes with critical legal and practical ramifications of platform operators such as Google participating in education.

'The datafication of teaching in Higher Education: critical issues and perspectives' by Ben Williamson, Sian Bayne and Suellen Shay in (2020) 25(4) Teaching in Higher Education 351-365 comments 

Contemporary culture is increasingly defined by data, indicators and metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometric devices. In a ‘society of rankings’, simplified and standardized metrics act as key reference points for making sense of the world (Esposito and Stark 2019, 15). Beyond conventional statistical practices, the availability of ‘big data’ for large-scale analysis, the rise of data science as a discipline and profession, and the development of advanced technologies and practices such as machine learning, neural networks, deep learning and artificial intelligence (AI), have established new modes of quantitative knowledge production and decision-making (Kitchin 2014; Ruppert 2018). 

Although ‘datafication’ – the rendering of social and natural worlds in machine-readable digital format – has most clearly manifested in the commercial domain, such as in online commerce (e.g. Amazon), social media (Facebook, Twitter), and online advertising (Google), it has quickly spread outwards to encompass a much wider range of services and sectors. These include, controversially, the use of facial recognition and predictive analytics in policing, algorithmic forms of welfare allocation, automated medical diagnosis, and – the subject of this special issue – the datafication of education. 

Education is a particularly important site for the study of data and its consequences. The scale and diversity of education systems and practices means that datafication in education takes many forms, and has potential to exert significant effects on the lives of millions. That education is widely understood as a public good, rather than a commercial enterprise (with some exceptions) also means that the extraction of data from students, teachers, schools and universities cannot be straightforwardly analyzed as another instantiation of ‘surveillance capitalism’, that is, the gathering of the ‘raw material’ of human life en masse for analysis, sale and profit (Zuboff 2019). Instead, the datafication of education needs to be understood and analyzed for its distinctive forms, practices and consequences. Enhanced data collection during mass university closures and online teaching as a result of the 2020 COVID-19 crisis makes this all the more urgent. In this brief editorial introduction to the special issue on ‘The datafication of teaching in higher education’, we situate the papers in wider debates and scholarship, and outline some key cross-cutting themes. 

Measurement matters 

There is of course a very long history to practices, processes and technologies of datafication in which current developments in big data, AI and machine learning need to be situated (Beer 2016). The eighteenth and nineteenth centuries witnessed an outpouring of statistical knowledge production, as everything from industrial manufacturing to the natural world, and from the state of the human population to the workings of the human body itself, was subjected to quantification and increasing numerical management (Bowker 2008; Ambrose 2015). The work of modern government itself came to rely on statistics, as people, goods, territories, processes and problems were all made legible as numbers, and statistical knowledge came to ‘describe the reality of the state itself’ (Foucault 2007, 274) as part of the ‘machinery of government’ (Rose 1999, 213). 

The statistical machinery of the nineteenth- and twentieth-century state is now, in the twenty-first century, shadowed by a vast complex of data infrastructures, platforms, devices, and analytics organizations from across the public, charitable and private sectors, as big data has itself become a new source of knowledge, governance and control (Bigo, Isin, and Ruppert 2019). Social media platforms, web interactions, financial transactions, public surveillance networks, online commerce, business software, mobile phone location services, wearable devices, and even connected objects in the Internet of Things have become key sources of knowledge for those authorities with access to the data they produce (Marres 2017). Governments are increasingly turning to digital services in order to generate detailed information about the populations they govern, including controversial attempts to introduce public facial recognition systems for purposes of individual identification (Crawford and Paglen 2019). Through machine learning, neural nets and deep learning, so-called AI products and platforms can now ‘learn from experience’ in order to optimize their own functioning and adapt to their own use (Mackenzie 2018). Nineteenth- and twentieth-century ‘trust in numbers’ has metamorphosed into a ‘dataist’ trust in the ‘magic’ of digital quantification, algorithmic calculation, and machine learning (Elish and boyd 2018). 

Dataism is a style of thinking that is integrally connected to processes of neoliberalization, as competitive logics and the desire to compare the performance of entities against each other, as if they were competing in markets, have been incorporated into various forms and technologies of measurement. Beer (2016, 31) argues that this period of intensive quantification is governed under a particular neoliberal system of ‘metric power’, and that ‘understanding the intensification of measurement, the circulation of those measures and then how those circulations define what is seen to be possible, represents the most pressing challenge facing social theory and social research today’. He suggests a number of key themes for understanding metric power (Beer 2016, 173–77). Data and metrics set limits on what can be known and what can be knowable. They define what is rendered visible or left invisible, thereby affecting how certain practices, objects and behaviours gain value while others are not measured or valued. Measurement involves classification, sorting, ordering, and categorizing people and things, which defines how they are known and treated. It leads to prefigured judgment, setting desired aims and outcomes so as to bring into the present the future that a measurement is designed to help achieve. Data-based processes also expand into new tasks, functions and programmes, and intensify their influence. The intensification of measurement leads to forms of authorization and endorsement of certain outcomes, people, actions, systems, and practices, thus marking out what is claimed to be truthful. It also involves increasing automation, which shapes human agency and decision-making – automated systems of computation are taken as objective, legitimate, fair, neutral and impartial, and impact on human judgement. Finally, metrics induce affective reactions, such as anxiety or competitive motivation, and thereby promote or produce actions, behaviours, and pre-emptive responses by prompting people to perform in ways that can be valued, compared and judged in measurable terms. 

The power of metrics to affect how social and natural worlds are known and compared, and therefore to shape how they are treated and changed, means that measurement matters. Data and metrics do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place. Data practices materialize the competitive neoliberal impulse to ensure efficient market functioning and constant improvement through measurement, the hierarchization of winners and losers, and the attribution of quantitative value. This can be fairly mundane, as in the case of an online retailer recommending future purchases based on past purchasing records and comparison against millions of other shoppers, where the measured market is the source of the recommendation. Media streaming services constantly capture data about consumption habits, and feed that back into recommended shows and playlists. The metrics in such cases include favoured genres, time spent listening or watching, artists or shows selected and so on. This may seem fairly banal, yet it is shaping cultural habits and individual tastes. But measurement also matters for even more consequential reasons. It has changed the ways economies function, serving hypercapitalist objectives of making data into a key source of market value (Fourcade and Healy 2017). Surveillance systems such as predictive policing and facial recognition disproportionately focus suspicion on ethnic minority groups, and reinforce longstanding structural inequalities in societies (Crawford and Paglen 2019), as judgments are made based on various forms of comparison and prediction. 

Education has long been subject to historical forms of ‘datafication’ (Lawn 2013), but the quantification, measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is intensifying and expanding rapidly. Higher Education is itself implicated in neoliberalizing forms of metric power, as various technologies of data-based measurement and evaluation impose limits on what is made visible and known, sort people and outcomes into (sometimes hierarchical) categories, establish measurable aims, expand to new tasks, establish what is claimed to be true or valuable, impose automation on decision-making, and affect the ways people feel, act and behave.

In referring to data subjects the authors argue 

 The datafication of human beings affects how they are understood, treated, and acted upon. The concept of the ‘data double’ usefully refers to how digital profiles can be created from the activities of individuals (Raley 2013). These profiles, or shadows, then become the basis for various forms of analysis and calculation, which circle back into individual experiences. To use the social media streaming example, the data double captured inside the database is used to make recommendations, which affects the consumer experience outside the database (Cheney-Lippold 2011). The individual becomes a data subject, defined and characterized algorithmically by being sorted into categories and predicted outcomes. 

The construction of data doubles in education is especially consequential, since anything that is modelled inside the database then affects the potentially life-changing experience of teaching and learning. A prediction of future progress based on past outcomes could radically affect the future prospects of the student by foreclosing curriculum opportunities. Forms of algorithmic education, in other words, deeply affect data subjects. In their paper, Harrison et al. (2020) draw attention to how datafication both affects teaching and learning and shapes subjectivities. They refer to ‘student data subjects’, which are assembled from digital traces of educational activity. Teachers, too, are increasingly known, evaluated and judged through data, and come to know themselves as datafied teacher subjects. 

This datafication of student and teacher subjects prefigures a potentially profound transformation in how students and teachers understand themselves and in how they are understood and managed as learners and professionals. As Marachi and Quill (2020) emphasize in this issue, where ‘frictionless’ data transitions are enabled between primary, secondary and tertiary education and even the employment contexts of individuals, the data subject risks becoming a lifelong ‘shadow’ with potential impact which may be far from benign. Marachi and Quill call for greater awareness, routine interrogation of data-sharing practices and critical distance between higher education institutions and ‘edtech’ platform partners promising ‘enhancement’ through data processing, the constitution of data subjects and the promises of ‘personalization’. Such changes may also demand that educators and students develop critical skills of using and evaluating data.