09 February 2023

Fiduciaries

'In Code(rs) We Trust: Software Developers as Fiduciaries in Public Blockchains' by Angela Walch in Philipp Hacker, Ioannis Lianos, Georgios Dimitropoulos and Stefan Eich (eds), Regulating Blockchain. Techno-Social and Legal Challenges (Oxford University Press, 2019) comments 

This chapter addresses the myth of decentralized governance of public blockchains, arguing that certain people who create, operate, or reshape them function much like fiduciaries of those who rely on these powerful data structures. Explicating the crucial functions that leading software developers perform, the chapter compares the role to Tamar Frankel’s conception of a fiduciary, and finds much in common, as users of these technologies place extreme trust in the leading developers to be both competent and loyal (ie, to be free of conflicts of interest). The chapter then frames the cost-benefit analysis necessary to evaluate whether, on balance, it is a good idea to treat these parties as fiduciaries, and outlines key questions needed to flesh out the fiduciary categorization. For example, which software developers are influential enough to resemble fiduciaries? Are all users of a blockchain ‘entrustors’ of the fiduciaries who operate the blockchain, or only a subset of those who rely on the blockchain? Finally, the chapter concludes with reflections on the broader implications of treating software developers as fiduciaries, given the existing accountability paradigm that largely shields software developers from liability for the code they create. 

08 February 2023

TechnoFixes and EdTech

'The Technological Fix as Social Cure-All: Origins and Implications' by Sean F Johnston in (2018) 37(1) IEEE Technology and Society Magazine 47-54 comments 

In 1966, a well-connected engineer posed a provocative question: will technology solve all our social problems? He seemed to imply that it would, and soon. Even more contentiously, he hinted that engineers could eventually supplant social scientists - and perhaps even policy-makers, lawmakers, and religious leaders - as the best trouble-shooters and problem-solvers for society [1]. The engineer was the Director of Tennessee's Oak Ridge National Laboratory, Dr. Alvin Weinberg. As an active networker, essayist, and contributor to government committees on science and technology, he reached wide audiences over the following four decades. Weinberg did not invent the idea of technology as a cure-all, but he gave it a memorable name: the “technological fix.” This article unwraps his package, identifies the origins of its claims and assumptions, and explores the implications for present-day technologists and society. I will argue that, despite its radical tone, Weinberg's message echoed and clarified the views of predecessors and contemporaries, and the expectations of growing audiences. His proselytizing embedded the idea in modern culture as an enduring and seldom-questioned article of faith: technological innovation could confidently resolve any social issue. ... 

Weinberg’s rhetorical question was a call-to-arms for engineers, technologists, and designers, particularly those who saw themselves as having a responsibility to improve society and human welfare. It was also aimed at institutions, offering goals and methods for government think-tanks and motivating corporate mission-statements (e.g., [3]). 

The notion of the technological fix also proved to be a good fit for consumer culture. Our attraction to technological solutions to improve daily life is a key feature of contemporary lifestyles. This allure carries with it a constellation of other beliefs and values, such as confidence in reliable innovation and progress, trust in the impact and effectiveness of new technologies, and reliance on technical experts as general problem-solvers.  

This faith can nevertheless be myopic. It may, for example, discourage adequate assessment of side-effects — both technical and social — and close examination of political and ethical implications of engineering solutions. Societal confidence in technological problem-solving consequently deserves critical and balanced attention. 

Adoption of technological approaches to solve social, political and cultural problems has been a longstanding human strategy, but is a particular feature of modern culture. The context of rapid innovation has generated widespread appreciation of the potential of technologies to improve modern life and society. The resonances in modern culture can be discerned in the ways that popular media depicted the future, and in how contemporary problems have increasingly been framed and addressed in narrow technological terms. 

While the notion of the technological fix is straightforward to explain, tracing its circulation in culture is more difficult. One way to track the currency of a concept is via phrase-usage statistics. The invention and popularity of new terms can reveal new topics and discourse. The Google N-Gram Viewer is a useful tool that analyzes a large range of published texts to determine frequency of usage over time for several languages and dialects [4], [5]. 

In American English, the phrase technological fix emerges during the 1960s and proves more enduring and popular than the less precise term technical fix. 

We can track this across languages. In German, the term technological fix has had limited usage as an untranslated English import, and is much less common than the generic phrase technische Lösung (“technical solution”), which gained ground from the 1840s. In French, too, there is no direct equivalent, but the phrase solution technique broadly parallels German and English usage over a similar time period. And in British English, the terms technological fix and technical fix appear at about the same time as American usage, but grow more slowly in popularity. Usage thus hints that there are distinct cultural contexts and meanings for these seemingly similar terms. Its varying currency suggests that the term technological fix became a cultural export popularized by Alvin Weinberg’s writings on the topic, but related to earlier discourse about technology-inspired solutions to human problems. 
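
The kind of phrase-frequency comparison described here can be sketched programmatically. The following minimal Python example assumes the unofficial JSON endpoint behind the Google Ngram Viewer (books.google.com/ngrams/json); its parameters, corpus label, and response fields are undocumented assumptions rather than a stable API, so the sketch is illustrative only.

import requests

def ngram_frequencies(phrases, year_start=1900, year_end=2019, corpus="en-2019"):
    """Fetch yearly relative frequencies for each phrase (assumed endpoint and fields)."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",   # assumed, undocumented endpoint
        params={
            "content": ",".join(phrases),          # phrases compared in one query
            "year_start": year_start,
            "year_end": year_end,
            "corpus": corpus,                      # assumed corpus label
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Each result entry is assumed to carry an 'ngram' label and a 'timeseries'
    # list of yearly relative frequencies.
    return {entry["ngram"]: entry["timeseries"] for entry in resp.json()}

series = ngram_frequencies(["technological fix", "technical fix"])
for phrase, values in series.items():
    print(phrase, "peak relative frequency:", max(values))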

Such data suggest rising precision in writing about technology as a generic solution-provider, particularly after the Second World War. But while the modern popularization and consolidation of the more specific notion of the “technological fix” can be traced substantially to the writings of Alvin Weinberg, the idea was promoted earlier in more radical form.

In 'Automating Learning Situations in EdTech: Techno-Commercial Logic of Assetisation' by Morten Hansen and Janja Komljenovic in (2023) 5 Postdigital Science and Education 100–116 the authors comment 

 Critical scholarship has already shown how automation processes may be problematic, for example, by reproducing social inequalities instead of removing them or requiring intense labour from education institutions’ staff instead of easing the workload. Despite these critiques, automated interventions in education are expanding fast and often with limited scrutiny of the technological and commercial specificities of such processes. We build on existing debates by asking: does automation of learning situations contribute to assetisation processes in EdTech, and if so, how? Drawing on document analysis and interviews with EdTech companies’ employees, we argue that automated interventions make assetisation possible. We trace their techno-commercial logic by analysing how learning situations are made tangible by constructing digital objects, and how they are automated through specific computational interventions. We identify three assetisation processes: First, the alienation of digital objects from students and staff deepens the companies’ control of digital services offering automated learning interventions. Second, engagement fetishism—i.e., treating engagement as both the goal and means of automated learning situations—valorises particular forms of automation. And finally, techno-deterministic beliefs drive investment and policy into identified forms of automation, making higher education and EdTech constituents act ‘as if’ the automation of learning is feasible. 

 Education technology (EdTech) companies are breathing new life into an old idea: education progress through automation (Watters 2021). EdTech companies are interested in portraying these processes as complex and bringing significant value to the learner and her educational institution, even when actual practices do not always reflect such imaginaries (Selwyn 2022). For example, EdTech companies may claim that artificial intelligence (AI) is a key part of their product, when in fact, actual computations are much simpler. It is therefore vital to disentangle EdTech companies’ imagined and actual automation practices. 

We propose the concept of ‘automated learning situations’ to disentangle automation imaginaries from actual practice. ‘Learning situations’ are the relationships between students, teachers, and learning artefacts in educational contexts. ‘Automated’ learning situations refer to automated interventions in one or more of these relationships. In practice, EdTech companies automate learning situations by capturing student actions on digital platforms, such as clicks, which they then use for computational intervention. For example, an EdTech platform may programmatically capture how a student engages with digital texts before computing various engagement scores or ‘nudges’ in order to affect her future behaviour. 
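
To make the mechanics concrete, here is a deliberately simple sketch of the kind of computation the authors describe: captured click events aggregated into an engagement score that triggers a 'nudge'. The event types, weights, and threshold are hypothetical, invented purely for illustration, and do not represent any particular platform's actual logic.

from collections import Counter
from dataclasses import dataclass

@dataclass
class ClickEvent:
    student_id: str
    action: str       # hypothetical event types, e.g. "open_text", "highlight", "quiz_attempt"
    duration_s: float

# Hypothetical weights: some captured actions count more toward "engagement" than others.
ACTION_WEIGHTS = {"open_text": 1.0, "scroll": 0.2, "highlight": 2.0, "quiz_attempt": 3.0}

def engagement_scores(events):
    """Aggregate a per-student engagement score from weighted click events."""
    scores = Counter()
    for e in events:
        scores[e.student_id] += ACTION_WEIGHTS.get(e.action, 0.5) * max(e.duration_s, 1.0)
    return dict(scores)

def nudge_candidates(scores, threshold=10.0):
    """Students falling below an (arbitrary) threshold are flagged for an automated 'nudge'."""
    return [student for student, score in scores.items() if score < threshold]

events = [
    ClickEvent("s1", "open_text", 30.0),
    ClickEvent("s1", "highlight", 5.0),
    ClickEvent("s2", "scroll", 2.0),
]
print(nudge_candidates(engagement_scores(events)))   # prints ['s2']

However simple or complex the real computation, the captured events and the derived scores remain under the provider's control rather than the student's, which is precisely the alienation of digital objects the authors go on to analyse.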

It is useful to conceptualise such automation as techno-material relations mapped along two dimensions: digital objects and computing approaches. While current literature on EdTech platforms has already uncovered how platformisation reconfigures pedagogical autonomy, educational governance, infrastructural control, multisided markets, and much more (e.g. Kerssens and Van Dijck 2022; Napier and Orrick 2022; Nichols and Garcia 2022; Williamson et al. 2022), the two dimensions bring more conceptual clarity to the technological possibilities and limitations of actually existing automation practices. Furthermore, they allow us to unpack techno-commercial relationships between emergent automation and assetisation processes. 

EdTech is embedded in the broader digital economy, which is increasingly rentier (Christophers 2020). This means that there is a move from creating value via production and selling commodities in the market, to extracting value through the control of access to assets (Mazzucato 2019). Assetisation is the process of turning things into assets (Muniesa et al. 2017). Depending on the situation, different things and processes can be assetised in different ways (Birch and Muniesa 2020). This includes taking products and services previously treated as commodities—something that can be owned through purchase and consequently fully controlled—and transforming them into something that can only be accessed through payment without change in ownership (Christophers 2020). A useful example is accessing textbooks in a digital form by paying a subscription to a provider such as Pearson +, instead of purchasing and owning physical book copies. Assetising a medium of delivery changes the implications for the user. For example, when customers buy a book, they own the material object but not the intellectual property (IP) rights. With the ownership of the book itself, i.e., the physical object, comes a measure of control: they can read the textbook as many times and whenever they want, write in the book, highlight passages, sell it to someone else, use it for some other purpose entirely, or even destroy it. On the contrary, paying a fee for accessing the electronic book via a platform transforms how users can engage with the content because the platform owner holds the control and follow-through rights (cf. Birch 2018): they decide when books are added and removed, what users can do with the book and for how long, and—crucially—what happens to associated user data. Generating revenue from a thing while maintaining ownership, control, and follow-through rights is an indication that this thing has been turned into an asset for its owner. We, therefore, ask: does the automation of learning situations contribute to assetisation processes in EdTech, and if so, how? 

In what follows, we first present our conceptual and methodological approach. We then unpack the digital objects used to construct learning situations. Next, we discuss how interventions are automated differently depending on computing temporalities and complexities. We conclude by discussing three assetisation processes identified in the automation of learning situations: the alienation of digital objects from students and staff, the fetishisation of engagement, and techno-deterministic beliefs leading to acting ‘as if’ automation is feasible.

07 February 2023

AI, Regulation and Trust

Orly Lobel's 'The Law of AI for Good' (San Diego Legal Studies Paper No. 23-001) comments 

Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies. 

A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design.

'Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk' by Johann Laux, Sandra Wachter and Brent Mittelstadt in (2023) Regulation and Governance comments 

The global race to establish technological leadership in artificial intelligence (AI) is escorted by an effort to develop “trustworthy AI.” Numerous policy frameworks and regulatory proposals make principled suggestions as to which features render AI “trustworthy” [Cf. the overviews in Lucia Vesnic-Alujevic et al., 2020 and Thiebes et al., 2021]. Private companies such as auditing firms are offering their clients support in designing and deploying “trustworthy AI” (Mökander & Floridi, 2021). The emphasis on trustworthiness serves a strategic purpose: induce people to place trust in AI so that they will use it more and, hence, unlock the technology's economic and social potential. 

This strategy is not unfounded. Trust cannot be created on command. Signaling trustworthiness is thus the most promising option for regulators and technologists who seek to create the initial trust needed for a broader uptake of AI (Drake et al., 2021; O'Neill, 2012). Success, however, is not guaranteed. Even allegedly trustworthy persons, institutions, and technologies might not be trusted after all. For example, populations which have historically faced discrimination may reasonably distrust broadly accepted signals of trustworthiness (Scheman, 2020). 

As part of the global trustworthiness effort, the European Commission recently proposed a legal framework for trustworthy AI, the “AI Act” (European Commission, 2021b). The AI Act explicitly pursues the dual purpose of promoting the uptake of the technology and addressing the risks associated with its use (AI Act, Recital 81 and p. 1). At the time of writing, the proposal is being discussed by the Council of the European Union and the European Parliament, both of which must agree on a common text before the AI Act can pass into law. 

As this article will show, in its proposal the Commission chose to understand “trustworthiness” narrowly in terms of the “acceptability” of AI's risks, with the latter being primarily assessed through conformity assessments carried out by technology experts (see Section 2.1). This regulatory conflation of trustworthiness with the acceptability of risks invites further reflection. 

Based on a systematic narrative literature review on trust research, this article argues that the European Union (EU) is overselling its regulatory ambition and oversimplifying a highly complex and heterogeneous set of closely related concepts. First, while there is an inherent relationship between trust, trustworthiness, and the perceived acceptability of risks (Poortinga & Pidgeon, 2005), the AI Act will itself require citizens' trust to succeed in promoting the uptake of AI. Second, the concept of trustworthiness serves an important normative function. It allows us to assess whether people's actual levels of trust are normatively “justified” (Cf. Lee, 2022) or “well-placed.” This justification depends on whether their degree of trust in something matches its degree of trustworthiness. A person's trust can be “blind” or misplaced; so too can their mistrust. There is a rich philosophical debate as to whether AI even has the capacity to be a genuine object of trust. Its lack of human qualities such as intentionality could prohibit such attributions. AI may then be merely reliable, but not trustable [Miller & Freiman, 2020; for the debate, see further Rieder et al. (2021), Weydner-Volkmann and Feiten (2021), Ryan (2020), Grodzinsky et al. (2020), Nickel et al. (2010), and Taddeo (2009)]. 

Conflating trust and trustworthiness with the acceptability of risks blurs the distinction between acceptability judgments made by domain experts and the trustworthiness of AI systems implemented in society. Others have previously criticized the AI Act for outsourcing decisions about which risks are “acceptable” to AI providers with an economic interest in marketing the AI system (Smuha et al., 2021). Rather than providing a seal of approval, we argue that trustworthiness is a longitudinal concept that necessitates an iterative process of controls, communication, and accountability to establish and maintain its existence across both AI technologies and the institutions using them. The AI Act suggests an unfounded bright-line distinction between acceptable and unacceptable risks and hence trustworthy and non-trustworthy AI. This approach is incompatible with the conceptualization of trustworthiness as a longitudinal process as opposed to a binary characteristic of systems and the risks they pose. This article therefore aims to provide an intervention into the EU's policy effort to develop “trustworthy AI” by risk regulation based on a review of the multi-disciplinary literature on trust. Instead of working out a coherent theory of trust, it aims to demonstrate the conceptual futility of labeling a complex AI system “trustworthy” prior to placing it on the market. 

We limit our analysis to the use of AI in public institutions. The potential of AI for the public sector is rapidly gaining interest (Gesk & Leyer, 2022; see also de Sousa et al., 2019). AI systems have already been introduced in public institutions (Desouza et al., 2017), with promises of higher quality services and increased efficiency (Sun & Medaglia, 2019). At the same time, AI's characteristics have led to considerable debate about whether and how the public sector should deploy the technology (Green, 2022). Many AI systems “reason by association”: they detect statistical patterns in data but do not offer causal explanations (Bishop, 2021). In addition, an AI system might include so many parameters that its outcome is opaque, resembling a “black box.” There is too much information to interpret its outcome clearly (Dignum, 2019). These features arguably set AI systems apart from other digital technologies already in use by public institutions. 

Through the proposed AI Act and other instruments, the European Commission nevertheless seeks to “make the public sector a trailblazer for using AI” (European Commission, 2021a). Its 2020 “White Paper” on AI (European Commission, 2020) holds it “essential” that the public sector, especially in healthcare and transport, begins to “rapidly” deploy products and services that rely on AI (White Paper, p. 8). The European Commission also supports the uptake of AI in the domain of justice (European Commission, 2018). 

While making AI trustworthy has garnered substantial political momentum, equal attention needs to be paid to AI's potential to erode the trustworthiness of public institutions and, with it, their own ability to produce trust in the population (Bodó, 2021). Without trust, the public sector risks losing support and compliance by citizens. 

Some publicly documented uses of automated decision-systems have led to widespread criticism and the cessation of operations. Consider, for example, the algorithmic prediction of social welfare fraud in marginalized neighborhoods in the Netherlands or the algorithmic profiling of families for early detection of vulnerable children in Denmark (Kayser-Bril, 2020; Vervloesem, 2020). AI in the public sector can quickly become politicized, not least because of the public sector's dual role. It is at the same time drawn to using AI to increase its efficiency and under an obligation to protect citizens from harm caused by AI (Kuziemski & Misuraca, 2020). 

Citizens' concerns about AI in the public sector have likewise been identified as one of the major obstacles to broader implementation (Gesk & Leyer, 2022, pp. 1–2). However, while the use of (non-AI-based) information and communication technology in the public sector has been widely researched—often under the rubric of “eGovernment”—the use of AI in the public sector and its acceptance by citizens is still understudied [Gesk & Leyer, 2022; drawing on Sun and Medaglia (2019); Wang and Liao (2008)]. At the same time, insights gained from the private sector cannot easily be transferred to the public sector, not least because the latter's target is not to maximize profits generated from customers [See the references in Gesk and Leyer (2022, p. 1)]. Moreover, public services' adoption of AI further differs from the private sector as it can have a coercive element. Citizens will often have no choice but to use and pay for the services (through taxes or insurance premiums) whether or not they prefer an AI system to be involved (Aoki, 2021). As the coercive power of public authority requires justification (Simmons, 1999), AI in the public sector thus also raises questions of legitimacy. 

Politicization can add further justificatory pressure. Trust researchers consider how in highly politicized contexts of AI implementation, conflicts about what constitutes a “right” or “fair” decision are likely to erupt (de Bruijn et al., 2022; drawing on Bannister & Connolly, 2011b). The stakes of implementing AI in public services are thus high, invoking the foundational concepts of trust in and legitimacy of public authority. 

This article proceeds as follows. Section 2 begins with a trust-theoretical reconstruction of the conflation of “trustworthiness” with the “acceptability of risks” in the EU's AI policy. We then turn to our review of the literature on trust in AI implemented within public institutions. One simple definition of “trust” is the willingness of one party to expose themselves to a position of vulnerability towards a second party under conditions of risk and uncertainty as regards the intentions of that second party (similarly, Bannister & Connolly, 2011b, p. 139). However, the term “trust” has found multiple definitions within and across social science disciplines, so much so that the state of defining trust has been labeled as one of “conceptual confusion” (McKnight & Chervany, 2001). This makes comparing and evaluating trust research across disciplines (and sometimes even within one discipline) extremely difficult. 

Section 3, therefore, develops a prescriptive set of variables for reviewing trust-research in the context of AI. We differentiate between normative and empirical research as well as between subject, objects, and roles of trust. Section 4 then uses those variables as a structure for a narrative review of prior research on trust and trustworthiness in AI in the public sector. We identify common themes in the reviewed literature and reflect on the heterogeneity of the field and, thus, the many ways in which trust in AI can be defined, measured, incentivized, and governed. 

This article concludes in Sections 5 and 6 by relating the findings of the literature review to the EU's AI policy and especially its proposed AI Act. It notes the uncertain prospects of the AI Act successfully engineering citizens' trust. There remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI. The conflation of “trustworthiness” with the “acceptability of risks” in the AI Act will thus be shown to be inadequate.

Privacy

'Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy' by María P Angel and Ryan Calo comments 

What distinguishes violations of privacy from other harms? This has proven a surprisingly difficult question to answer. For over a century, privacy law scholars labored to define the elusive concept of privacy. Then they gave up. Efforts at distinguishing privacy came to be superseded at the turn of the millennium by a new approach: a taxonomy of privacy problems grounded in social recognition. Privacy law became the field that simply studies whatever courts or scholars talk about as related to privacy. 

And it worked. Decades into privacy as social taxonomy, the field has expanded to encompass a broad range of information-based harms—from consumer manipulation to algorithmic bias—generating many, rich insights. Yet the approach has come at a cost. This article diagnoses the pathologies of a field that has abandoned defining its core subject matter, and offers a research agenda for privacy in the aftermath of social recognition. 

This critique is overdue: it is past time to think anew about exactly what work the concept of privacy is doing in a complex information environment, and why a given societal problem—from discrimination to misinformation—is worthy of study under a privacy framework. Only then can privacy scholars articulate what we are expert in and participate meaningfully in global policy discussions about how to govern information-based harms.

Theory

'How to do things with legal theory' by Coel Kirkby in (2022) 18(4) International Journal of Law in Context 373-382 comments 

Legal theory must not merely describe our world; it must also assist us in acting in it. In this paper, I argue that teaching legal theory should show law students how to do things with legal theory. My pedagogical approach is contextual and historical. Students learn how to use theory by seeing how past jurists acted in their particular worlds by changing dominant concepts of law. Most introductory legal theory courses are organised by what I will call the usual story of jurisprudence. In this story, great thinkers in rival schools of legal thought attempt to answer perennial questions about the nature of (the concept of) law. In this story, the thick context of our world recedes beyond the horizon of theory. I argue that critical genealogy can let us critique this usual story and its unspoken assumptions of morality, politics and history. Amia Srinivasan's account of ‘worldmaking’ is especially compelling in its emphasis on critical genealogies’ capacity to transform our representational practices (and thus open up new possibilities for action). Critical genealogy also has certain pedagogical ‘uses and advantages’ for teaching legal theory in law schools. Here, context is method. The teacher must defend their political choices of context – choices that are naturalised and so beyond critique in the usual story of jurisprudence. By making these choices explicit, students are invited to both challenge the teacher's choices of context and critique their own common law education. This pedagogical approach also encourages students to experiment in ‘worldmaking’ themselves, and so cultivate a creative capacity to use legal theory to change the world through transforming their representations of it. 

Legal theory must not merely describe our world; it must also assist us in acting in it. In this paper, I will argue that legal pedagogy should teach law students how to do things with legal theory. My concern is with action rather than description. Most introductory legal theory courses are organised by what I will call the usual story of jurisprudence. In this story, great thinkers in rival schools of legal thought attempt to answer perennial questions about the nature of law. In this story, the thick context of our world recedes beyond the horizon of theory. I argue that critical genealogy can let us critique this usual story and its unspoken assumptions of morality, politics and history. Amia Srinivasan provides an especially compelling account of critical genealogy as ‘worldmaking’. Her reading emphasises its capacity to transform our representational practices (and thus open up new possibilities for action) rather than its potential for epistemic shock. 

Critical genealogy also has certain pedagogical uses and advantages for teaching legal theory in law schools. Here, context is method. Teachers must defend their political choices of context – choices that are naturalised and so beyond critique in the usual story of jurisprudence. By making these choices explicit, students are invited to both challenge the teacher's choices of context and critique their own common law education. Since the choices of context are an open-ended multiplicity, this pedagogical approach also encourages students to practise ‘worldmaking’ themselves through reflective and research essays that start with critical self-reflection on their own particular standpoint and concerns in the present. Ideally, teaching jurisprudence through critical genealogy is a collaborative experiment in which the politics of context helps cultivate a creative capacity to change the world through transforming our representations of it.