Governing through technology has proven irresistibly seductive. Everything from the Internet backbone to consumer devices employs technological design to regulate behavior purposefully by promoting values such as privacy, security, intellectual property protection, innovation, and freedom of expression. Legal and policy scholarship has discussed individual skirmishes over the political impact of technical choices—from whether intelligence and police agencies can gain access to privately encrypted data to debates over digital rights management. But it has failed to come to terms with the reality that “governance-by-design”—the purposeful effort to use technology to embed values—is becoming a central mode of policymaking, and that our existing regulatory system is fundamentally ill-equipped to prevent that phenomenon from subverting public governance.
Far from being a panacea, governance-by-design has undermined important governance norms and chipped away at our voting, speech, privacy, and equality rights. In administrative agencies, courts, Congress, and international policy bodies, public discussions about embedding values in design arise in a one-off, haphazard way, if at all. Constrained by their structural limitations, these traditional venues rarely explore the full range of values that design might affect, and often advance a single value, or occasionally pit one value against another. They seldom permit a meta-discussion about when and whether it is appropriate to enlist technology in the service of values at all. And their policy discussions almost never include designers, engineers, and those who study the impact of socio-technical systems on values.
When technology is designed to regulate without such discussions—as it often is—the effects can be even more insidious. The resulting technology often hides government and corporate aims and the fundamental political decisions that have been made. In this way, governance-by-design obscures policy choices altogether. Such choices recede from the political as they become what “is” rather than what politics has determined ought to be.
This Article proposes a detailed framework for saving governance-by-design.
Through four case studies, the Article examines a range of recent battles over the values embedded in technology design and makes the case that we are entering an era of policymaking by “design war.” These four battles, in turn, highlight four recurring dysfunctions of governance-by-design:
First, governance-by-design overreaches by using overbroad technological fixes that lack the flexibility to balance equities and adapt to changing circumstances. Errors and unintended consequences result.
Second, governance-by-design often privileges one or a few values while excluding other important ones, particularly broad human rights.
Third, regulators lack the proper tools for governance-by-design. Administrative agencies, legislatures, and courts often lack technical expertise and have traditional structures and accountability mechanisms that poorly fit the job of regulating technology.
Fourth, governance-by-design decisions that broadly affect the public are often made in private venues or in processes that make technological choices appear inevitable and apolitical.
If we fail to develop new rules of engagement for governance-by-design, substantial and consequential policy choices will be made without effective public participation, purposeful debate, and relevant expertise. Important values will be sacrificed—sometimes inadvertently, because of bad decisions, and sometimes willfully, because decisions will be captured by powerful stakeholders.
To address these critical issues, this Article proposes four rules of engagement. It constructs a framework to help decision makers protect values and democratic processes as they consider regulating by technology. Informed by the examination of skirmishes across the battlefields, as well as relevant Science and Technology Studies (STS), legal, design, and engineering literatures, this framework embraces four overarching imperatives:
1. Design with Modesty and Restraint to Preserve Flexibility
2. Privilege Human and Public Rights
3. Ensure Regulators Possess the Right Tools: Broad Authority and Competence, and Technical Expertise
4. Maintain the Publicness of Policymaking
These rules of engagement offer a way toward surfacing and resolving value disputes in technological design, while preserving rather than subverting public governance and public values.

'The limits of (digital) constitutionalism: Exploring the privacy-security (im)balance in Australia' by Monique Mann, Angela Daly, Michael Wilson and Nicolas Suzor in (2018) 80(4) International Communication Gazette 369 comments
This article explores the challenges of digital constitutionalism in practice through a case study examining how concepts of privacy and security have been framed and contested in Australian cyber security and telecommunications policy-making over the last decade. The Australian Government has formally committed to ‘internet freedom’ norms, including privacy, through membership of the Freedom Online Coalition (FOC). Importantly, however, this commitment is non-binding and designed primarily to guide the development of policy by legislators and the executive government. Through this analysis, we seek to understand whether, and how, principles of digital constitutionalism have been incorporated at the national level. Our analysis suggests a fundamental challenge for the project of digital constitutionalism in developing and implementing principles that have practical or legally binding impact on domestic telecommunications and cyber security policy. Australia is the only major Western liberal democracy without comprehensive constitutional human rights or a legislated bill of rights at the federal level; this means that the task of ‘balancing’ what are conceived as competing rights is left only to the legislature. Our analysis shows that, despite high-level commitments to privacy as per the Freedom Online Coalition, individual rights are routinely discounted against collective rights to security. We conclude by arguing that, at least in Australia, the domestic conditions limit the practical application and enforcement of digital constitutionalism’s norms.