16 June 2018

Self-driving Vehicles and US Employment

'America’s workforce and the self-driving future: Realizing Productivity Gains and Spurring Economic Growth' from Securing America's Future Energy (SAFE) comments
In the last several years, the development and adoption of autonomous vehicles (AVs) have emerged as a central policy subject, both in the United States and across the world. The vision of a future where vehicles drive themselves has captured the imagination of the public, promising significant improvements in roadway safety, economic productivity, and accessibility, along with reduced fuel consumption and accompanying emissions.
At the same time, some have expressed concern about the long-term impacts of the technology, most intensely with regard to its potentially far-reaching impacts on the U.S. labor force. The individual identities of Americans are often intertwined both with the vehicles they drive and with their occupations. The potential for significant change on both fronts in the years and decades to come is, understandably, an unsettling prospect for some. To ensure that policy decisions are made on the basis of solid evidence, SAFE engaged us to answer a series of questions that cut to the core of these issues. The questions were:
1. What precedents should we look to in thinking about the impacts AVs will have on society and the economy?
2. What are some concrete examples that illustrate the nature and magnitude of the economic and social benefits that AVs can offer?
3. What will be the medium- to long-term impacts of vehicle automation on the workforce? Upon what will the scale and timing of those impacts depend? What steps can be taken today to ensure the best outcome for both the public that stands to gain from AVs and the workers whose jobs could be impacted?
These questions were selected because of the importance of improving the social impact of the technology, the potential for impacts on the labor force, and the importance of these considerations to policymakers in weighing AV regulation. A deeper knowledge of the broader economic impacts of AVs will help to encourage constructive choices in a resource-constrained world. 
Over the last six months, we divided these questions amongst this group, with a report dedicated to each question. We performed independent and rigorous research utilizing well-accepted methods of economic analysis that culminated in three reports—referred to in this brief as the Compass Transportation report (focused on the question of precedents), the Montgomery AV benefits report (focused on the benefits of AVs) and the Groshen employment report (focused on the employment impacts)—that each addressed one of the questions posed above.
The authors state
Although they are not yet in widespread commercial use, there is intense public interest in autonomous vehicles (AVs). Much of the focus has been on the broad societal benefits this technology can offer. AVs also have the potential to influence society in a way unseen since the invention of the automobile. In addition to dramatically reducing traffic accidents and roadway fatalities, AVs hold the promise of improved mobility—critical for economic growth and quality of life. AVs can dramatically improve the lives of communities underserved by our current transportation system and those most vulnerable to its inefficiencies, namely Americans with disabilities, seniors, and wounded veterans. 
However, some have raised concerns about the potential for AVs to negatively impact workers and exacerbate wealth inequality. SAFE believes that AV-related labor displacement concerns—many of which have been expressed sensationally—must be addressed seriously rather than merely dismissed out of hand or repeated without verification. In response to these concerns, SAFE commissioned a panel of highly regarded transportation and labor economists to conduct a fact-based and rigorous assessment of the economic costs and benefits of AVs, including labor impacts.
The commissioned research painted a detailed outlook for the future economic and labor-market impacts of AVs. The researchers found:
• AVs have many of the characteristics of “catalyzing innovations” whose positive impacts are felt broadly throughout the economy. 
• The widespread adoption of AVs could lead to nearly $800 billion in annual social and economic benefits by 2050, mostly from reducing the toll of vehicle crashes, but also from giving productive time back to commuters, improving energy security by reducing dependence on oil, and providing environmental benefits. 
• A study of traffic patterns and job locations found that some economically depressed regions could see improved access to large job markets for their residents through the deployment of AVs. 
• AVs will create new jobs that will, in time, replace jobs eliminated by automation. Strong workforce development infrastructure can both mitigate employment disruption and speed the evolution of worker skill requirements that will contribute to full employment and economic growth. 
• There is significant time before the impacts of AVs on employment are fully realized. Simulations of the impact of AVs on employment showed a range of impacts that would be felt starting in the early 2030s but would only increase the national unemployment rate by 0.06–0.13 percentage points at peak impact sometime between 2045 and 2050 before a return to full employment. 
• The economic and societal benefits offered by AVs in a single year of widespread deployment will dwarf the cost to workers incurred over the entire multidecadal deployment of AVs when measured in purely economic terms. The benefits of AVs are sufficiently large to enable investment of adequate resources in assisting impacted workers. 
• By pursuing a rapid deployment of AVs, combined with investments in workforce policies that seek to mitigate costs to workers and policies that address other risks or costs that might emerge alongside greater AV adoption, the United States can enjoy the full benefits of AVs as soon as possible while simultaneously preparing the workforce for the jobs of the future. 
Economic and Societal Impact
Many of the most compelling benefits of autonomous vehicle technology will be intangible or undetectable from modeling designed to capture incremental gains. Any economic estimates of these benefits should be understood as an attempt to capture just a portion of gains from AVs. This conservative microeconomic analysis estimates economic benefits of up to $800 billion per year with full deployment of AVs. Utilizing the projections for AV deployment that SAFE developed, the value of AV benefits through 2050 will likely be between $3.2 trillion and $6.3 trillion. This is a partial estimate looking at a narrow set of case studies—a full estimate would likely be significantly higher.
A projection of the annual consumer and societal benefits of AVs is in Figure A. The breakdown of these benefits (upon full adoption) is in Table A. 
Accident Reduction: In 2010, the National Highway Traffic Safety Administration (NHTSA) estimated the economic costs of car crashes to be $242 billion per year. When quality-of-life costs are added into the estimate, the total value of societal harm was approximately $836 billion per year. Extrapolating these values based on more recent crash and driving data puts the annual societal cost of crashes at over $1 trillion today. Using a conservative methodology in which we assume AVs would only address crashes resulting from gross driver error (e.g. distraction, alcohol, and speeding), the annual benefit would exceed $500 billion. Given that human error contributes to over 94 percent of accidents, benefits could exceed this amount. 
Reduce Oil Consumption: Oil holds a virtual monopoly on vehicle fuels, with petroleum accounting for 92 percent of the fuel used to power the U.S. transportation system. By precipitating a shift away from petroleum as the dominant fuel source, AVs can substantially reduce America’s reliance on oil. An analysis of the energy security and environmental benefits of increased EV uptake as a result of AV deployment supports an estimated $58 billion societal benefit. 
Congestion: Crashes are a major source of road congestion, and improved safety from AVs and better throughput (e.g. through reduced bottlenecks) could significantly reduce the current costs of congestion: close to 7 billion hours are lost in traffic and over 3 billion gallons of fuel are wasted every year. 
Improved Access to Retail and Jobs: SAFE modeling of road speeds around specific retail establishments found that the increased willingness of shoppers to travel—even by just two minutes each way—could increase a mall’s customer base by nearly 50 percent in some instances. Additionally, SAFE modeling identified numerous economically disadvantaged localities for which better transportation options would lead to greater employment opportunities. For a group of four struggling cities (Gary, IN, Benton Harbor, MI, Elmira, NY, and Wilmington, DE), SAFE modeled how increased traffic speeds from AV adoption and greater willingness to travel could impact the number of jobs within reach. An illustrative example is in Figure B.
The Effect of AVs on the U.S. Labor Force
From the automobile to the internet, history has demonstrated time and again that new technologies lead to sizable economic and social benefits in the long run. However, with significant change always comes the specter of potential loss, particularly in the short term. As with many new technologies before it, public discourse around AVs has focused heavily on potential downsides, often with considerable exaggeration. Those potential losses must be balanced against the benefits from highly significant improvements in safety, reductions in vehicle crash fatalities, gains in productivity, reduced congestion and increased fuel efficiency that will result from AV deployment. Indeed, the benefits are sufficiently large to enable investment of adequate resources in assisting those affected.
A study of historical precedents for the impacts of new technologies found a common pattern: Adoption of new technologies improves productivity and increases quality of life. Widely adopted technologies can transform our way of life and improve economic well-being at a national scale. Often, technological progress leads to improved opportunities for workers in the short term; a recent study found that the rise of e-commerce has, on net, improved jobs for high school graduates.1 However, the impacts of those technologies can also present temporary challenges for the workforce, both for employers needing skilled workers, and for workers whose skills may no longer be as competitive in the labor market.
In the absence of concrete estimates, the media and public have a tendency to concentrate on the worst possible outcome. A recent report claimed that “more than four million jobs will likely be lost with a rapid transition to autonomous vehicles.”2 The methodology used to develop this number was simply to count driving jobs in the United States and assume that they would be rapidly lost as AVs deploy. Such assumptions and conclusions lack context, nuance, or grounding in labor market dynamics and the natural cycle of labor force evolution. Using the scenarios SAFE provided for the adoption of AVs, the Groshen employment report modeled the technology’s impact on the workforce. The study concluded that AVs would not lead to the long-term loss of jobs, although some number of workers could experience unemployment and wage losses. As there are far more professionally employed truck drivers than professionally employed car drivers, impacts would be tied more closely to the adoption of very high automation in trucks (defined as no driver “in the loop” for most of operation). In contrast, partial automation or teleoperation of trucks is not likely to have significant negative impacts on the workforce.
Figure C and Table A contextualize the job loss within a broadly understood metric—the unemployment rate. Relative to a baseline of full employment, the advent of AVs is projected to increase the unemployment rate to a small degree in the 2030s and to a somewhat larger degree in the late 2040s, with a peak, temporary addition to unemployment rates of 0.06–0.13 percentage points. Table A contextualizes the size of this employment impact against the shock of the recent Great Recession and a previous mild recession. Policy steps to address the evolution of the labor market must ultimately be placed in the context of the broader impacts of AVs in order to ensure the best outcome. Given the large-scale societal benefits from the deployment of AVs, policies to address labor force issues must carefully consider their potential impact in delaying the deployment, and thus the benefits, of AVs. 
Delaying the deployment of AVs would represent a significant and deliberate injury to public welfare. Rather than delaying the benefits, policymakers could ensure that the interests of the people who may lose jobs are well protected through effective mitigation programs. Figure D illustrates the importance of balancing these two priorities. It plots both the conservative projected AV benefits and the range of projected wages that will be lost to individual workers due to AV-related unemployment. The range of projected wage loss reaches as high as $18 billion in 2044 and 2045. However, it is essential to note that this goes hand-in-hand with projected social benefits well in excess of $700 billion for each of those years. In fact, not only are the social and economic benefits of AV deployment significantly more than their costs to workers on an annual basis, but the benefits of AVs each year are far greater than the total cost to workers over the next 35 years combined (illustrated by the middle range of this graph).

15 June 2018

DNS Scams

The Federal Court has ordered that Domain Corp Pty Ltd and Domain Name Agency Pty Ltd (also trading as Domain Name Register) pay combined penalties of $1.95 million for breaching the Australian Consumer Law.

The ACCC states
 From November 2015 to at least April 2017, the two Domain Companies sent out approximately 300,000 unsolicited notices to businesses, which looked like a renewal invoice for the business’s existing domain name. Instead, these notices were for the registration of a new domain name at a cost ranging from $249 to $275. 
The Court declared that the Domain Companies made false and misleading representations and engaged in misleading and deceptive conduct in sending these notices. Australian businesses and organisations paid approximately $2.3 million to the Domain Companies as a result of receiving the notices. 
“The Domain Companies misled businesses into thinking they were renewing payment for the business’ existing domain name, when in fact the business was paying for a new domain name,” ACCC Acting Chair Delia Rickard said. Any business or consumer receiving a renewal notice for a ‘.com’ or ‘.net.au’ domain name should check that the notice is to renew their proper domain name. 
“These sham operations target small businesses, capitalising on a lack of understanding of the domain name system or a busy office environment. We encourage businesses to be vigilant when paying invoices, especially if it is for a domain name registration service,” Ms Rickard said. 
The Court also declared that the sole director of both Domain Companies, Mr Steven Bell (also known as Steven Jon Oehlers), was knowingly concerned in, and a party to, the conduct. 
The Court made other orders by consent, including injunctions for three years against each of the Domain Companies and for five years against Mr Bell. These injunctions include a requirement that if any of the parties decide to send out further notices, each notice has to prominently include the words, “This notice does not relate to the registration of your current domain name. This is not a bill. You are not required to pay any money”. 
The Court also made an order disqualifying Mr Bell from managing a corporation for five years and ordered him to pay costs to the ACCC, fixed at $8,000.

Disclosure and Traditional Culture

'Illegal Designs? Enhancing Cultural and Genetic Resource Protection Through Design Law' (Emory Legal Studies Research Paper, 2018) by Margo A. Bagley comments
Just a decade ago, a requirement that a designer disclose the origin of traditional cultural expressions, traditional knowledge, and biological or genetic resources used in creating a design in an industrial design application was virtually unheard of in national or regional protection systems. But as awareness of the many ways in which cultural and genetic resource use and misappropriation can occur is evolving, some developing countries have begun exploring the appropriateness of—and in some cases even instituting—such a requirement. 
These developments have taken center stage in the negotiations of a draft Design Law Treaty (DLT) in the World Intellectual Property Organization Standing Committee on the Law of Trademarks, Industrial Designs, and Geographical Indications, which is expected to make it easier for applicants to obtain design protection globally by limiting domestic design registration requirements. Currently, a controversy exists over an African Group proposal to allow policy space in the draft DLT for countries to be able to require design applicants to disclose the origin of traditional cultural expressions, traditional knowledge, and biological or genetic resources used in creating protectable designs. 
This paper focuses on that controversy. It highlights possible justifications countries may have for desiring the flexibility to impose disclosure requirements on design protection applicants; and opines on the broader ramifications of the dispute for policy coherence and mutual supportiveness goals in relation to cultural and genetic resource protection issues.

FDA regulation and the CRISPR Zoo

‘A Menagerie of Moral Hazards: Regulating Genetically Modified Animals’ by Sarah Polcz and Anna C.F. Lewis in (2018) 40 The Journal of Law, Medicine and Ethics 180-184 comments
Dairy cattle naturally grow long and dangerous horns. So as a protective measure, farmers permanently remove calves’ small horns through a painful procedure. Recently, scientists have used modern genetic editing techniques to create dairy cattle that never develop horns, and so never need to be “dehorned.” The regulatory fate of these genetically dehorned cattle may be bound up with numerous more controversial cases from the same rapidly diversifying field: the genetic editing of animals. Or, at least, so it could be under draft Food and Drug Administration (FDA) guidance which closed for public commentary in June 2017. The FDA’s challenge is to chart a flexible regulatory course, one which will support the potential of gene-editing technology while staking out the boundaries of acceptable risks — i.e. the ethical boundaries — of the looming “CRISPR zoo.” Here we review the background to the draft guidance, as well as the scathing comments it received from disparate interest groups. The comments show that to foster public trust in how gene-editing technologies are used, it is imperative that the FDA re-engage with all stakeholders, including the public at large. 
In its draft guidance, the FDA proposed to regulate “intentionally altered genomic DNA” of animals as a drug being evaluated for use in animals. The original altered animal and all its progeny would be subject to the animal drug regulations. The FDA invited comments on these proposed amendments. As we discuss here, a mere handful of the 151 comments received were supportive,4 and most were extremely critical, including those from the National Association of State Departments of Agriculture (NASDA). We argue that the FDA’s proposals are unsatisfactory and should be withdrawn.
The authors conclude
We agree that the product should be the subject of regulation, and not the process that created it. A focus on product would force the FDA to clarify its regulatory intent, increasing transparency not just for product developers, but also for the public. It would elevate the importance of understanding risk, a view that was shared by a broad cross section of commenters. By contrast, a focus on process bunches diverse applications under one umbrella, from curing inherited diseases to adding in genes from distant species, from food to de-extinction of animals. And it divides products that should be considered together. In contrast, the USDA excluded from its proposed regulations any genetically modified organism that could have been produced using traditional breeding techniques because “[s]uch organisms are essentially identical, despite the method of creation.” 
Moreover, history has shown that biotech regulations focusing on process have a frustratingly short shelf life. Regulation introduced in 2009 focused on process, calling out recombinant DNA. Just a few years later, new technologies arrived that are not covered by that regulation. Updating guidance with updated technologies repeats the original error. Already technology is outpacing proposals: mice have had their epigenomes successfully edited, and as these terms are currently understood they would not qualify as having “intentionally altered genomic DNA”. 
A focus on process has allowed the FDA to avoid answering tough product questions. This needs to change. The approach proposed by the NAS in its report concerning the future products of biotechnology represents, in our opinion, a sound core for a revised approach: a single cross-agency entry point for the risk-based appraisal of new products. In the context of technology designed to affect animals, an additional concern should be for animal welfare. Technology such as gene editing has huge potential not only to improve animal welfare, but also to decrease it. 
The lack of consistency across different agencies concerning genetically altered organisms is not only confusing within the U.S., it makes it almost impossible for other countries to follow the U.S.’s lead. The aim should be internal consistency within the U.S. as part of a cohesive international approach for these international issues: protection from large-scale risk while helping provide for the food security and better health of humanity. 
Finally, the FDA received a similar number and spread of comments as the USDA received in reaction to its proposals. The same arguments cited by the USDA in withdrawing its own proposals apply equally to the FDA. In both cases, the proposals clearly fail to attract even minimal support from a broad range of commenters. Moreover, as the USDA noted, the publication of the proposed rules constrained its ability to explore alternatives with stakeholders. The withdrawal opened up the opportunity for “a more open and robust policy dialogue.” Given the opposition to the proposals, it would be consistent for the FDA to likewise withdraw them as a first step towards a cohesive framework. Then we can start the much needed dialog to build a risk-based approach, both for gene-editing and for the biotechnologies to come.

14 June 2018

Lex Americana

'The Failure of Internet Freedom' by Jack Goldsmith in the Knight First Amendment Institute’s Emerging Threats series comments
From the second term of the Clinton administration to the end of the Obama administration, the U.S. government pursued an “internet freedom” agenda abroad. The phrase “internet freedom” signaled something grand and important, but its meaning has always been hard to pin down. For purposes of this paper, I will use the phrase to mean two related principles initially articulated by the Clinton administration during its stewardship of the global internet in the late 1990s.
The first principle is that “governments must adopt a non-regulatory, market-oriented approach to electronic commerce,” as President Clinton and Vice President Gore put it in 1997. Their administration opposed government taxes, customs duties and other trade barriers, telecommunications constraints, advertisement limitations, and most other forms of regulation for internet firms, communications, or transactions. The premise of this commercial non-regulation principle, as I’ll call it, was that “the Internet is a medium that has tremendous potential for promoting individual freedom and individual empowerment” and “[t]herefore, where possible, the individual should be left in control of the way in which he or she uses this medium.”  In other words, markets, individual choice, and competition should presumptively guide the development of the internet. When formal governance is needed, it should be supplied by “private, nonprofit, stakeholder-based” institutions not tied to nations or geography.
The Clinton administration acknowledged the need for traditional government regulation in narrow circumstances—most notably, and self-servingly, to protect intellectual property—but otherwise strongly disfavored it.
The second principle of internet freedom, which I’ll call the anti-censorship principle, argued for American-style freedom of speech and expression on the global internet. This principle originated as a component of the effort to promote electronic commerce. Over time, however, it developed into an independent consideration that sought to influence foreign political structures. The Clinton administration devoted less policy attention to the anti-censorship principle than to the commercial non-regulation principle because it believed that “[c]ensorship and content control are not only undesirable, but effectively impossible,” as the administration’s internet czar Ira Magaziner put it.
China’s effort “to crack down on the Internet,” Bill Clinton famously quipped in 2000, was “like trying to nail Jell-O to the wall.”
The George W. Bush administration embraced both internet freedom principles, and it took novel institutional steps to push the anti-censorship principle. In 2006, the State Department established the Global Internet Freedom Task Force (GIFT). The main aims of GIFT were to “[m]aximize freedom of expression and the free flow of information and ideas,” to “[m]inimize the success of repressive regimes in censoring and silencing legitimate debate,” and to “[p]romote access to information and ideas over the Internet.”
GIFT provided support for “unfiltered information to people living under conditions of censorship,” and it established “a $500,000 grant program for innovative proposals and cutting-edge approaches to combat Internet censorship in countries seeking to restrict basic human rights, including freedom of expression.”
In this way, the Bush administration got the U.S. government openly in the business of paying for and promoting “freedom technologies” to help break authoritarian censorship and loosen authoritarian rule across the globe.
The Obama administration continued to advocate for the commercial non-regulation principle and further expanded the United States’ commitment to the anti-censorship principle.
The landmark statement of its approach, and the most elaborate and mature expression of the American conception of internet freedom, came in Secretary of State Hillary Clinton’s much-lauded January 2010 speech on the topic.10 Invoking American traditions from the First Amendment to the Four Freedoms, Clinton pledged American support for liberty of speech, thought, and religion on the internet and for the right to privacy and connectivity to ensure these liberties for all. Clinton’s successor to GIFT, the State Department’s NetFreedom Task Force, oversaw “U.S. efforts in more than 40 countries to help individuals circumvent politically motivated censorship by developing new tools and providing the training needed to safely access the Internet.”  Other federally funded bodies served similar goals.
The Obama administration spent at least $105 million on these programs, which included investment in encryption and filter-circumvention products and support to fight network censorship abroad.
Across administrations, the U.S. internet freedom project has pursued numerous overlapping aims. It has sought to build a stable and robust global commercial internet. It has sought to enhance global wealth—especially the wealth of the U.S. firms that have dominated the computer and internet technology industries. It has sought to export to other countries U.S. notions of free expression and free trade. And it has sought to impact politics abroad by spreading democracy with the ambitious hope of ending authoritarianism. “The Internet,” Magaziner proclaimed, is “a force for the promotion of democracy, because dictatorship depends upon the control of the flow of information. The Internet makes this control much more difficult in the short run and impossible in the long run.” The Bush administration and especially the Obama administration engaged in high-profile and expensive diplomatic initiatives to use and shape the internet to spread democracy and human rights.
The U.S. internet freedom project deserves significant credit for the remarkable growth of the global internet, and especially global commerce, in the last two decades. But on every other dimension, the project is failing, and many of its elements lie in tatters. In response to perceived American provocations, other nations have rejected the attempted export of American values and are increasingly effective at imposing their own values on the internet. These nations have become adept at clamping down on unwelcome speech and at hindering the free flow of data across and within their borders. Authoritarian countries, in particular, are defeating unwanted internet activities within their borders and are using the internet to their advantage to deepen political control. The optimistic hope that the internet might spread democracy overseas has been further belied by the damage it has done to democracy at home. Digital technologies “are not an unmitigated blessing,” Secretary Clinton acknowledged in her 2010 speech. She understated the point. The relatively unregulated internet in the United States is being used for ill to a point that threatens basic American institutions.
Goldsmith's conclusion is that
The Trump administration has hollowed out the State Department and has deemphasized human rights and free trade. It is thus doubtful that it will give much support to the internet freedom agenda. But even a future administration more sympathetic to the agenda will need to address its failures to date by acknowledging some uncomfortable realities about the internet and by facing some large tradeoffs. Here are what I think are the three most important ones.
The first set of tradeoffs arises from how the United States promotes its anti-censorship principle abroad. That principle is premised on a commitment to spreading democracy and U.S. constitutional values that has been a lynchpin of American foreign policy since at least World War II, if not earlier. There are many ways to maintain this commitment while rethinking the tactic of meddling in foreign networks to undermine authoritarian governments. The American people are angry about and threatened by Russian cyber interference in the 2016 election. But the Russian government, as well as China’s and Iran’s governments and others, are angry about and threatened by U.S. intervention in their domestic networks with the ultimate aim of changing their forms of state and society. Network interventions to promote freedom and democracy are not on the same moral plane as network interventions to disrupt or undermine democracy. But regardless of the morality of the situation, it is fanciful to think that the digitally dependent United States can continue its aggressive cyber operations in other nations if it wants to limit its own exposure to the same. Unless the United States can raise its cyber defenses or improve its cyber deterrence—a dim prospect at the moment—it will need to consider the possibility of a cooperative arrangement in which it pledges to forgo threatening actions in foreign networks in exchange for relief from analogous adversary operations in its networks. The Russian government recently proposed a mutual ban on foreign political interference, including through cyber means. The significant hurdles to such an agreement include contestation over the terms of mutual restraint, a lack of trust, and verification difficulties.87 These high hurdles are not obviously higher than the hurdles to improving U.S. cyber defenses and cyber deterrence. And yet, no one in the U.S. government appears to be thinking about which sorts of operations the United States might be willing to temper in exchange for relief from the devastating cyber incursions of recent years.
The second set of tradeoffs concerns U.S. skepticism about more extensive government regulation of, and involvement in, domestic networks. The devastating cyber losses that the United States has been suffering result in large part from market failures that only government regulation can correct. The government will also need to consider doing more to police and defend telecommunications channels from cyberattack and cybertheft, just as it polices and defends against threats that come via air, space, sea, and land. This might involve coordination with firms to scan internet communications, to share threat information, and to frame a response. And it might require accommodations for encrypted communications. The hazards for privacy from these steps are so extreme as to make them seem impossible today. But there are also serious hazards for privacy from not providing adequate cybersecurity. If the threat to our valuable digital networks becomes severe enough, the American people will insist that the government take steps to protect them and the forms of social and economic life they enable. Our conception of the tradeoffs among different privacy commitments and between privacy and security will adjust.
Finally, U.S. regulators, courts, and tech firms may need to recalibrate domestic speech rules. Tim Wu has recently proposed some ways to rethink First Amendment law to deal with the pathologies of internet speech. For instance, First Amendment doctrine might be stretched to prevent government officials from inciting attack mobs to drown out disfavored speakers, as President Trump has sometimes appeared to do. Or the doctrine might be tempered, to allow the government to more aggressively criminalize or regulate cyberstalking and trolling, or even to require speech platforms to provide a healthy and fair speech environment. These are bold reforms, but they are also potentially very dangerous. The line between genuine political speech (including group speech) and propaganda and trolling will be elusive and controversial. The effort to ensure a healthy speech environment is even more fraught and will invariably ban or chill a good deal of speech that should be protected. These misgivings do not mean that such modifications are not worth exploring or that current understandings of the First Amendment are sacrosanct. They just mean that here, as with the other tradeoffs, the choices we face are painful.

13 June 2018

Marriage

'The Gay Rights Canon and the Right to Nonmarriage' by Courtney G. Joslin in (2017) 97 Boston University Law Review 425 comments
In the line of cases from Romer v. Evans to Obergefell v. Hodges, lesbian, gay, bisexual, and transgender (“LGBT”) people went from outlaws to citizens entitled to dignity and equality. These decisions represent incredible successes for the LGBT rights movement. Some who support LGBT equality, however, argue that these victories came at a great cost: the gay rights canon, it is said, entrenches the supremacy of marriage and the marital family. 
Marriage equality skeptics are right to be concerned about this possibility. Marriage is increasingly a marker of privilege. Individuals who marry and stay married are disproportionately likely to be white and more affluent. It is also important, however, not to overlook the more progressive potential of the gay rights canon. This Article reclaims this potential. 
This Article offers two novel and important contributions. First, it identifies and gives substance to the constitutional principles of the gay rights canon. Second, this Article uses the principles of the gay rights canon to offer a rereading of Obergefell. This progressive rereading supports, rather than forecloses, the extension of constitutional protection to those living outside marriage.

AI Crime and Ethics

'Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions' by Thomas King, Nikita Aggarwal, Mariarosaria Taddeo and Luciano Floridi comments
 Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, which we term AI-Crime (AIC). We already know that AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement and policy-makers with a synthesis of the current problems, and a possible solution space.
In the UK the Department for Digital, Culture, Media and Sport has released a short consultation paper regarding a new national Centre for Data Ethics and Innovation.

It states
● The use of data and artificial intelligence (AI) is set to enhance our lives in powerful and positive ways. We want the UK to be at the forefront of global efforts to harness data and artificial intelligence as a force for good. 
● For this, our businesses, citizens and public sector need clear rules and structures that enable safe and ethical innovation in data and AI. The UK already benefits from well established and robustly enforced personal data laws, as well as wider regulations that guide how data driven activities and sectors can operate. 
● However, advances in the ways we use data are giving rise to new and sometimes unfamiliar economic and ethical issues. We need to make sure we have the governance in place to address these rapidly evolving issues; otherwise we risk losing confidence amongst the public and holding businesses back from valuable innovation. 
● This is why we are establishing a new Centre for Data Ethics and Innovation. It will identify the measures needed to strengthen and improve the way data and AI are used and regulated. This will include articulating best practice and advising on how we address potential gaps in regulation. The Centre will not, itself, regulate the use of data and AI - its role will be to help ensure that those who govern and regulate the use of data across sectors do so effectively. 
● The Centre will operate by drawing on evidence and insights from across regulators, academia, the public and business. It will translate these into recommendations and actions that deliver direct, real world impact on the way that data and AI are used. The Centre will have a unique role in the landscape, acting as the authoritative source of advice to government on the governance of data and AI. 
● Across its work, the Centre will seek to deliver the best possible outcomes for society from the use of data and AI. This includes supporting innovative and ethical uses of data and AI. These objectives will be mutually reinforcing: by ensuring data and AI are used ethically, the Centre will promote trust in these technologies, which will in turn help to drive the growth of responsible innovation and strengthen the UK’s position as one of the most trusted places in the world for data-driven businesses to invest in. 
● We propose that the Centre acts through:
a. analysing and anticipating gaps in the governance landscape
b. agreeing and articulating best practice to guide ethical and innovative uses of data
c. advising Government on the need for specific policy or regulatory action 
● Understanding the public’s views, and acting on them, will be at the heart of the Centre’s work, as well as responding to and seeking to shape the international debate. 
● We recognise that the issues in relation to data use and AI are complex, fast moving and far reaching, and the Centre itself - as well as the advice it delivers - will need to be highly dynamic and responsive to shifting developments and associated governance implications. 
● To enshrine and strengthen the independent advisory status of the Centre, we will seek to place it on a statutory footing as soon as possible. This will be critical in building the Centre’s long term capacity, independence and authority. 
● This consultation seeks views on the way the Centre will operate and its priority areas of work. We want to ensure the Centre adds real value and builds confidence and clarity for businesses and citizens. We will therefore engage extensively with all those who have an interest and stake in the way data use and AI are governed and regulated. This includes citizens, businesses, regulators, local and devolved authorities, academia and civil society.
'The Other Side of Autonomous Weapons: Using Artificial Intelligence to Enhance Precautions in Attack' by Peter Margulies in Eric Talbot Jensen (ed.) The Impact of Emerging Technologies on the Law of Armed Conflict (Oxford University Press, 2018) comments
The role of autonomy and artificial intelligence (AI) in armed conflict has sparked heated debate. The resulting controversy has obscured the benefits of autonomy and AI for compliance with international humanitarian law (IHL). Compliance with IHL often hinges on situational awareness: information about a possible target’s behavior, nearby protected persons and objects, and conditions that might compromise the planner’s own perception or judgment. This paper argues that AI can assist in developing situational awareness technology (SAT) that will make target selection and collateral damage estimation more accurate, thereby reducing harm to civilians. 
SAT complements familiar precautionary measures such as taking additional time and consulting with more senior officers. These familiar precautions are subject to three limiting factors: contingency, imperfect information, and confirmation bias. Contingency entails an unpredictable turn of events, such as the last-minute entrance of civilians into a targeting frame. Imperfect information involves relevant data that is inaccessible to the planner of an attack. For example, an attack in an urban area may damage civilian objects that are necessary for health and safety, such as sewer lines. Finally, confirmation bias entails the hardening of preconceived theories and narratives. 
SAT’s ability to rapidly assess shifting variables and discern patterns in complex data can address perennial problems with targeting such as the contingent appearance of civilians at a target site or the risk of undue damage to civilian infrastructure. Moreover, SAT can help diagnose flaws in human targeting processes caused by confirmation bias. This Article breaks down SAT into three roles. Gatekeeper SAT ensures that operators have the information they need. Cancellation SAT can respond to contingent events, such as the unexpected presence of civilians. The most advanced system, behavioral SAT, can identify flaws in the targeting process and remedy confirmation bias. In each of these contexts, SAT can help fulfill IHL’s mandate of “constant care” in the avoidance of harm to civilian persons and objects.

Homoglobalism

'Homoglobalism: The Emergence of Global Gay Governance' by Aeyal Gross in Dianne Otto (ed) Queering International Law (Routledge, 2017) comments
In September 2016, the UN Human Rights Council appointed an Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (SOGI). Less than a month later, the World Bank President announced the appointment of an advisor on SOGI, a newly created senior position tasked with promoting lesbian, gay, bisexual, transgender and intersex (LGBTI) inclusion throughout the work of the World Bank. Both developments are part of a wider trend of global institutions beginning to engage with LGBT/LGBTI issues. 
In this article, I identify and analyse these developments, arguing that we are witnessing an emerging phenomenon I call 'global gay governance' (GGG). By 'gay governance', following the work of scholars on governance feminism, I mean the forms in which LGBT advocacy and ideas get incorporated into state, state-like and state-affiliated power. In previous work, I showed that gay governance occurs at the municipal, national and global levels. This article extends this work by focusing on GGG at the level of global institutions.
Gross' 'Gay Governance: A Queer Critique' in Governance Feminism: An Introduction (Minnesota University Press, Forthcoming) comments
Referencing Foucault’s idea of governmentality, Janet Halley describes governance feminism as “every form in which feminists and feminist ideas get incorporated into state, state-like, and state-affiliated power.” In her words, when feminists and feminist ideas achieve sufficient legitimacy and influence “the conduct of men [,] women and children and even of inanimate beings like discourses, literary genres, and moral panics, we deem them to govern, and call them GF [Governance Feminism-A.G.].” Following this definition, this chapter examines “gay governance,” meaning the forms whereby LGBT people and ideas get incorporated into state, state-like, and state-affiliated power. 
Gay governance is apparent in various practices — government funding for LGBT organizations and involvement in LGBT events such as LGBT pride parades; LGBT advocacy of policies and laws that may lead to increased carcerality by the state in the name of protection for LGBT people, and global export of LGBT rights, including a threat of financial sanctions to states failing to comply, known as “gay conditionality.” These three examples of gay governance, in the occurrences discussed in this chapter, correlate with governance at the municipal, national, and global levels respectively.

12 June 2018

US Prison Economy

The fascinating 'Economic Consequences of the U.S. Convict Labor System' by Michael Poyker considers
the economic externalities of U.S. convict labor on local labor markets. Using newly collected panel data on U.S. prisons and convict-labor camps from 1886 to 1940, I show that competition from cheap prison-made goods led to higher unemployment, lower labor-force participation, and reduced wages (particularly for women) in counties that housed competing manufacturing industries. At the same time, affected industries had higher patenting rates. I find that the introduction of convict labor accounts for 16% slower growth in U.S. manufacturing wages. The introduction of convict labor also induced technical changes and innovations that account for 6% of growth in U.S. patenting in affected industries. 
Poyker notes
Convict labor is still widespread, not only in developing countries but also among the world’s most developed countries.  In 2005 the U.S. convict-labor system employed nearly 1.4 million prisoners, of whom 0.6 million worked in manufacturing (constituting 4.2% of total U.S. manufacturing employment). Prisoners work for such companies as Wal-Mart, AT&T, Victoria’s Secret, and Whole Foods, and their wages are substantially below the minimum wage, ranging from $0 to $4.90 per hour in state prisons. 
Convict labor may impose externalities on local labor markets and firms. Prison-made goods are relatively cheap. Companies that hire free labor find it harder to compete with prisons, especially in industries that rely on low-skilled labor. They face lower demand for their products, pushing down their labor demand. Excess labor moves to industries not competing with prisons, and overall wages decrease. Convict labor affects firms, too. Many (predominantly labor-intensive firms) go out of business, unable to compete with prison-made goods, even when they lower wages. Finally, those affected firms that do not close have to innovate and adopt new technology, either to decrease their production costs or to produce higher-grade goods that do not compete with prison-made goods. In this paper, I use a historical setting to evaluate the effect of competition with prison-made goods on firms and free workers. It is challenging to identify causal effects of convict labor in the contemporary setting, since the data on prison output are not available, and due to the embedded endogeneity problem. First, U.S. prisons are built in economically depressed counties under the assumption that they will provide jobs (e.g., guards) in the local labor market (Mattera and Khan (2001)). Second, contemporary convict-labor legislation is endogenous. For these reasons I rely on the historical setting to identify the effects of convict labor. I digitize a dataset on U.S. convict-labor camps and prisons. Starting in the 1870s, states enacted laws that allowed convict labor, but the timing varied from state to state. Its introduction was unanticipated, both by firms and by prison wardens, who were suddenly in charge of employing prisoners within their institutions.
Moreover, as all convict-labor decisions were determined at the prison level, subsequent changes in convict-labor legislation were exogenous to the choices of individual prison wardens. In addition, I use the fact that pre-convict-labor-era prisons were built without any anticipation that they would be used to employ prisoners. In comparison with contemporary prisons, old prisons were built in populated areas with higher wages and employment, which hinders my ability to find negative effects on local labor markets. Finally, the historical setting allows me to document long-run effects of convict labor in a developed country.
To elicit the effect of prison-labor competition on the local labor market, I construct a county-decade panel data set spanning 1850 to 1950. I measure the exposure of each county to convict labor as the industry-specific value of convict-made goods in all U.S. prisons, weighted by the county’s industry labor share and by the distance from those prisons to the county centroid. This imposes two central assumptions: low labor mobility across counties, and iceberg costs of trade.
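Read literally, that description implies an exposure measure of roughly the following form (my own stylised notation, offered as an illustrative sketch rather than Poyker's exact formula):

\[ \text{Exposure}_{c,t} \;=\; \sum_{i} \frac{L_{c,i}}{L_{c}} \sum_{p} \frac{V_{p,i,t}}{d(c,p)} \]

where \(L_{c,i}/L_{c}\) is county \(c\)'s employment share in industry \(i\), \(V_{p,i,t}\) is the value of convict-made goods in industry \(i\) produced at prison \(p\) in decade \(t\), and \(d(c,p)\) is the distance from prison \(p\) to the centroid of county \(c\), standing in for iceberg trade costs.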
I estimate the effect of exposure to convict labor on manufacturing wages, employment outcomes, and patenting rates using an ordinary-least-squares specification with fixed effects. While the panel dataset allows me to account for time- and county-invariant unobserved heterogeneity and state-specific time trends, three endogeneity concerns remain. First, there is an omitted-variable bias due to the endogenous choice of industry and the amount of goods produced by prisons. Second, prisons could be strategically located to earn higher profits for their states. Third, convict labor was used in industries where local labor unions were stronger and the wage growth rate was higher (Hiller (1915)). To address these concerns, I employ an instrumental-variable estimation. I use state-level variation in the timing of passage of convict-labor laws, interacted with the capacity of prisons that existed before convict-labor laws were enacted, to construct an instrument for the prevalence of convict labor. Prison production was determined by a prison’s warden, and the state-level legislature can be considered exogenous. Old prisons were built without any anticipation that they would be used for production of goods; their locations were determined primarily by population size and urban share of population. Thus, conditional on factors important to the location of the old prisons, the interaction of convict-labor legislation and capacities of old prisons is likely uncorrelated with wardens’ activity and possible strategic location of prisons constructed after convict-labor systems were enacted. I find that the introduction of convict labor in 1870-1886 accounts for 16% slower growth in manufacturing wages in 1880-1900, 20% smaller labor-force participation, and 17% smaller manufacturing employment share.
Comparing two counties, one at the 25th percentile and the other at the 75th percentile of exposure to convict labor, the more exposed county would on average experience a 2-percentage-point larger decline in mean log annual wages in manufacturing, a 0.9-percentage-point larger fall in manufacturing employment share, and a 0.6-percentage-point larger decline in labor-force participation.
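In the same stylised notation, the fixed-effects specification described above would look something like this (again a sketch, not the paper's own equation):

\[ y_{c,t} \;=\; \beta \, \text{Exposure}_{c,t} + \gamma_{c} + \delta_{t} + \theta_{s(c)}\, t + \varepsilon_{c,t} \]

where \(y_{c,t}\) is an outcome such as the log manufacturing wage, \(\gamma_{c}\) and \(\delta_{t}\) are county and decade fixed effects, and \(\theta_{s(c)}\, t\) is a state-specific time trend. In the instrumental-variable version, \(\text{Exposure}_{c,t}\) is instrumented with the interaction of the state's convict-labor law timing and the capacity of prisons built before those laws were passed.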
While prison labor was used in quite a few industries, most prisons were producing clothes and shoes. The apparel and shoemaking industries employed mostly women, who were the most affected by the coerced labor. Female wages decreased 3.8 times more than those of men.
I also show that convict-labor shocks affected technology adoption. Comparing two counties, one at the 25th percentile and the other at the 75th percentile of exposure to convict labor, the more exposed county would be expected to experience a 0.6-standard-deviation larger number of registered patents in industries where prisoners were employed. I calculate that the introduction of convict labor accounts for 6% of growth in U.S. patenting in affected industries. Because forms of convict labor differed in the North and South, I analyze subsamples. I show that results are mainly driven by the Northeastern and Midwestern states. For the Southern states, all coefficients remain significant, while the magnitudes of all effects are smaller.
I show that the results are robust to various model specifications and ways I construct the explanatory variable. Results hold if I use exposure to convict labor, weighted only by distance to prison (i.e. disregarding industry shares). I also demonstrate that results are not entirely driven by differences between counties with and without prisons: I find that results hold within the sample of counties with prisons. Then, comparing counties with prisons to counties adjacent to counties with prisons, and to second-order adjacent counties, I find that effects of convict labor decay with distance. Also, I find no effect on manufacturing outcomes when using as a placebo convict-labor output in farming. Further, I find no significant effect of convict labor on the number of patents in industries where prisoners were not employed. Finally, I employ firm-level repeated cross-section data for 1850-1880 from Atack and Bateman (1999) to show that firms in affected industries experienced larger decreases in wages. The firm-level data also suggest a decrease in the number of firms in affected labor-intensive industries.
My results relate to three broad economic literatures. I find that the problem of convict labor is similar to the discussion of low-skilled labor competition related to trade shocks (Autor, Dorn and Hanson (2013, 2016), and Holmes and Stevens (2014)). I find that local labor-market shocks come not only from foreign competition or technological progress but can arise from internal sources. Besides, my findings relate penitentiary policies to patterns of directed technological progress (Acemoğlu (2002, 2007), and Autor et al. (2016)). I provide evidence in support of findings in Bloom, Draca and Van Reenen (2016) that firms increase patenting as a way to survive competition. Moreover, in contrast to these recent shocks, I estimate the long-run effects of competition coming from the convict labor system. While sociologists and criminologists thoroughly studied convict labor in the 20th century, only a few qualitative papers raised the topics of competition between prison-made goods and products created by free laborers (Roback (1984), McKelvey (1934), and Wilson (1933)).
The rest of the paper is organized as follows. Section 2 reviews the existing literature and relates this paper’s contributions to it. Section 3 introduces the history of U.S. convict labor and the records of its competition with free labor. Section 4 describes the data. Section 5 presents my identification strategy and estimation results. Section 8 assesses the possible impact of the contemporary U.S. convict-labor system and concludes.

10 June 2018

Blockchain and the Art Market

Yet another 'blockchain is the answer' report, this time addressing problems in the high-end art market.

The Art Market 2.0: Blockchain and Financialisation in Visual Arts report by Duncan MacDonald-Korth, Vili Lehdonvirta and Eric T Meyer (University of Oxford and The Alan Turing Institute) comments
This report examines the potential impacts of blockchain technologies on the art market. Using a primarily interview-based approach with sector experts, the report analyses how and in what specific areas blockchain technologies could be used to change the composition of the art market, including the method of sale, record of provenance, and transparency of ownership. It also considers how blockchain technologies may change the balance of economic power in the art market and integrate art into the financial sector, and whether the art industry is likely to grow more or less consolidated as blockchain and/or other digital technologies are introduced. Finally, the report proposes the creation of a new fair trading standard for the art market, and argues that London will need to fight to maintain its dominant position in the art market.
The authors state that
• “Blockchain” is more than a technology: it is a discourse that unites and divides, and holds great meaning for all those involved. 
• Blockchain is not as far along in its development as many expect, with one leading technologist comparing it to the internet in 1993. 
• Blockchain is a concept that is pushing organisations and individuals to compete and collaborate to hash out a new digital future. 
• The economic stakes involved in the introduction of digital ledger technologies into the art market are very high. 
• Digital ledgers could help with not only the trading of art, but also provenance tracking and tax collection related to art transactions. 
• The conflicts of interest which plague the art market will not be solved by technology, but technology can offer an infrastructure to ease them. 
• Art market liquidity and value are likely to soar if digital ledger technologies are successfully introduced, creating new side industries, such as a boom in art-based lending, and making art an integral part of the financial industry. 
• Such financialisation of the art market holds significant promise for artists if correctly governed, but also comes with risks. 
• A single large company seems likely to dominate the art market as technologies are introduced. 
• The UK is likely to lose out on tax and royalties if it does not work hard to adopt digital art technologies. 
• The art market and the UK can set a standard for the adoption of digital technologies across the economy.
The key findings are
 “As important as the internet itself” is how one of our most esteemed technological interviewees described blockchain. The comment captures well the uproar surrounding the poorly understood yet sensationally hyped technology. The technology, which fits into a group broadly referred to as “digital ledger technologies”, is as hard to define as it is easy to proselytise. In its most simple form, blockchain refers to a shared digital ledger, but such a summary hardly does justice to the range of uses, or better, the range of promised uses, for what at present appears among the most celebrated emerging technologies.
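To ground the phrase "shared digital ledger", here is a minimal illustrative sketch (ours, not the report's) of a hash-chained ledger recording art provenance events; each entry commits to the hash of the previous entry, so past records cannot be altered without breaking the chain.

    # Toy hash-chained ledger for provenance records. This sketch is our
    # illustration of the general idea, not anything from the report.
    import hashlib, json

    def add_entry(ledger, record):
        # Each entry stores the hash of the previous entry, chaining them.
        prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
        body = {"record": record, "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps({"record": record, "prev_hash": prev_hash},
                       sort_keys=True).encode()).hexdigest()
        ledger.append(body)

    ledger = []
    add_entry(ledger, {"work": "Untitled", "event": "sale", "price_usd": 150_000})
    add_entry(ledger, {"work": "Untitled", "event": "resale", "price_usd": 210_000})
    # Altering ledger[0]["record"] after the fact would invalidate every
    # later entry's prev_hash, which is what makes provenance tamper-evident.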
The sheer volume of media coverage and industry reports is a testament both to the technology's promise and to its power to manifest hope and greed in industry and society. One of the most interesting aspects of blockchain is how it is imagined and presented by such diverse groups with varying goals and beliefs. Blockchain is the technology of the future both for the staunchest capitalists and for those hoping for a utopian future of information sharing and an end to big business dominating the use of personal data. How could a single technology fulfil the hopes of such seemingly irreconcilable visions? One set of possible scenarios would see distributed ledger technologies develop into a generative platform comparable to the internet, which supports both the flow and the control of information, although the balance between these is the source of ongoing tension among stakeholders.
Because of the hype surrounding blockchain, it has been covered extensively in the media and by industry experts. What more is there to add? The answer is quite a bit, especially in specific areas which will have substantial impacts for stakeholders. This report focuses on the implications of blockchain for the art market. This is one of the least-discussed applications of blockchain, yet one where the technology may hit hardest. Our research shows that although art is frequently seen as a niche, standalone sector, the battle over blockchain and the way in which it is implemented here may have extensive implications for its adoption across the rest of the economy.
Looking at the art market, it is hard to miss blockchain's potential. Art is currently plagued by fraud, illicit business, and tax evasion, all products of a fragmented physical market that is hard to follow. Enter blockchain, which on the surface appears to be a silver bullet. In one shot, blockchain could ensure the veracity of an art piece, make the price and parties to a sale transparent, and allow oversight bodies to monitor the flow of art assets in and out of different tax jurisdictions. But surely it won't be this easy, especially given how high the stakes are. The total volume of annual art transactions is over $70bn a year and growing, and that is just what is visible.
The level of transparency provided by blockchain is what artists and regulators want, but will buyers, sellers, and the agents who represent them block such a development? Our research shows that all sides may be able to achieve their goals, and in doing so, set a model for how blockchain and the digital economy may evolve.
'Price Fixing the Priceless? Discouraging Collusion in the Secondary Art Market' by Nicole Dornbusch Horowitz in (2014) 66 Hastings Law Journal 331 comments 
 
In the 1920s and 1930s, major oil companies took advantage of market conditions to raise gasoline prices. They sold a limited amount of gasoline on smaller submarkets and the remainder of their gasoline by other methods. Despite the fact that the submarkets only represented a small portion of the overall gasoline market, pricing in the greater market was based on them. Thus, through collusive agreements, major oil companies were able to raise prices in the overall market by inflating prices in the smaller markets. In United States v. Socony-Vacuum Oil Co., the U.S. Supreme Court held that these agreements constituted price-fixing and violated the Sherman Act. 
 
Today, conditions in the art market create opportunities and incentives for coordinated price manipulation similar to those present in Socony-Vacuum. Art sold at auction represents a small portion of the art market, but prices paid for art at auction are used to determine prices in the larger market. Further, the art market’s opacity and the fact that small, tight-knit groups buy and sell high-end artworks provide even greater opportunities for collusion than those present in Socony-Vacuum. This Note examines these comparable opportunities and incentives through a study of activity in the market for artworks by Andy Warhol. 
 
In United States v. Socony-Vacuum Oil Co., the U.S. Supreme Court held that major oil companies engaged in per se illegal price fixing in violation of the Sherman Act by agreeing to purchase and store gasoline sold on “spot markets.” The oil companies made those agreements to restrict excess spot market supply and, as a result, increase gasoline prices in the market generally. Although spot markets occupied a small percentage of sales in the gasoline market, spot market supply and pricing were used to determine overall contract pricing. By restricting the spot market supply, the oil companies were able to create the appearance of greater demand or decreased supply, which increased prices for their own gasoline contracts. 
 
Today, pricing for art functions in a similar manner to the pricing for oil that occurred in Socony-Vacuum. Art sold at auction represents a small portion of the art market, but prices paid for art at auction are used to determine prices in the larger market. Moreover, data shows that, like the oil companies in Socony-Vacuum, art dealers and invested collectors of certain artists’ work frequently appear to be involved in the purchase and sale, and hence the price determination, of those artists’ work at auction. For example, with regard to the market for Andy Warhol’s artworks, an analysis of publicly available data shows that, since 2005, fewer than twenty parties have dominated bidding on Warhol works at auction. Many of these bidders are either secondary art market dealers or collectors with large Warhol holdings who have an interest in ensuring that Warhol works retain their high value. Currently, these dealers and collectors have similar incentives and opportunities to collude and fix Warhol prices through these auction “spot markets,” akin to the activity that occurred in Socony-Vacuum.
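As a stylized numerical illustration of the mechanism (our toy model, with made-up figures, not the Note's data): if off-market valuations are benchmarked to average auction prices, a small group that supports weak auction lots can revalue its much larger holdings at modest cost.

    # Toy model: private valuations track the average auction hammer price.
    auction_prices = [1.0, 1.1, 0.6, 0.7, 1.2]             # hammer prices, $m
    benchmark = sum(auction_prices) / len(auction_prices)  # 0.92

    # Colluding holders bid the two weak lots up to 1.0, "supporting" prices.
    supported = [max(p, 1.0) for p in auction_prices]
    new_benchmark = sum(supported) / len(supported)        # 1.06

    holdings = 100  # works held off-market, marked to the benchmark
    gain = (new_benchmark - benchmark) * holdings
    cost = sum(max(1.0 - p, 0.0) for p in auction_prices)  # premium paid on weak lots
    print(f"revaluation gain: ${gain:.1f}m vs support cost: ${cost:.1f}m")
    # -> revaluation gain: $14.0m vs support cost: $0.7m

The point of the toy numbers: because the auction "spot market" is thin relative to total holdings, the cost of propping up a few lots is small next to the mark-to-benchmark gain on everything held off-market.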
 
Part I of this Note discusses market factors that make pricing in the art market, and particularly in the secondary art market, subjective, opaque, and easy to manipulate. Part II compares the conditions in the art market to the incentives and opportunities for collusion that led to Socony-Vacuum. It also explains the characteristics that make the Warhol market a good example of the broader secondary art market and assesses auction records to determine bidding patterns in the Warhol market. Part III proposes a solution that would discourage collusion and offer greater market transparency, while preserving buyers’ and sellers’ much-desired privacy at auction.

Consistency and Precedent

'On Treating Unlike Cases Alike' by Frederick Schauer in (2018) 34 Constitutional Commentary comments 
From Aristotle’s time to the present, the idea of “treating like cases alike” has informed popular and academic thinking about equality, principally, and also about the idea of decision according to precedent. And although a host of thinkers has exposed the fundamental emptiness of the idea unless supplemented by some criterion of which likenesses are relevant and which not, an important implication of this line of thought is that a regime of precedent generally or of stare decisis particularly is one in which acts, events, and decisions that are in fact unalike are deemed to be alike in the service of various normative goals. This paper, written as part of a symposium on Randy Kozel’s Settled versus Right: A Theory of Precedent (Cambridge 2017), focuses on the way in which a norm of precedent requires treating unalike cases alike, and on the way in which such a norm has potential community-creating virtues.