05 June 2014

Technopanic

'Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle' by Adam D. Thierer in (2013) 14(1) Minnesota Journal of Law, Science & Technology comments that
 Fear is an extremely powerful motivating force, especially in public policy debates where it is used in an attempt to sway opinion or bolster the case for action. Often, this action involves preemptive regulation based on false assumptions and evidence. Such fears are frequently on display in the Internet policy arena and take the form of full-blown 'technopanic,' or real-world manifestations of this illogical fear. While it’s true that cyberspace has its fair share of troublemakers, there is no evidence that the Internet is leading to greater problems for society. 
This paper considers the structure of fear appeal arguments in technology policy debates and then outlines how those arguments can be deconstructed and refuted in both cultural and economic contexts. Several examples of fear appeal arguments are offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to 'fear cycles' in these policy areas are documented. 
To the extent that these concerns are valid, they are best addressed by ongoing societal learning, experimentation, resiliency, and coping strategies rather than by regulation. If steps must be taken to address these concerns, education and empowerment-based solutions represent superior approaches to dealing with them compared to a precautionary principle approach, which would limit beneficial learning opportunities and retard technological progress. 
Thierer offers a four-part framework for analysing risks associated with new technological developments:
A. Defining the Problem
The first step involves defining the problem to be addressed and determining whether harm or market failure exists. These are two separate inquiries. Defining the problem is sometimes easier said than done. What is it that we are trying to accomplish?
It is vital that “harm” or “market failure” not be too casually defined. Harm is a particularly nebulous concept as it pertains to online safety and digital privacy debates, where conjectural theories abound. Some cultural critics insist that provocative media content “harms” us or our kids. Many moral panics have come and gone through the years as critics looked to restrict speech or expression they found objectionable. In cases such as these, “harm” is very much an eye-of-the-beholder issue. It is important to keep in mind that no matter how objectionable some media content or online speech may be, none of it poses a direct threat to adults or children.
Likewise, some privacy advocates claim that advertising is inherently “manipulative” or that more targeted forms of marketing and advertising are “creepy” and should be prohibited. “But creating new privacy rights cannot be justified simply because people feel vague unease,” notes Solveig Singleton, formerly of the Cato Institute. If harm in this context is reduced to “creepiness” or even “annoyance” and “unwanted solicitations” as some advocate, it raises the question whether the commercial Internet as we know it can continue to exist. Such an amorphous standard leaves much to the imagination and opens the door to creative theories of harm, which are sure to be exploited. In such a regime, harm becomes highly conjectural instead of concrete. This makes credible cost-benefit analysis virtually impossible since the debate becomes purely about emotion instead of anything empirical.
Turning to economic considerations, accusations of consumer “harm” are often breezily tossed about by many policymakers and regulatory advocates without any reference to actual evidence proving that consumer welfare has been negatively impacted. “Market failure” claims are also rampant even though many critics are sometimes guilty of adopting a simplistic “big is bad” mentality. Regardless, a high bar must be established before steps are taken to regulate information and digital technologies based upon market failure allegations. 
B. Consider Legal and Economic Constraints 
The second step is to identify constitutional constraints and conduct cost-benefit analysis of government regulation.
If harm or market failure can be demonstrated, the costs associated with government action must be considered. Even where there is harm and a market failure, it does not necessarily follow that government can effectively address the problem. Proposed rules should always be subjected to rigorous cost-benefit analysis. Regulation is not a costless exercise. All government action entails tradeoffs, both economic and social. Of course, not all legal solutions entail the same degree of cost or complexity as direct regulatory approaches. Can the problem be dealt with through traditional common law methods? Can contracts, property rights, antifraud statutes, or anti-harassment standards help?

Again, consider privacy harms. Instead of trying to implement cumbersome, top-down privacy directives based upon amorphous assertions of privacy “rights,” the Federal Trade Commission (FTC) should hold companies to the promises or claims they make when it comes to the personal information they collect and what they do with it. The agency has already brought and settled many privacy and data security cases involving its authority under Section 5 of the Federal Trade Commission Act to police “unfair and deceptive practices.” Recently the FTC has brought enforcement actions against Google and Facebook. Both companies agreed through a consent decree to numerous privacy policy changes, and they must also undergo privacy audits for the next 20 years. Again, no new law was needed to accomplish this. The FTC’s plenary authority was more than sufficient.
Of course, information technology is, by definition, tied up with the production and dissemination of speech. Consequently, First Amendment values may be implicated and limit government action in many cases. 
C. Consider Alternative, Less Restrictive Approaches 
The third step involves an assessment of the effectiveness of alternative approaches to addressing the perceived problem.
Because preemptive, prophylactic regulation of information technology can be costly, complicated, and overly constraining, it is often wise to consider alternative, less restrictive approaches. Education and awareness-building strategies can be particularly effective, as well as being entirely constitutional. Empowerment-based strategies are also useful. As noted previously, these strategies can help build resiliency and ensure proper assimilation of new technologies into society. If regulation is still deemed necessary, transparency and disclosure policies should generally trump restrictive rules. For example, after concerns were raised about wireless “bill shock”—abnormally high phone bills resulting from excessive texting or data usage—FCC regulators hinted that regulation may be needed to protect consumers. Eventually, the wireless industry devised a plan to offer their customers real-time alerts before they go over monthly text or data allotments. Although these concessions weren’t entirely voluntary, this transparency-focused result is nonetheless superior to cumbersome rate regulation or billing micromanagement by regulatory officials. Many wireless operators already offered text alerts to their customers before the new notification guidelines were adopted, but the additional transparency more fully empowers consumers.

Transparency and disclosure are also the superior options for most online safety and privacy concerns. Voluntary media content ratings and labels for movies, music, video games, and smartphone apps have given parents and others more information to make determinations about the appropriateness of content they may want to consume. Regarding privacy, consumers are better served when they are informed about online privacy and data collection policies of the sites they visit and the devices they utilize.
D. Evaluate Actual Outcomes 
Finally, if and when regulatory solutions are pursued, it is vital that actual outcomes be regularly evaluated and, to the extent feasible, results be measured. To the extent regulatory policies are deemed necessary, they should sunset on a regular basis unless policymakers can justify their continued existence. Moreover, even if regulation is necessary in the short term, resiliency and adaptation strategies may emerge or become more evident over time.