10 May 2014

Identity Offences

In looking at yesterday's arrests over fake identities and allegations of $7m 'insider trading' (an employee of the Australian Bureau of Statistics allegedly giving an associate early access to business data), I'm reminded of recent prosecutions of people using what the mass media dub 'fake names' for domestic air travel.

In February last year a 29-year-old man "allegedly travelling under a false name" was arrested at Brisbane Airport by Australian Federal Police, having flown from Perth with an intended destination of Rockhampton.

The AFP media release at that time stated
As part of routine airport patrol operations, the man’s identification was checked by AFP officers and was found to be different to the name listed on his travel documentation. 
Recent changes to legislation [noted here] make it a criminal offence to travel under a false name and this arrest is the first arrest relating to these offences. 
The legislation came into force on 28 November 2012 following recommendations made by the 2011 Parliamentary Joint Committee on Law Enforcement’s Inquiry into the Adequacy of Aviation and Maritime Security Measures  [noted here].
The man was charged with the following offences:
Taking a flight using an air passenger ticket in a false name – an offence under Section 376.4(2) Criminal Code Act 1995.
Using false information to obtain an air passenger ticket – an offence under Section 376.3(1) Criminal Code Act 1995.
AFP National Manager Aviation Shane Connelly said that passengers should be aware that officers randomly check identification at airports. 
“Our airport officers will uphold Australian law in respect to persons travelling or attempting to travel with false documentation at all of Australia’s major domestic airports,” Assistant Commissioner Connelly said. “The new legislation makes it a criminal offence to both buy a ticket under a false name and to use a ticket under a false name.” 
It isn't, of course, an offence to travel by train, bus or ferry using another name (subject to the ticket being legitimately obtained rather than, for example, through unauthorised use of someone's credit card).

In March this year a 23-year-old man was charged with travelling under false identification documents.
The man was arrested today while AFP officers were conducting routine patrols around the departure gates at Cairns Airport domestic terminal. 
The man had attempted to board a flight to Brisbane when airline staff identified that he was travelling under false documentation. Airline staff refused to accept the passenger. 
The matter was referred to the AFP and police enquiries commenced. The man was arrested and charged with the following offences:
False identification information – air passenger tickets obtained using a carriage service, contrary to section 376.3 (1) of the Criminal Code Act 1995. 
Carriage Service Offence – taking a flight using an air passenger ticket, contrary to section 376.3 (2) of the Criminal Code Act 1995.
Airport Police Commander Cairns Airport Glen Fisher said that passengers should be aware of random identification checks. “Passengers are reminded that it is an offence to both buy a ticket under a false name and to use a ticket under a false name,” Superintendent Fisher said. The maximum penalty for these offences is 12 months imprisonment.
In the ABS case the alleged trader in Melbourne (an employee of the NAB) was charged with -
One count of conspiracy to engage in insider trading, pursuant to section 11.5 of the Criminal Code Act 1995 (Cth), which is an offence by virtue of sub-section 1311(1) of the Corporations Act 2001 (Cth) in that they contravened paragraph 1043A(1)(c) of the Corporations Act 2001 (Cth). 
One count of giving a bribe to a Commonwealth public official, with the intention to influence the official in the exercise of his duties as a Commonwealth public official, contrary to section 141.1(1)(a)(iii) of the Criminal Code Act 1995 (Cth); 
Three counts of insider trading, by trading in foreign exchange derivatives whilst in possession of inside information not generally available to the public, contrary to sections 1043A(1)(c) and 1311(1) of the Corporations Act 2001 (Cth); 
One count of dealing in identification information using a carriage service and dealing with that identification information, with the intention to use the identification information to pretend to be or to pass themselves off as another person for the purpose of facilitating the commission of an offence, contrary to section 372.1A(1) of the Criminal Code Act 1995 (Cth). 
One count of dealing in proceeds of crime, money or property worth AU$1,000,000 or more, contrary to section 400.3(1) of the Criminal Code Act 1995 (Cth).
The ABS employee was charged with -
One count of conspiracy to engage in insider trading, pursuant to section 11.5 of the Criminal Code Act 1995 (Cth), which is an offence by virtue of sub-section 1311(1) of the Corporations Act 2001 (Cth) in that they contravened paragraph 1043A(1)(c) of the Corporations Act 2001 (Cth). 
One count of receiving a bribe as a Commonwealth public official, contrary to section 141.1(3)(a)(iii) of the Criminal Code Act 1995 (Cth); 
One count of abuse of public office to dishonestly obtain a benefit, contrary to section 142.2(1)(b)(i) of the Criminal Code Act 1995 (Cth); 
One count of dealing in proceeds of crime, money or property worth $10,000 or more, contrary to section 400.6(1) of the Criminal Code Act 1995 (Cth).

Chaos

'Law's System: The Necessity of System in Common Law' by Gerald J. Postema in (2014) New Zealand Law Review comments that
T.E. Holland infamously described the common law as “chaos with a full index.” One might detect a note of perverse pride in that quip, something entirely missing from Bentham’s summary dismissal of the common law of his day as harmful fiction. More recently, Peter Birks criticized contemporary common law for its manifest “absence of system,” its disorderly collection of legal categories and miscellany of odd rules and doctrines making it difficult to determine its prescriptions in particular cases with any clarity or conviction. Yet Birks was inclined to think that this grave deficiency was due not to ineradicable deficiencies in common law itself, but rather to inattention and lack of intellectual discipline. He insisted that, with hard work and intellectually responsible, systematic thinking, this garden gone to seed could be tamed, restored to use as a model instrument of the rule of law. For this, a comprehensive, logically coherent, rationally cogent structure of concepts and doctrine — a systematic theory, or rather, map of the common law — was essential, he argued. Such a map promised to provide the discipline manifestly lacking in common-law thinking. Birks proposed a cartographic solution to the historic chaos of the common law. 
Critics of this cartographic proposal argue that it does not merely regiment common-law thinking, but it replaces it with something altogether different. They argue that common law cannot be disciplined by theory or system, because it has emerged over a long period of time through a haphazard historical process, and even more because common law is “a splendidly anti-theoretical contrivance. . . . The distinguishing marks of the common law as an intellectual tradition,” it is argued, “are its resistance to systematization, its refusal to consider more than the case at hand, and the extraordinary weight of inertia with which it resisted attempts at ‘academic’ or comprehensively analytical statements of substantive rules and their presuppositions.” Unlike Bentham, these critics treat this bred-in-the-bone resistance to theory and system not as sufficient cause to raze the obsolete structure and replace it with a fully modern, rational, built-from-scratch code, but rather, like Blackstone, to festoon its ancient, ramshackle ramparts with celebratory banners. 
This debate (or more accurately, this exchange of hyperbolic epithets) obscures the depth and breadth of the common law’s commitment to system and to reflective, theory-inclined practical reasoning in its ordinary practice. Indeed, law of any jurisdiction, because of features essential to its distinctive mode of operating and because of the need to maintain its integrity, cannot ignore the demands of system. Moreover, distinctive modes of common-law reasoning presuppose and respond to the demands of system. However, to acknowledge this is not to deny the radical nature of the reform of common-law thinking and practice proposed by cartography partisans, but rather to highlight it and to put into perspective the kind of regimentation they call for. Once we appreciate the nature and scope of any law’s (and hence the common law’s) commitment to genuinely reflective and systematic thinking in its ordinary practice, we will be in a position to assess the merits of the cartographic proposal. I will argue that, while Birks’s mapping of the common law promises theoretical elegance, considerations of justice, integrity, and the rule of law decisively favor the common-law conception of and commitment to system.
Postema goes on to argue that
 the ambition to regiment common-law chaos to system driving Birks’ cartographic enterprise was not uncommon among common-law jurists. However, Birks’ project differs from these various proposals, not because it is far more fully developed, or because it draws inspiration from Roman law rather than nineteenth-century views of science, but because it makes no concessions to what I loosely called the pragmatism of common-law practice as embraced by Langdell, Holmes, or Pollock. 
Birks’s aim was to reduce the private-law doctrines of (English) common law to rational order, to abstract from the particulars of cases and rules to reveal the scaffolding of implicit concepts and to put them into a manifestly rational order. He regarded this as a kind of logical mapping, as an exercise in value-free conceptual classification. The task was not merely to provide a convenient or efficient means of retrieving rules and doctrines from storage, nor was it to derive the core principles of law from foundational moral notions or principles; it was not meant as a retrieval device since such a device is not theoretical at all — moreover, now we can write software to do that job — and it is not a matter of deriving principles from more fundamental moral standards because that project is theoretical in the wrong way, relying too heavily on contested moral notions. Law, after all, is meant to settle matters, not to fund endless debate, in his view. Birks took Gaius’s work on the Roman law and Linnaeus’s work on biological phenomena as models for his work. “Abstract rationality” and “logic,” not, as Holmes insisted, “experience” or “empiricism,” were the tools. The aim was to construct a scheme or map of core concepts of private common law that was exhaustive (i.e., comprehensive in scope), expressed in a hierarchy of concepts, related by logical relations of inclusion or subsumption rather than justificatory support or dependence, defining exclusive categories, which allow no overlap or extension of content across categories. The aim was to explain the content of existing private common law in terms of concepts and categories based on “similarities that really matter” and dissimilarities that mark “real differences,” but which do not depend on morally substantive values or principles. That is to say, the method was meant to be value-free, enabling lawyers and students of law to identify key legal concepts and to see how disparate parts of the law fit together without having to look to the principles which purport to justify them or the norms they comprise. Once completed, the map was meant to offer lawyers and judges a framework from which to work out clear and determinate solutions to (all?) legal problems. The map, rather than any intuitive, common-law situation sense, was to guide lawyers and judges in assessing legally relevant similarities and differences and locating concrete cases under governing legal norms. Finally, although Birks advanced his preferred map as a tentative hypothesis, open to further refinement, he did not shy from proposing to reshape and even to reject long-established legal categories. The enterprise of mapping the common law was, as he practiced it, not merely expository, in Bentham’s terms; it was also self-consciously critical and revisionary.
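Purely by way of illustration – and not anything Birks or Postema put in these terms – the formal properties Postema lists (a hierarchy of concepts related by subsumption, with categories that are exhaustive and mutually exclusive) are the properties of a simple classification tree, and can be checked mechanically. The category names and the toy 'case' below are invented for the sketch.

```python
# A minimal, hypothetical sketch of a Birks-style "map": a hierarchy of
# categories related by subsumption, whose sibling categories are meant to be
# mutually exclusive and jointly exhaustive. Category names are invented.

class Category:
    def __init__(self, name, test, children=None):
        self.name = name              # label for the category
        self.test = test              # predicate: does a case fall under it?
        self.children = children or []

    def classify(self, case, path=None):
        """Return the chain of nested categories that subsume the case."""
        path = (path or []) + [self.name]
        matches = [c for c in self.children if c.test(case)]
        if len(matches) > 1:
            raise ValueError(f"overlapping categories at {path}: not exclusive")
        if not matches:
            if self.children:
                raise ValueError(f"no category fits at {path}: not exhaustive")
            return path               # leaf reached: the case is fully located
        return matches[0].classify(case, path)

# Invented toy scheme: obligations divided by their source.
scheme = Category("obligations", lambda c: True, [
    Category("consent-based", lambda c: c.get("source") == "consent"),
    Category("wrongs", lambda c: c.get("source") == "wrong"),
    Category("unjust enrichment", lambda c: c.get("source") == "enrichment"),
])

print(scheme.classify({"source": "wrong"}))  # ['obligations', 'wrongs']
```

The only point of the sketch is that 'exclusive' and 'exhaustive' are checkable structural properties of the scheme itself, independent of any justifying moral principle – roughly the value-neutrality Birks claimed for the cartographic exercise.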

Big Data, Credit Profiling and the FCRA

The crisp 'How the Fair Credit Reporting Act Regulates Big Data' by Chris Jay Hoofnagle, for the Future of Privacy Forum's Big Data and Privacy: Making Ends Meet event (September 2013) offers
two observations concerning "big data." First, big data is not new. Consumer reporting, a field where information about individuals is aggregated and used to assess credit, tenancy, and employment risks, achieved the status of big data in the 1960s. Second, the Fair Credit Reporting Act of 1970 (FCRA) provides rich lessons concerning possible regulatory approaches for big data. 
Some say that "big data" requires policymakers to rethink the very nature of privacy laws. They urge policymakers to shift to an approach where governance focuses upon "the usage of data rather than the data itself." Consumer reporting shows us that while use-based regulations of big data provided more transparency and due process, they did not create adequate accountability. Indeed, despite the interventions of the FCRA, consumer reporting agencies (CRAs) remain notoriously unresponsive and unaccountable bureaucracies. 
Like today's big data firms, CRAs lacked a direct relationship with the consumer, and this led to a set of predictable pathologies and externalities. CRAs have used messy data and fuzzy logic in ways that produce error costly to consumers. CRAs play a central role in both preventing and causing identity fraud, and have turned this problem into a business opportunity in the form of credit monitoring. Despite the legislative bargain created by the FCRA, which insulated CRAs from defamation suits, CRAs have argued that use restrictions are unconstitutional. 
Big data is said to represent a powerful set of technologies. Yet, proposals for its regulation are weaker than the FCRA. Calls for a pure use-based regulatory regime, especially for companies lacking the discipline imposed by a consumer relationship, should be viewed with skepticism.

Global Surveillance Standards

'The Feasibility of Transatlantic Privacy-Protective Standards for Surveillance' by Ian Brown considers
the feasibility of the adoption of specific, international human rights law-compliant, transatlantic standards on foreign surveillance, in the context of Edward Snowden’s revelations of large-scale surveillance programs operated by the US National Security Agency (NSA) and selected European intelligence services. The article describes examples of current good State practice, and options for setting standards for transatlantic data sharing, control of state interception and data monitoring capabilities, and oversight of intelligence agencies. It identifies relevant principles developed by civil society and industry groups that are leading political campaigns for reform, and the conditions under which these efforts are likely to succeed. It concludes by discussing the key intergovernmental forums where these standards could be considered.
Brown comments that
The US and European states are all parties to the UN’s International Covenant on Civil and Political Rights (ICCPR), which protects privacy and correspondence under Article 17, while the regional European Convention on Human Rights (ECHR) Article 8 has been interpreted in a robust way by the European Court of Human Rights to restrict governmental surveillance. The European Union’s Data Protection Directive (95/46/EC) and Charter of Fundamental Rights both apply additional strong privacy protections – although not in the area of national security, which is a competence reserved to the Member States. 
This section describes privacy standards developed from these instruments by civil society, political bodies and courts, covering international sharing of personal data, controls on government surveillance activities, and oversight of intelligence agencies. ... 
There are several US-EU agreements allowing bulk data sharing of air passenger and financial transaction records, and a Mutual Legal Assistance Treaty (MLAT) allowing a case-by-case sharing of law enforcement information. The two parties have been attempting to negotiate an overarching data protection agreement, as urged by the European Parliament, but have so far found their differences insurmountable. The US and EU agreed in 2004 to allow EU-based air carriers to supply the US Department of Homeland Security with Passenger Name Record (PNR) data on passengers flying to the US, as required by US law. Without this agreement, airlines would have been in breach of EU data protection law if they supplied the data. A second agreement was reached in 2007, after the European Court of Justice found that the EU concluded the first agreement on the wrong legal basis. A third agreement was made in 2011 following the Lisbon Treaty, which gives the European Parliament (EP) greater power over justice and home affairs issues, and requires its consent for treaties. 
Serious political controversy resulted from the revelation in June 2006 that the Belgium-based SWIFT global inter-bank payment service was providing the US Treasury with access to its transaction database in the US, containing all transaction instructions. The European Data Protection Supervisor criticised the European Central Bank, as a SWIFT oversight group member, for allowing this, while the Belgian data protection authority found that SWIFT had broken European data protection law. 
In response, SWIFT redesigned its computing system so that, from 2010, intra-European bank instructions were not automatically copied to the US processing centre. The EU and US concluded an agreement allowing targeted access to European instructions. However, it does not require a judicial ruling for data transfer; contains a broad definition of terrorism; and provides EU citizens with no legal redress in US courts. There are allegations that the US Treasury is still receiving up to 25% of all SWIFT transactions – billions each year – since SWIFT is only able technically to provide bulk access to data. Controls are in place on searches of this data, with data mining banned, and regular reviews by an EU team. 
Following revelations that NSA has anyway gained unauthorised access to SWIFT’s data systems, the European Parliament resolved that the agreement should be suspended, and reiterated its call for “any data sharing agreement with the US [to be based on] on a coherent legal data protection framework offering legally binding personal data protection standards, including with regard to purpose limitation, data minimisation, information, access, correction, erasure and redress”. 
The EU-US Mutual Legal Assistance Treaty was agreed in 2003, but not concluded until November 2009. It allows the use of shared data for the purpose of criminal investigations and proceedings, and for preventing an ‘immediate and serious threat to ... public security’. Both NGOs and industry have called for all future US foreign data collection to take place through such MLATs, and that the US ‘desist from any and all data collection measures which are not targeted and not based on concrete suspicions’. Industry groups have also called on the US Congress to fully fund the Department of Justice’s processing of MLAT requests, given that they can currently take up to 18 months – far too long for law enforcement agencies’ needs. 
Additionally, a joint set of principles endorsed by over 200 NGOs argues: ‘Where States seek assistance for law enforcement purposes, the principle of dual criminality should be applied. States may not use mutual legal assistance processes and foreign requests for protected information to circumvent domestic legal restrictions on communications surveillance. Mutual legal assistance processes and other agreements should be clearly documented, publicly available, and subject to guarantees of procedural fairness.’ 
Europol and Eurojust have signed agreements with the US on policing (dated 6/12/2001) and judicial cooperation (dated 6/11/2006). Transfer of data to third countries is addressed in the EU Council Framework Decision on the protection of personal data processed in the framework of police and judicial cooperation in criminal matters, which is currently being revised by the European Parliament. 
Since 2006 the European Commission has been negotiating an overarching agreement with the US on information sharing and privacy, initially in an informal High Level Contact Group, and since 2011 with a formal negotiating mandate. The mandate is confidential, but a draft was leaked and is likely to be substantively similar. The intention is for this to be a binding instrument that sets data protection standards without itself authorizing specific data sharing, which would be done in specific further instruments. After three years the privacy standards would apply to existing EU and member state agreements, including the PNR and SWIFT agreements, unless they are brought into conformity in that time. 
In response to the final report from the High Level Contact Group, the European Data Protection Supervisor suggested a number of principles that should guide an EU-US sharing agreement. Most are at least partially included in the European Commission negotiating mandate, but some remain controversial with the US government: 
‘Clarification as to the nature of the instrument, which should be legally binding in order to provide sufficient legal certainty; A thorough adequacy finding, based on essential requirements addressing the substance, specificity and oversight aspects of the scheme. The EDPS considers that the adequacy of the general instrument could only be acknowledged if combined with adequate specific agreements on a case by case basis. 
A circumscribed scope of application, with a clear and common definition of law enforcement purposes at stake; Precisions as to the modalities according to which private entities might be involved in data transfer schemes; Compliance with the proportionality principle, implying exchange of data on a case by case basis where there is a concrete need; Strong oversight mechanisms, and redress mechanisms available to data subjects, including administrative and judicial remedies; Effective measures guaranteeing the exercise of their rights to all data subjects, irrespective of their nationality; Involvement of independent data protection authorities, in relation especially to oversight and assistance to data subjects.’ … 
Because nation states jealously guard their sovereignty over ‘national security’ issues, it will be more difficult to impose international standards on surveillance by intelligence agencies. Taking lawsuits through Europe’s national courts to the European Court of Human Rights is one possible mechanism. NGOs Privacy International and Liberty have commenced actions in the UK Investigatory Powers Tribunal (IPT), which has exclusive competence to hear complaints on intelligence matters, while a Paris court has opened an investigation following complaints from the International Federation of Human Rights and the French League of Human Rights. Big Brother Watch, the Open Rights Group and English PEN have made an application directly to the European Court of Human Rights, claiming that English law cannot provide a remedy for breaches of Article 8 because of the limited capacity of the IPT. 
While Canada, Australia and New Zealand are also members of the ‘Five Eyes’ intelligence alliance, the US and UK governments are the most important actors in Snowden’s leaks. A number of bills have already been proposed in Congress to constrain the NSA’s domestic surveillance, and key existing powers (such as the Patriot Act s 215, which NSA has used to gather records of all US telephone calls) must be renewed between 2015-2017. EFF, ACLU and EPIC have taken a number of court actions in an attempt to uncover and restrain NSA surveillance activities. However, the privacy rights of non-US persons are negligible under the US Constitution and Privacy Act of 1974, which has received very little US political attention. 
A case can be made that the European Convention on Human Rights requires States parties to protect the privacy rights of all those within their jurisdiction – including those spied upon internationally – but achieving this in the US without the cooperation of the executive branch will be extremely difficult, involving modification of the Privacy Act and either a Constitutional amendment or the overturning of several Supreme Court precedents. It will be difficult to persuade the US that it should accept any limitations on its abilities to monitor data and communications relating to non-US persons that physically transit US territory – which NSA Director Keith Alexander has called a huge ‘home-field advantage’.  
However, as a party to the ICCPR and the Council of Europe Cybercrime Convention, civil society has argued that the US is bound ‘to extend privacy protection to non-US citizens and to observe the principles of legality, necessity and proportionality ... in their surveillance activities.’ EPIC has previously made detailed proposals for an update to the Privacy Act. North American and European advocates have also called on the US government to support high EU standards for data protection; reform Patriot and FISA Amendments Act provisions, enact the Consumer Privacy Bill of Rights, stop lobbying against the EU Data Protection Regulation, and to ratify the Council of Europe’s Convention 108 on data protection. 
Internationally, civil society groups have identified some key features of a more human rights-compliant legal framework, and produced a joint set of principles that have been endorsed by over 200 organisations. These include: 
  • Intelligence agencies should only have targeted, limited access to data. EFF suggests ‘a specific person or specific identifier (like a phone number or email address) or a reasonable, small and well-cabined category (like a group on the terrorist list or member of a foreign spy service).’ EDRi suggests a ban on ‘all data collection measures which are not targeted and not based on concrete suspicions’. 
  • Agency access should be to specific records and communications. They should not be authorised to undertake ‘bulk’, ‘pervasive or systematic monitoring, [which] has the capacity to reveal private information far in excess of its constituent parts’ – such as the submarine cable taps that give NSA and GCHQ access to vast quantities of data, which they then winnow down in secret, or be given access to all telephone records. Any data access should trigger legal protections – this should not come only when data is picked out of a large datastream already collected by an agency. 
  • Data collected using special national security powers should be completely blocked from use for other government purposes, including law enforcement. It should be retained for limited periods, and deleted once no longer required. 
  • ‘Metadata’/’communications data’ can be extremely revealing about individuals’ lives, and currently receives very low levels of legal protection. This was highlighted by the EU Court of Justice in its judgment invalidating the Data Retention Directive, which required the storage of such data for a period of up to two years. EFF has called for a requirement for a probable cause warrant for agencies to access previously non-public information e.g. revealing identity, websites/info accessed, who with/where/when people communicate. 
  • Strict limits on intrusion into freedom of association by network analysis (the creation of very large datasets linking people through several communication hops – three in the NSA’s case, which can intrude on the privacy of millions of people). 
  • The incorporation of privacy-protective technologies and limitations within surveillance systems. As President Obama has observed: ‘[T]echnology itself may provide us some additional safeguards. So for example, if people don't have confidence that the law, the checks and balances of the court and Congress, are sufficient to give us confidence that government's not snooping, well, maybe we can embed technologies in there that prevent the snooping regardless of what government wants to do.’  EFF has campaigned against the extension of interception capability requirements to social networking sites and other Internet services, while the joint NGO principles say: ‘States should not compel service providers or hardware or software vendors to build surveillance or monitoring capability into their systems, or to collect or retain particular information purely for State surveillance purposes... and refrain from compelling the identification of users as a precondition for service provision.’ 
  • Illegal surveillance should be criminalised, with effective remedies when individuals’ rights are breached. Illegally gathered material should be inadmissible as evidence, while whistleblowers should be protected for revealing illegal behaviour.
EDRi has demanded ‘that any foreign data collection measures include provisions giving all affected individuals, at the very least, equal rights to US citizens at all stages of an investigation and, to avoid ‘jurisdiction-shopping’, rights that are not significantly lower than any democratically approved safeguards in their country of residence’. The European Commission is pushing for this in their negotiations with the US over a data sharing privacy agreement. ... 
Finally, stronger oversight of intelligence agencies can reduce the likelihood that they misuse their surveillance powers. All democracies acknowledge the necessity of this oversight (especially to protect against the risk of their abuse against political opponents of the government): agencies have very intrusive powers and wide discretion in their use, but the secrecy they operate under severely hinders the scrutiny measures applied to the rest of government. Oversight can also improve agency effectiveness, by challenging waste and poor performance. 
All of the European and North American democracies have special bodies appointed by the legislature and/or executive to oversee intelligence agency activity. Key features for effective oversight include the active participation of opposition parties, the resourcing of expert investigators and advisers, and full access to agency documents. The joint NGO principles state: ‘Oversight mechanisms should have the authority to access all potentially relevant information about State actions, including, where appropriate, access to secret or classified information; to assess whether the State is making legitimate use of its lawful capabilities; to evaluate whether the State has been transparently and accurately publishing information about the use and scope of communications surveillance techniques and powers; and to publish periodic reports and other information relevant to communications surveillance.’ 
Many countries also have specific officials responsible for oversight, including the NSA Inspector General and a to-be-appointed Privacy and Civil Liberties Officer, and the UK’s Interception of Communications Commissioner and independent reviewer of terrorism legislation. In the light of the Snowden revelations, the impact of the US and UK oversight bodies and officials has clearly been limited. A broader membership of oversight panels could be one way to improve their ability to challenge disproportionate surveillance – in particular including individuals with the technical expertise required to understand complex surveillance systems, which it seems has been a severe challenge for the Foreign Intelligence Surveillance Court. Requirements for individuals to undergo highly intrusive security vetting before participating in oversight activities will reduce the diversity of those willing to do so. The European Parliament has stated that “oversight of intelligence services’ activities should be based on both democratic legitimacy (strong legal framework, ex ante authorisation and ex post verification) and adequate technical capability and expertise, the majority of current EU and US oversight bodies dramatically lack both, in particular the technical capabilities”. 
NGOs are campaigning for greater transparency of surveillance activities, with publication of details of all surveillance programmes, allowing the media, civil society and individuals to understand and if necessary criticise agency activity. Industry groups are also attempting to persuade the US government to allow them to publish more detailed statistics on access to their customer data, with Microsoft and Google taking legal action to uphold their ‘clear right under the U.S. Constitution to share more information with the public.’ 
The NGO joint surveillance principles further require notification of surveillance targets once investigations have concluded; publication of aggregate information on the number of requests approved and rejected or contested by courts (including the number of users affected), with a disaggregation of the requests by service provider and by investigation type and purpose; and the removal of confidentiality requirements that block Internet companies from publishing details of the procedures they apply when they receive surveillance orders. 
NGOs have also suggested that secret procedures used to authorise surveillance should feature a ‘privacy advocate’ making a case against the government request. President Obama has already conceded that such an advocate should appear in appropriate cases at the US Foreign Intelligence Surveillance Court. EFF suggests that such an advocate needs full access to case materials, with the ‘independence and protections that public defenders enjoy’.
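A rough back-of-the-envelope helps to make concrete the point about three-hop network analysis in the list above: if each person has even a modest number of distinct contacts, the population reachable within three hops grows roughly with the cube of that number. The contact counts below are illustrative assumptions only, not figures drawn from Brown's article or from the NSA.

```python
# Illustrative arithmetic only: an upper bound on how many people fall within
# `hops` communication hops of a single target, assuming each person has
# `contacts` distinct contacts. Real contact graphs overlap heavily, so the
# true figure is lower; the point is the cubic growth at three hops.

def reachable_upper_bound(contacts: int, hops: int) -> int:
    return sum(contacts ** k for k in range(1, hops + 1))

for contacts in (40, 100, 200):   # assumed average contacts per person
    print(f"{contacts} contacts, 3 hops -> {reachable_upper_bound(contacts, 3):,}")
# 40 contacts, 3 hops -> 65,640
# 100 contacts, 3 hops -> 1,010,100
# 200 contacts, 3 hops -> 8,040,200
```

Even allowing for substantial overlap between contact lists, an analysis seeded on a few hundred targets can plausibly sweep in millions of people, which is the concern the joint principles try to address.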

Stasi and other archives

'The Poisoned Madeleine: Stasi Files As Evidence and History' by Rachel E. Beattie in (2009) 1(3) Faculty of Information Quarterly (University of Toronto) examines
the privacy debates surrounding records from the former East German government's secret police agency, the Stasi. The Stasi records of mass surveillance and punishment of citizens serve as both a record of the East German people's oppression and a vital tool for coming to terms with the communist dictatorship in East Germany. Debates about opening versus sealing these files allowed Germans to analyze the disconnect between rights of privacy and rights of information. The success of the laws created to both open and restrict the use of the Stasi files can be attributed to the innovative way that these laws were able to accommodate those who needed the information as well as those concerned about their privacy. Further, the whole debate raises many questions about the power of sensitive documents. In the course of the debate, the veracity of the files was questioned repeatedly, and individually they are highly suspect. However, taken as a group, they build a very accurate picture of a repressive police state. Additionally, the files work as a collective memory-building project for former East Germans. Through the files, they can acknowledge and work through the trauma of the East German government's surveillance. Ultimately, the German resolution to the problem of the Stasi files serves as an example for other governments struggling with the disposal of extremely sensitive documents. 
Beattie comments
There is a moment at the end of the film The Lives of Others (Das Leben der Anderen, 2006) when, after German unification, the lead character, East German writer and cultural critic Georg Dreyman (Sebastian Koch), goes to look at his Stasi file. Georg is shocked to find within his file the story of another man, Agent Gerd Wiesler (Ulrich Mühe). This scene demonstrates the incredible power of the ability to see the information collected by the secret police in that it positions the Stasi file as a method for reclaiming identity. 
The film neatly articulates the function of the Stasi record as evidence of not only the East German government's oppression but also the East German people's defiance of it. In the film, a visibly shaken Georg reads a transcript of surveillance on his house. Georg had thought he was free from Stasi surveillance, so this scene acts as an important revelation and forces him to reinterpret his view of his former government. These revelations throw Georg's world into confusion as he learns that his deceased former lover had informed on him and that it was the Stasi agent assigned to watch him who removed incriminating evidence before a Stasi search of his apartment. Through the collection of records that is his file, Georg learns the truth – that of the power of one man resisting an unfair government. Thus, suspect records with questionable evidential value in the legal sense of the term come to stand for much more. They mix with former East Germans' personal memory, build identity, and create catharsis for survivors of oppression. Through the Stasi lies, former East Germans can learn the truth about their government. 
This article, through the example of the East German Stasi files, will examine the role of records from totalitarian regimes as objects of power, evidence, and keys to memory. These records are automatically suspect, as they were often filled with conjecture, errors, and straight-out lies. However, it is possible that these questionable records also act as evidence in a different, deeper sense of the word and in fact are important agents of memory and identity both of the criminal excesses of the system that created them and of the events they reflect. The Stasi files are imperfect documents and would likely not be admissible as evidence in a court of law. However, they are vitally important for all Germans, and especially former East Germans, who seek to understand the communist regime, hold responsible individuals accountable, and incorporate their pasts into their identities. In essence, the same records that once oppressed can now free and facilitate healing.
'The Opening of the State Security Archives of Central and Eastern Europe' by Paul Maddrell in (2014) 27(1) International Journal of Intelligence and CounterIntelligence 1-26 comments that
Laws passed since 1991 have opened the state security archives of the former Communist states of Central and Eastern Europe. Such legislation is in place throughout the former Soviet Bloc, but the focus here is on the opening of the German and Romanian archives. The process is far advanced in Germany and much less so in Romania. The contrast between the two very well displays the issues involved. The opening of the archives has been an important tool of de-Communization. The process has been fullest in Germany because of the strength and self-confidence of the German legal system and because of the weakness of the Communists’ political position. It has been partial in Romania because the legal system there lacks authority, independence, and self-confidence, and the Communists have remained strong. 
Important differences exist between the various former Soviet bloc countries, but, generally speaking, the institutions which now hold the Communist-era state security records have four tasks: 
1. to enable the connections of public officials with the former Communist security and intelligence services to be investigated so that those who collaborated with those services can be removed from public office (“lustration,” as it is called); 
2. to make available to targets of Communist-era surveillance and repression the records held on them; 
3. to make records available for the prosecution of those who committed crimes during the period of Communist rule; and 
4. to enable historians and journalists to write the history of Communist surveillance and repression more accurately and fully. 
These four tasks serve the purposes of building stable democratic institutions which enjoy public trust, and of giving victims of the Communist security services a measure of retrospective justice.
In discussing 'history writing and journalism' Maddrell comments
The East German Stasi’s files are expressly, by law, to be made available to historians, journalists, and other researchers to ensure that its long-secret role in maintaining the Communist regime in power is revealed to its victims and the whole world. This task is becoming increasingly important for the BStU since its responsibility for vetting will end in the near future and most of those on whom the Stasi kept a file and who want to read it have already done so, making this too a declining area of activity for the agency. 
To ensure that historical research was undertaken, the Law on the Stasi Records provided for the establishment of a special team of researchers, employed by the BStU, to publish histories of the Stasi and its operations. Despite a reform undertaken in 2006, the researchers in the BStU’s Research and Education Department (Abteilung Bildung und Forschung) still have privileged access to the Stasi’s records since they may read them in the original, without any redactions, whereas most other researchers may read only either original files which contain no personal information (or other information exempt from release) or copies from which this information has been redacted. They must also wait for a long time for access to the records for which they have applied, and have to rely on a case officer being competent to make suitable records available to them. Despite their advantages, however, the BStU’s researchers are under the same legal restrictions as other readers as regards what they publish. 
The works of the BStU researchers dominate the academic literature on the Stasi—for better and for worse. Research on the Stasi has advanced further than that on any other Communist security service, due in part to the BStU’s own researchers, who started their work earlier than those in the other ex-Communist states. Moreover, fewer records were destroyed in East Germany than elsewhere. The records have done much to shed light on the repression carried out by the Stasi. Yet the researchers’ agenda has not been comprehensive. For example, more than twenty years after the Stasi was dissolved, very little work has been done on its large-scale counterintelligence operations, and little attempt has been made to compare the Stasi’s activities with those of the other Soviet Bloc security services. There had been little engagement in the BStU’s publications with the academic literature on intelligence in other languages, particularly English. The BStU’s publications vary greatly in quality, the best having been written by a few prolific individuals. There is much to be said for facilitating further research on the Stasi by scholars and doctoral candidates who have been through the university system; they would broaden the range of perspectives on the Stasi, and be better able to integrate scholarship on the Stasi into the literature on intelligence available in other languages. 
Romania has, again, been less fortunate. The Communists remained in power longer and the state security service remained longer in possession of its records. Many files have been destroyed. How many, no one will ever know. The SRI’s archivist has suggested that 130,000 files were destroyed during the uprising of 1989, but this estimate is only one of many (the Romanian Information Service’s figure of 27,000 is perhaps an indication of how unreliable its figures are). The larger figure does not include the files which were destroyed in the 45 years before 1989. Estimates relating to the older destroyed files run into the hundreds of thousands. Where to find many of those that have not been destroyed is unknown, as is, consequently, the size of the available archive. More than 1.9 million files are known to exist. Starting later, when fewer records were available, and when it was not known what records were available, has meant that serious historical writing about Romania’s Securitate has barely begun. Uncertainty prevails about key aspects of the service’s history. The records destroyed at the order of the Communist Party in the late 1960s (273,805 files on “collaborators,” according to the Romanian Information Service) concerned party members who supplied information to the Securitate; those who destroyed them were keen to ensure that the Party’s close involvement in the security service’s surveillance and repression remained obscure. Such destruction will create significant gaps in the histories of the service that can be written. 
Another example is the size of its informer network. The Securitate made a distinction between “collaborators” (Communist Party members who provided it with information) and “informers” (non-Party members who reported to it). Official figures for the size of its human informant network, seemingly obtained from the SRI in 1993, indicate that 507,003 “informers” reported to the Securitate during the course of the Communist regime, and an unknown further number of “collaborators.” The figure seems to be based on the number of informer files in existence in 1993 and therefore does not take account of file destruction—at least 78,000 files are believed to have been destroyed in the years 1989–1993 alone. In 1989, the Securitate had an informer network of 144,289 and a full-time staff of 15,087 officers. This force monitored a population of 21.5 million. These figures are far from being accepted. A former Securitate officer, Liviu Turcu, has maintained that the service had one million informants. The Romanian Information Service long claimed that about 100,000 informers reported to the Securitate, a figure believed to be far too small. No reliable figures for the size of the informant network after 1989 are available. These uncertainties do not exist in the case of Germany. The Stasi’s size, including its network, is known: some 174,000 informers in 1989, reporting to a full-time staff of 102,000 officers. Lavinia Stan, the leading authority on Romania’s efforts to come to terms with its Communist past, claims that Romania had a larger informer network, relative to the population’s size, than East Germany. This is unlikely to have been the case.
'Of Provenance and Privacy: Using Contextual Integrity to Define Third-Party Privacy' by Steven Bingo in (2011) 74 The American Archivist 507–521
approaches the issue of third-party privacy by examining how contextual factors related to the creation and use of records can inform decisions to restrict or open access. Helen Nissenbaum’s theory of contextual integrity, which originates from the discourse surrounding digital privacy, is applied as a means to expand an archival concept of provenance to address privacy risks. Applying contextual integrity to privacy decisions also allows archivists to frame decisions in terms of circulation, rather than as a simple dichotomy between access and restriction. Such nuance is invaluable when considering the impact of making records available digitally. 
The central problem identified by many who have written about the protection of third-party privacy in manuscript collections is the lack of clear guidelines or principles that can be enacted on a profession-wide level. Because of the ethical dimensions of the privacy debate, concern for consistency from institution to institution is all the more pressing. At the center of this debate is what Mark Greene refers to as “the tension between access and property or privacy rights.” Bound up in this tension are concerns regarding the unintentional censorship of materials caused by restrictions on one hand and maintaining the trust of donors and third parties on the other. 
Separate from the archival discussion is a discussion in the fields of computer science and information ethics regarding the privacy of digital information. One of the concepts to emerge from this discourse is Helen Nissenbaum’s theory of “contextual integrity.” One can begin to define information privacy rights, Nissenbaum argues, by understanding norms related to the context in which information is supplied, gathered, and used. In other words, the norms of privacy surrounding a document may be determined by investigating a document’s provenance. While this may seem obvious to archivists, emphasizing provenance as a tool to negotiate privacy concerns focuses the discussion toward appraisal, which has not been covered in much depth, other than to say that archivists should work with donors to identify and properly mediate risk. Contextual integrity, as I will argue, provides archivists another tool with which to tackle privacy concerns in a more prospective, upstream manner. As suggested by some current literature, dealing with risk prospectively provides opportunities to make decisions at broad levels of organisation. 
A second application of contextual integrity concerns access. Contextual integrity, Nissenbaum states, is violated when information divulged within one context is recast in another context, particularly in how the information is allowed to flow in radically different ways. Nissenbaum cites the aggregation of consumer information gathered online as an example of how information provided in one context is appropriated in new contexts without the subject’s knowledge. As archivists embark upon mass digitization projects and seek out options for making born-digital documents publicly accessible, the question of reframing documents in new contexts becomes extremely pertinent. After summarizing the current archival debate regarding third-party privacy, I will flesh out the specifics of contextual integrity. I will then articulate how contextual integrity translates into an archival concept of provenance and apply it to questions of appraisal and access. As a theory that incorporates both appraisal and access, I argue that contextual integrity can help align appraisal and access policies in a systematic and holistic fashion. I will also point out the limitations of Nissenbaum’s theories within an archival setting that arise out of challenges unique to archivists regarding access and privacy. Specifically, contextual integrity does not resolve questions of privacy so much as it identifies key factors that may bear upon the sensitivity of a document, such as the roles of the creator, recipient, and subject of a document. While I promote contextual integrity as a means of bringing privacy risks into focus, I also believe that the default position regarding restrictions should be on the side of access. In other words, it is important for the archivist to prove why a document presents a privacy risk great enough to override our duty as archivists to provide access. I present contextual integrity as a tool within a larger decision-making process informed by our professional ethics.
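Nissenbaum's framework is commonly summarised in terms of a few parameters – the actors involved (sender, recipient, data subject), the type of information, and the transmission principle governing the flow – with contextual integrity violated when a flow departs from the informational norms of its originating context. As a loose sketch of that idea only (the contexts, norms and example flows below are invented, not drawn from Nissenbaum's or Bingo's texts), one might model it as follows:

```python
# A loose sketch of contextual integrity as a flow check. Contexts, norms and
# example flows are invented for illustration; a fuller model would also
# condition on the data subject's role and on richer transmission principles.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str     # e.g. "with consent", "under subpoena"

# Informational norms keyed by context: flows the context treats as appropriate.
NORMS = {
    "archival access": {
        ("donor", "archivist", "correspondence", "per deed of gift"),
        ("archivist", "researcher", "correspondence", "after restriction review"),
    },
}

def respects_contextual_integrity(context: str, flow: Flow) -> bool:
    """True if the flow matches an informational norm of its originating context."""
    key = (flow.sender, flow.recipient, flow.info_type, flow.transmission_principle)
    return key in NORMS.get(context, set())

# A flow the archival context treats as appropriate...
ok = Flow("archivist", "researcher", "third party", "correspondence",
          "after restriction review")
# ...and the same material recast in a different flow (e.g. bulk online publication).
recast = Flow("archivist", "the public", "third party", "correspondence",
              "bulk digitisation")

print(respects_contextual_integrity("archival access", ok))      # True
print(respects_contextual_integrity("archival access", recast))  # False
```

Framing access decisions this way is what lets archivists reason about circulation rather than a bare open/closed switch: the question becomes which flows of a record are consistent with the context of its creation, not simply whether anyone may see it.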

Interpol Red Notices

Last year's Strengthening respect for human rights, strengthening INTERPOL report [PDF] by Fair Trials International states that
Police, judges and prosecutors across the globe should work together to fight serious crime. Mechanisms designed to achieve this, however, must be protected from abuse to ensure that their credibility is not undermined and to prevent unjustified violations of individuals’ rights. This Report is designed to assist INTERPOL, the world’s largest police cooperation body, in meeting this challenge. 
'Red Notices’, international wanted person alerts published by INTERPOL at national authorities’ request, come with considerable human impact: arrest, detention, frozen freedom of movement, employment problems, and reputational and financial harm. These interferences with basic rights can, of course, be justified when INTERPOL acts to combat international crime. However, our casework suggests that countries are, in fact, using INTERPOL’s systems against exiled political opponents, usually refugees, and based on corrupt criminal proceedings, pointing to a structural problem. We have identified two key areas for reform. 
First, INTERPOL’s protections against abuse are ineffective. It assumes that Red Notices are requested in good faith and appears not to review these requests rigorously enough. Its interpretation of its cardinal rule on the exclusion of political matters is unclear, but appears to be out of step with international asylum and extradition law. General Secretariat review also happens only after national authorities have disseminated Red Notices in temporary form across the globe using INTERPOL’s ‘i-link’ system, creating a permanent risk to individuals even if the General Secretariat refuses the Red Notice. Some published Red Notices also stay in place despite extradition and asylum decisions recognising the political nature of the case. This report therefore recommends that:
(a) Combat persecution: INTERPOL should refuse or delete Red Notices where it has substantial grounds to believe the person is being prosecuted for political reasons. National asylum and extradition decisions should, in appropriate cases, be considered decisive.
(b) Thorough reviews: INTERPOL should require national authorities to provide an arrest warrant before they can obtain a Red Notice, and should conduct a thorough review of Red Notice requests and Diffusions against human rights reports and public information.
(c) Draft Red Notices only in urgency: INTERPOL should ensure that Red Notice requests are not visible to other NCBs while under review except in urgent cases; the NCB should justify its use of the urgency exception and INTERPOL should monitor exception usage closely.
(d) Continual review: INTERPOL should systematically follow up with countries which have reported arrests based on Red Notices, six or 12 months after it is informed of an arrest, and enquire as to the outcome of the proceedings following the arrest.
Secondly, those affected by Red Notices currently lack an opportunity to challenge the dissemination of their information through INTERPOL’s databases in a fair, transparent process. INTERPOL, which has apparently not, to date, been subjected to the jurisdiction of any court, must provide alternative avenues of redress and effective remedies for those it affects. However, the Commission for the Control of INTERPOL’s Files (CCF), its existing supervisory authority, is a data protection body unsuited to this responsibility and lacks essential procedural guarantees. INTERPOL’s judicial immunity is thus currently unjustified. This Report therefore recommends:
(a) Reform the CCF: INTERPOL should develop the competence, expertise and procedures of the CCF to ensure it is able to provide adequate redress for those directly affected by INTERPOL’s activities. It should explore the idea of creating a separate chamber of the CCF dedicated to handling complaints, leaving the existing CCF to advise horizontally on data protection issues.
(b) Ensure basic standards of due process: INTERPOL should ensure that reforms of the procedures of the CCF provide for the following essential safeguards: (i) adversarial proceedings with a disclosure process; (ii) oral hearings in appropriate cases; (iii) binding, reasoned decisions, which should be published; and (iv) a right to challenge adverse decisions.
If INTERPOL implements these reforms, police will spend less time arresting refugees and political exiles, at great human cost to those involved, and more time arresting criminals facing legitimate prosecutions. This will enhance confidence in the Red Notice system and, thereby, INTERPOL’s credibility with national authorities.
The report features the following conclusions and recommendations -
General Conclusions 
Red Notices, despite their nature as mere electronic alerts, bring about concrete consequences and often have serious human impact, placing individuals at risk of arrest and lengthy detention, restricting freedom of movement, and impacting upon the private and family life of the individual concerned. 
INTERPOL’s rules properly seek to exclude inappropriate uses of its systems. They seek to restrict the human impact associated with Red Notices to only such cases as fall within INTERPOL’s remit, ensuring that human rights restrictions caused by INTERPOL are justified and proportionate. This conclusion is, however, restricted to the rules themselves, as distinct from their application in practice. 
The Problem of Abuse 
In practice, INTERPOL’s Red Notices are being used as political tools by NCBs, and are being issued and maintained on the basis of criminal cases which have been recognised as being politically-motivated by extradition courts and asylum authorities. 
INTERPOL’s systems are also being used in respect of criminal cases which arise as a result of prosecutorial and judicial corruption, sometimes deriving from private disputes with powerful individuals. INTERPOL cannot necessarily be expected to detect such abuses ab initio, but those affected need an opportunity to present their complaint before an independent authority. 
In some cases, countries are failing to seek extradition when this would be possible. This represents a misuse of a Red Notice and breaches INTERPOL’s rules, and may provide evidence of political abuse. INTERPOL recognises this as an abuse. 
Detecting and Preventing Abuse 
INTERPOL’s interpretation and application of Article 3 is unclear. We recommend that INTERPOL provide detailed information on how it assesses political motivation and the significance it attaches to extradition refusals and asylum grants. 
On the basis of the available information, it appears that INTERPOL is applying a test under Article 3 which is out of step with international asylum and extradition law. We recommend that INTERPOL adopt the test in Article 3(b) of the UN Model Treaty on Extradition as it is applied by extradition courts. As a first step, INTERPOL could commission and publish an expert study analysing relevant international extradition and asylum law and its own obligations. 
There is insufficient information available to understand how INTERPOL approaches the task of reviewing Red Notice requests and Diffusions after they have been published. We recommend that INTERPOL make more information publicly available about this, within reasonable limits. 
Proactive background research into the requesting country’s human rights record and the circumstances of the case is essential to detecting political motivation cases. We recommend that INTERPOL provide more disclosure about the extent to which it does this. 
The provision of arrest warrants may help detect cases of abuse. We recommend that INTERPOL require NCBs to supply arrest warrants, either at the point of requesting a notice or promptly thereafter if the matter is urgent. INTERPOL should also insist upon complete factual circumstances being provided in Red Notice requests and Diffusions. 
The i-link system, which allows NCBs to issue Draft Red Notices, unrealistically assumed that all NCBs would respect INTERPOL’s rules. That said, human checks within 24 hours minimise the risk of arrest on the basis of abusive Draft Red Notices. However, there remains a risk of NCBs accessing and copying Draft Red Notices, which may create a permanent risk to the person concerned even if INTERPOL eventually refuses the Red Notice request. 
We recommend that INTERPOL change the standard process of the i-link system so that Draft Red Notices are, by default, not visible to other NCBs while they are under review by the General Secretariat. In urgent cases, the NCB should be able to push an ‘override’ button, providing an explanation of the circumstances justifying this. The General Secretariat and CCF would then be required to assess carefully whether this power was being used appropriately. 
We recommend the adoption of a clear rule requiring the deletion of a Red Notice or Diffusion when either (a) a request for extradition based on the proceedings giving rise to the Red Notice/Diffusion has been rejected on political motivation grounds or (b) asylum has been granted under the 1951 Convention on the basis of the criminal proceedings giving rise to the Red Notice/Diffusion. 
We recommend that, where the extradition refusal or asylum grant is made on the basis of criminal allegations which are not the same as those giving rise to the Red Notice, this should give rise to a strong presumption in favour of deleting the Red Notice. 
In either case, the NCB concerned should have the opportunity to bring information to the attention of INTERPOL in order to maintain the Red Notice. However, the burden should be on the NCB to justify why neither of the above rules should apply. 
It is not clear how INTERPOL understands the significance of a grant of international protection or a refusal of extradition on human rights grounds for the validity of a Red Notice or Diffusion. We recommend that INTERPOL publish a Repository of Practice on the interpretation and application of Article 2 of its Constitution. 
We recommend that INTERPOL institute a practice whereby the General Secretariat, when informed of an arrest, systematically follows up with the NCB of the arresting country either six or 12 months after the event, and asks standard questions as to whether an extradition request was made and whether this was accepted or refused, and on what grounds. 
Sanctions for misuse of INTERPOL’s systems can play a part in preventing future abuses. We recommend that INTERPOL explain what criteria are applied to determine when an NCB has failed to fulfil its obligations, and how many times this power has been used. 
Creating Effective Remedies 
Given the human impact of Red Notices and Diffusions, those affected must have access to effective remedies to obtain redress when NCBs abuse INTERPOL’s systems. 
We conclude that, in so far as INTERPOL currently escapes the jurisdiction of national courts, it is under a responsibility, in accordance with Article 2 of its Constitution, to provide effective remedies within its own internal structure. This is also a condition of its judicial immunity. 
The CCF, in its broader role of advising INTERPOL on a horizontal basis, appears to be working responsibly. This conclusion is, however, without prejudice to our assessment of its function of handling individual complaints. 
The ability to withhold disclosure is not inherently objectionable, provided the exemptions recognised by the CCF are interpreted broadly so as to enable a person who has been arrested to access their file, even if they do not possess documents specifically mentioning INTERPOL. However, the access process works far too slowly, because NCBs do not respond swiftly enough to the CCF’s enquiries. 
We recommend that the CCF and/or INTERPOL establish a clear rule requiring NCBs to respond to access requests within one calendar month. Failure to comply with this time limit should result in disclosure of the full file and, thereafter, deletion of the information. 
The CCF, in handling complaints requesting the deletion of information, falls far short of basic standards of fairness, effectiveness and independence. In light of these shortcomings, INTERPOL’s judicial immunity is currently unjustifiable. 
The CCF’s failure to meet basic standards in the processing of individual complaints results from its relatively weak position within INTERPOL, in particular its meagre resources and over-dependence on the General Secretariat for finance and legal expertise. It is also essentially a data protection body required to perform the role of a specialised human rights tribunal. 
We recommend that INTERPOL seek to enhance the competence, expertise and role of the CCF, and develop its procedures to be more transparent, adversarial, and effective. We suggest that INTERPOL explore the idea of creating a separate chamber of the CCF, responsible for handling complaints. Reforms of the complaints procedure should ensure, as a minimum, (i) a functioning disclosure system, (ii) a right to be heard in appropriate cases, (iii) binding and reasoned decisions, which should be published on INTERPOL’s website subject to necessary anonymisation, and (iv) a requirement for NCBs to cooperate so as to achieve reasonable time frames for proceedings.
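The i-link recommendation above (Draft Red Notices hidden by default, with an urgency override that must be justified and then audited) amounts to a default-deny visibility rule. The following is a minimal sketch of that rule in Python; the class and field names are my own illustrative assumptions, not INTERPOL's actual i-link schema.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative sketch only: names and fields are assumptions, not the real i-link schema.
@dataclass
class DraftRedNotice:
    requesting_ncb: str
    subject: str
    submitted_at: datetime = field(default_factory=datetime.utcnow)
    review_complete: bool = False                # General Secretariat review not yet finished
    urgency_override: bool = False               # the NCB has pressed the 'override' button
    urgency_justification: Optional[str] = None  # explanation the NCB must supply with the override

    def visible_to_other_ncbs(self) -> bool:
        """Default-deny: a draft stays hidden until review, unless a justified urgency override applies."""
        if self.review_complete:
            return True
        return self.urgency_override and bool(self.urgency_justification)

    def needs_oversight(self) -> bool:
        """Overrides used before review should be flagged so the General Secretariat and CCF can audit them."""
        return self.urgency_override and not self.review_complete

The design point is simply that a request which fails review is never seen by other NCBs, while any use of the urgency exception leaves a record that can be checked afterwards.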

08 May 2014

Clouds and the proposed EU Data Protection Regulation

'Cloud Accountability: The Likely Impact of the Proposed EU Data Protection Regulation' by W. Kuan Hon, Eleni Kosta, Christopher Millard and Dimitra Stefanatou considers
the implications for cloud accountability of current proposals under the draft General Data Protection Regulation to modernise the EU Data Protection Directive. It makes recommendations aimed at improving the technology-neutrality of the proposals and their appropriateness for cloud computing, with a view to ensuring that the proposals will maintain or enhance protection of personal data for data subjects while not unduly deterring cloud computing. 
It is based on documents publicly available as at 14 February 2014, and analyses and compares the European Commission's January 2012 draft, the LIBE Committee's November 2013 draft (since approved unamended by the full European Parliament in March 2014), and the first full draft of the Council published in December 2013. 
A similar work by Kosta et al was noted here.

In the current 60-page paper the authors offer several recommendations -
  • Cloud and other new technologies should not be treated as risky per se – risks depend on their intended use and the type and sensitivity of the data concerned. 
  • For technology neutrality, only persons with logical access to intelligible personal data should be regulated. Physical access is not necessary or sufficient to access intelligible personal data. 
  • The ‘personal data’ definition triggers obligations and liability in an ‘all or nothing’ fashion and could encompass most data. A concept of pseudonymous data is one way to calibrate obligations, but definitions and obligations for each data type need further consideration. 
  • The extra-territorial scope of EU data protection law is unclear. To avoid discouraging non-EU controllers and providers from using EU data centres and EU cloud providers or sub-providers, the status of data centres and hardware/software providers should be clarified explicitly, as should the key definitions of ‘establishment’, ‘context of activities’ and ‘offering’. 
  • Clarity is needed regarding which obligations should trigger ‘strict liability’ for any non-compliance regardless of fault, and which should be risk-based, eg requiring only the taking of measures appropriate to the individual situation or reasonable measures to industry standards. 
  • We support a more focused risk-based approach, as opposed to requiring privacy impact assessments etc in a broad range of situations that may not warrant it from a risks perspective. 
  • To incentivise adoption of accountability measures such as codes of conduct, certifications, and seals, consequences of adoption should be made clear. In particular, defences or reductions in liability should be available to those who have obtained and complied with such measures. 
  • Defences available to intermediaries under the E-Commerce Directive should be available to cloud providers if they do not know that data stored with them by their users are personal data, or do not or cannot access intelligible personal data. Also, provisions regarding ‘instructions’ to processors should instead target the underlying mischief, namely misuse or disclosure of intelligible personal data by processors.
  • Rather than impose joint liability on processors and co-controllers, a more fault-based allocation of liability is recommended. Careful consideration is needed of exactly which obligations should be imposed on processors. 
  • Consideration should be given to abolishing the data export restriction and international agreement sought on jurisdictional conflicts and rules restricting, or compelling, government access to personal data. If the restriction is retained, ‘transfer’ should be defined by reference to intention to give or allow logical access to intelligible personal data to a third party recipient who is subject to the jurisdiction of a third country. Prior authorisations by data protection authorities are not practicable and should be required only in selective appropriate cases. Any ‘legitimate interests’ derogation should be based not on size or frequency of transfers but on risk-appropriate safeguards and a balancing against data subjects’ rights and interests. 
  • We support updating security requirements in line with general concepts of confidentiality, integrity and availability. 
  • The requirements and scope of data protection by design and default need to cater for infrastructure providers who may not know the nature of data processed using their infrastructure, and controllers and processors who may have limited control over infrastructure. 
  • Clarification is needed regarding the types of data breaches to be notified, thresholds and the detailed contents of any public register, but we support the deletion of ‘hard’ time limits. 
  • The right to data portability is very limited in scope, and this could be reconsidered, as well as its relationship with the right to erasure.
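The second recommendation above, that only persons with logical access to intelligible personal data should be regulated, can be illustrated with a short client-side encryption sketch. This is my illustration rather than the authors', and it assumes the third-party Python cryptography package and a hypothetical data record:

# A provider storing only ciphertext has physical access to the bytes but no logical
# access to intelligible personal data, because the key never leaves the customer.
from cryptography.fernet import Fernet  # assumes: pip install cryptography

# Customer (controller) side: the key is generated and kept by the customer, not the provider.
key = Fernet.generate_key()
cipher = Fernet(key)
record = b"name=Jane Citizen; diagnosis=..."  # hypothetical personal data
ciphertext = cipher.encrypt(record)

# Provider side: all the cloud provider ever holds is the unintelligible blob.
stored_blob = ciphertext

# Only the key-holder can turn the stored bytes back into intelligible personal data.
assert cipher.decrypt(stored_blob) == record

On the paper's approach, the provider in this position would fall outside the regulated class, whereas a provider holding the key (or the plaintext) would not.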

06 May 2014

Adulthood and anonymity

I'm underwhelmed by 'Recognizing Younger Citizens: Statutes and Structures in Support of Earlier Adulthood' by John Lunstroth in (2014) 18(1) Michigan State College of Law Journal of Medicine and Law, which comments
I question the way the law regulates adolescents by looking at the interaction between i) pre-Enlightenment norms in which scientific and political notions of human development did not exist for all practical purposes, ii) Enlightenment philosophical and scientific norms of human development, and iii) modern legal norms of human development. Are there ways in which pre-modern holistic notions of being human can contribute to a more rational approach to adolescents by the law? How can the ideology of childhood, which through Enlightenment scientific and political thought, is reified (has the illusion it is real), naturalized (has the illusion it is natural) and legitimized (encoded in the law), be overcome for adolescents? 
I make four contributions to the discussion of children’s rights: 
1. I make an argument that the authority of science is overrated when dealing with children. Science is used by the medical and other professions to justify their authority and competence in certain environments, including the classroom and at the bedside. It is used primarily by physicians, but is used by the other authorities that make decisions on behalf of adolescents: parents, teachers, and, in general, the state. I argue that the life sciences are now both institutionally corrupt and theoretically unsound, and therefore should no longer be a proxy for or token of authority. Decisions about children, especially adolescents, based on the life sciences, or made by life scientists (including physicians) whose authority primarily flows from their social authority as scientists, should be seen as categorically problematic and therefore categorically devalued. 
2. I suggest that states and other socio-political entities should recognize committees of adolescents to represent the wishes and needs of adolescents and other children. 
3. I suggest that, as a general matter, the legal presumption that adult competence happens only at 18 or 21, with carve out exceptions for some earlier competencies, should be reversed, and that the law should recognize adult competence at 14 (e.g.) with exceptions for some later competencies. And
4. Until more general reforms can be enacted, I join with those who suggest that the general rule for adolescents facing terminal or grave illnesses should be to deem them competent to make their own medical decisions.
In Australia we have Gillick Competence and a somewhat less jaundiced view of the sciences. Lunstroth comments that
The sciences of living things are institutionally corrupt. Peer review, ghost articles, intellectual property regimes in the academy, etc. are now the norm. Our post-WW2 science has always been in the thrall of industry, but never to the extent it is now. Corruption has now become naturalized in the sciences of living things; the idea of conflicts of interest has become so weakened that it barely exists; and corruption is predictable as a consequence of Darwinian determinism. Oligarchy instrumentalizes science for profit-seeking. This is an end in practice of the sciences of living things because it is simply not reliable anymore. The ideals and values of science-as-the-pursuit-of-truth-come-what-may are in general deceased, or at the very least chronically and very seriously ill. 
Therefore, the promise of reason in medicine is now questionable, and its role in decision-making in the clinic with chronically and fatally ill children should not be over-rated; the ideology of medicine must be unmasked if we are to give children more authority to make decisions affecting their integrity.
A more nuanced view is provided in 'The Neurobiology of Decision-Making in High Risk Youth &  the Law of Consent to Sex' (Indiana University Robert H. McKinney School of Law Research Paper No. 2014-2) by Jennifer Drobac and Leslie A Hulvershorn, who comment that
 Under certain circumstances, the law treats juvenile consent the same as it treats adult decisions, even though a growing body of scientific research demonstrates that children make decisions using less developed cognitive processes. This Article highlights the gaps and deficiencies of legal treatment of juvenile decisions in the context of sex with an adult, as well as integrates new scientific information regarding the decision-making of minors in risky situations. Part I examines recent pediatric brain imaging findings during a risky decision-making task. Specifically, a new study demonstrates that brain scan results differed between juveniles at high risk for potentially harmful or criminal conduct and healthy children. These differences within juvenile populations support the notion that particular biological and environmental traits in children may further distinguish juvenile decision-making from adult decision-making. Part II explores the potential impact of these novel neurobiological findings on the legal treatment of juvenile “consent” to sexual activity. A discussion and summary of the juvenile sex crime statutes of all fifty states demonstrates how the law attributes legal capacity and ability to make legally binding decisions to even very young teenagers. Part II also highlights where state civil and criminal law treat juvenile “consent” inconsistently. Criminal and civil laws’ treatment of juvenile capacity, in the context of sexual activity with an adult, is not congruent with recent neurobiological discoveries regarding juvenile risk-taking and decision-making. Therefore, society should reconsider designations regarding legal capacity in light of novel neurobiological findings regarding decision-making in juveniles.
The UK Telegraph, in reporting the demise of Cornelius Gurlitt, meanwhile states that
... Cornelius was stopped during a routine passenger check as he returned to Germany by train from Zurich. “He appeared nervous,” a customs official later noted. 
Although he was released, German authorities remained suspicious. Their investigations showed that he was not, as he had claimed, a resident of Salzburg and he had no clear form of income. He did not have a bank account, pension or insurance, had never had a job, or been married and, as he was not registered with the authorities, was not known either to the tax authorities or to social services. According to an article in Der Spiegel, Gurlitt stopped watching television in 1963, never used the internet and booked hotel rooms months in advance when he needed to travel. “He was a man who didn’t exist,” said one official. 
The police applied for a warrant to enter his rented apartment, and it was there in February 2012 that, behind heaps of tinned food long past its sell-by date, they discovered the astonishing stash of missing artworks. Alongside Old Masters and pieces from the 19th century, the haul was found to include many works by artists considered “degenerate” by the Nazi regime — such as Franz Marc, Paul Klee, Marc Chagall, Max Beckmann and Otto Dix. Gurlitt found himself at the heart of a global sensation.

Foreign Affairs

'Human Rights Treaties and Foreign Surveillance: Privacy in the Digital Age' by Marko Milanovic in Harvard International Law Journal comments
The 2013 revelations by Edward Snowden of the scope and magnitude of electronic surveillance programs run by the US National Security Agency (NSA) and some of its partners, chief among them the UK Government Communications Headquarters (GCHQ), have provoked intense and ongoing public debate regarding the proper limits of such intelligence activities. Privacy activists decry such programs, especially those involving the mass collection of the data or communications of ordinary individuals across the globe, arguing that they create an inhibiting surveillance climate that diminishes basic freedoms, while government officials justify them as being necessary for the prevention of terrorism. 
The purpose of this article, however, is not to assess the general propriety or usefulness of surveillance programs or their compliance with relevant domestic law. I do not want to argue that electronic surveillance programs, whether targeted or done on a mass scale, are per se illegal, ineffective or unjustifiable. Rather, what I want to look at is how the legality of such programs would be debated and assessed within the framework of international human rights law, and specifically under the major human rights treaties to which the ‘Five Eyes’ and other states with sophisticated technological capabilities are parties. 
In the wake of the UN General Assembly's 2013 resolution on the right to privacy in the digital age, it can be expected that electronic surveillance and related activities will remain on the agenda of UN bodies for years to come, especially since the political relevance of the topic shows no signs of abating. Similarly, cases challenging surveillance on human rights grounds are already pending before domestic and international courts. The discussion has just started, and it will continue at least partly in human rights terms, focusing on the rights and interests of the affected individuals, rather than solely on the interests and sovereignty of states. 
The primary purpose of this article is to advance this conversation by looking at one specific, threshold issue: whether human rights treaties such as the ICCPR and the ECHR even apply to foreign surveillance. The article will show that while there is much uncertainty in how the existing case law on the jurisdictional threshold issues might apply to foreign surveillance, this uncertainty should not be overestimated – even if it can and is being exploited. The only truly coherent approach to the threshold question of applicability, I will argue, is that human rights treaties should apply to virtually all foreign surveillance activities. That the treaties apply to such activities, however, does not mean that they are necessarily unlawful. Rather, the lawfulness of a given foreign surveillance program is subject to a fact-specific examination on the merits of its compliance with the right to privacy, and in that, I submit, foreign surveillance activities are no different from purely domestic ones.

Surveillance Safeguards

'Modern Safeguards for Modern Surveillance: An Analysis of Innovations in Communications Surveillance Techniques' by Gus Hosein and Caroline Wilson Palow in (2013) 74(6) Ohio State Law Journal [PDF] comments that
 To understand communications surveillance law is to try to resolve the three-body problem of simultaneously comprehending law, policy, and technology while at least two of the three may be changing at any moment in time. This makes it one of the more exciting domains for scholars, analysts, and technologists, but it is also one of the most challenging. 
Communications surveillance is a rapidly shifting landscape from the perspectives of policy and technology. Governments across the world are deploying new techniques and technologies with alarming speed. We are achieving new levels of surveillance, quickly approaching what Justice Brandeis warned about when he said that “[s]ubtler and more far-reaching means of invading privacy have become available to the Government,” and that “[d]iscovery and invention have made it possible for the Government, by means far more effective than stretching upon the rack, to obtain disclosure in court of what is whispered in the closet.” There is a rapidly growing market in communications surveillance technologies that can conduct surveillance in ways that just ten years ago were well beyond the limits of our technology and often our imaginations. 
With widespread innovations in policy and technology across the world, what is most surprising is how old-fashioned our legislation is, and in turn, our safeguards. Many communications surveillance laws were drafted in the 1980s and 1990s, with updates in the 1990s and early 2000s. Many countries across the world are still introducing laws on communications surveillance, but their models are quite old, often borrowing language from laws from the 1990s (in the case of U.S. and UK law) and international conventions such as the Council of Europe Cybercrime Convention of 2001. They all ban interception of communications content, grant exceptions to government agencies, permit access to information about the communications (so-called communications metadata), establish authorization and oversight regimes, and permit government to order communications service providers to provide capabilities for lawful intercept and/or access. But they are not keeping pace with the new forms of advanced surveillance techniques and policies being deployed. 
In this Article we will draw out the modern landscape of surveillance policy and technologies (Part I). The deployment of new techniques and technologies is being done without new legal frameworks, and as such, we must resort to relying on older frameworks that may be unable to understand these new techniques, or constitutional safeguards that have a long and troubled history with innovation (Part II). Using the example of the lack of legislative activity in the United States, we look at how lower courts are trying to resolve the Fourth Amendment concerns inherent in some of these techniques, and make suggestions regarding how we believe current law should be applied (Part III). We are in a moment of great uncertainty characterized by the absence of legislative activity implementing real safeguards, use of new communications surveillance capabilities often in secretive ways, and courts grappling to understand new technologies. Laws, technologies, and the courts have, until now, maintained a delicate balance on communications surveillance; when new technologies posed new threats, often the courts or the legislative bodies would respond. If one branch failed, another would usually pick up the gauntlet. After the U.S. Supreme Court decided that interception of communications did not qualify as a search under the Fourth Amendment in 1928, Congress responded in the 1930s with strict controls. In the 1960s the Supreme Court and Congress fed off one another to develop jurisprudence and legislation. 
Responding to abuses in the 1970s, Congress enacted new laws, and when the Supreme Court decided against protecting certain metadata, Congress responded with rules on “trap and trace” and “pen registers.” Unfortunately, we are currently seeing a lack of interest in safeguards from Congress and other legislatures around the world, while technical capabilities are expanding. There is even speculation that the Foreign Intelligence Surveillance Court is being activist in enabling surveillance. It is high time to reintroduce safeguards into the conversation, and to apply them against technologies that are increasingly used to conduct directed and mass surveillance of our sensitive information.
The authors conclude -
We are in need of a reconceptualization of modern surveillance powers. Inasmuch as we still consider “interception” as the tapping of a line outside of someone’s home, we still believe that surveillance is as targeted in design as it is in implementation. Neither is necessarily true anymore. Even the emphasis in the literature on the “third party doctrine” may require re-thinking, as it presumes that modern surveillance requires third party Internet companies and telephony providers. As we see with the technologies reviewed in this Article, the human is no longer necessarily the observer nor the identified target within modern surveillance. An individual may be placed under communications surveillance because of his or her location (e.g., near an IMSI catcher), virtual address (e.g., on the same stream of communications as someone else or using the same IP address or computer that is then attacked with a Trojan), or characteristics (e.g., same spoken language, similar topic of conversation). The surveillance may be authorized because of these characteristics, not because a known individual is using a particular communications medium. Such practices only increase the need for legal safeguards to protect our privacy. 
Technological change has long been compelling us to rethink the application of our constitutional values. In theory, Parliaments and Congresses could act to regulate these technologies, to place them under strict rules. They have, to date, failed to do so. Instead, we are seeing that the courts are exploring these questions around new communications surveillance techniques. Our analysis, based on Supreme Court decisions regarding the Fourth Amendment, recommends that the courts establish strong boundaries around these new investigative techniques.

Dignity and difference

'Dignity as a Value in Agency Cost-Benefit Analysis' by Rachel Bayefsky in (2014) 123 Yale Law Journal 1732 comments [PDF] that
 President Obama’s 2011 Executive Order 13,563 on cost-benefit analysis (CBA) authorizes agencies to consider “human dignity” in identifying the costs and benefits of proposed regulation. The notion of incorporating dignity into CBA, this Note points out, highlights the importance of choosing between different conceptions of CBA: one that aims to derive a monetary figure for dignity, and one that seeks to take dignity into account in unmonetized form. This Note illuminates the stakes of the choice between monetized and unmonetized CBA by drawing attention to various ways in which dignity might be incorporated into CBA. 
The Note then argues that CBA can and must include dignity in unmonetized form. In doing so, agencies should embrace “qualitative specificity,” which involves elucidating in qualitative terms the nature and gravity of dignitary considerations in a particular regulatory context. Qualitative specificity, the Note indicates, enables agencies more transparently to assess the positive and negative consequences of government regulation, and it facilitates public participation in the process of defining the nature of dignity in the senses relevant to the effects of government regulation. In response to the critique that qualitative specificity is indeterminate and fails to constrain administrative discretion, the Note contends that qualitative specificity provides only as much determinacy as is actually available; this approach is preferable to monetization that emerges with a determinate number but fails to accommodate the complex and malleable nature of dignity.
Bayefsky concludes
Agencies incorporating dignity into CBA at times portray dignity as a monetizable value and at times emphasize the unmonetizable nature of dignity (sometimes within the same RIA).  This combination of ways to treat dignity does not necessarily reflect a direct contradiction. OMB guidance, after all, instructs agencies to “monetize quantitative estimates wherever possible.”  Agencies may consistently decide that certain dignitary values can be monetized, while others “cannot be quantified due to methodological and data constraints.”  The “monetize whenever possible” approach may also be on display in Sunstein’s work. Sunstein takes the example of the disability rule to promote building access for people in wheelchairs, which would reduce stigmatic harm and humiliation.  According to Sunstein, the agency could believe that a $25 million shortfall in monetized benefits is “not fatal, because nonquantifiable values are involved. Those values may well be sufficient to justify the regulation.”  This suggests that dignity as an unmonetized benefit is being weighed against monetized costs, and winning. But as Sunstein further explicates this model, he tends towards monetization of dignity:
Suppose that the regulation would benefit relatively few people—that the number of disabled people who would have access to bathrooms, as a result of the regulation, would be around 200 per year. If so, the question would be whether it would be worthwhile to spend over $46 million annually for each. Recall that some studies suggest that the value of a statistical life ranges around $7-$8 million; in that light, a $46 million annual expenditure would seem difficult to defend.
After “200 per year,” Sunstein has reached Quantitative Monetization (Option 2). After “whether it would be worthwhile to spend over $46 million annually for each,” Sunstein has reached Cost Monetization (Option 3). But in order to resolve the question of whether to spend $46 million, he turns to studies on the value of a statistical life, such as those based on wage differentials in risky jobs, which are commonly used in conventional CBA.  These techniques constitute monetization in the sense of Full Monetization (Option 4). 
Sunstein, like the authors of the PREA Rule, is not necessarily being inconsistent. He is plausibly read as pointing out that unmonetized dignitary benefits could serve as a “finger on the scale” in a case in which the monetized costs and benefits were fairly close, but that these unmonetized dignitary benefits cannot serve the same role when the gap between monetized costs and benefits is high, since doing so would implicitly value dignitary benefits several times higher than the value of a statistical life. In Sunstein’s view, therefore, dignity could be seen as monetizable as a matter of scale—not as precisely monetizable. The idea of providing an implicit valuation of dignity by comparison to the value of a statistical life nevertheless suggests that dignity can be monetarily valued at least within a certain range, and potentially that the effort to derive a range of monetary values for dignity is the ideal for agency CBAs even if it cannot always be realized. It remains to be seen whether a robust commitment to incorporating unmonetized values in CBA can operate alongside a preference for approximate monetization. This Note seeks to build on Sunstein’s approach and push this approach in an even more clearly unmonetized direction. 
More broadly, the decision to portray dignity as monetizable in some RIAs (such as the disability rule) raises certain difficulties. The most important is that monetization of dignity is undesirable as a general matter, as I argue in the next Part. Another is that because not all dignitary benefits are susceptible to monetization (as the next Part contends), there is a risk that those dignitary benefits that are monetized will be taken more seriously (whether by regulators or the public) than dignitary benefits left unmonetized. The favor shown in OMB guidance towards monetization reflects a general interest among those conducting CBA in producing “hard numbers.” Partial monetization, therefore, may result in agencies’ focusing primarily on the harms that can be monetized. But the fact that there happen to exist monetary figures for dignity in a particular context (such as usage figures for dial-a-ride versus adapted transit, or the value of a statistical life for comparative purposes) does not provide sufficient reason to weight dignity in this context more heavily than in others. 
The overall point is that greater self-consciousness about the choice whether or not to monetize dignity would be beneficial. Dignity is a highly context-specific value, and the proper conception of dignity for the purposes of one regulatory program may legitimately diverge from the appropriate conception of dignity for the purposes of another. However, divergences in agency treatment of dignity should reflect a considered decision to draw on context-specific understandings of dignity. In the next Part I explain the way in which qualitatively specific descriptions of dignity can achieve this task. ... Another feature of most current allusions to dignity in agency CBAs is their fairly general nature. For instance, in three separate paragraphs, the RIA for the prison rape rule refers to “loss of dignity,” “our country’s deepest commitments to human dignity and equality,”  and prison rape victims’ “loss of dignity and privacy.” In one brief paragraph, the RIA for the air toxics standard indicates that air-pollution-related deaths can be more protracted, “involving prolonged suffering and loss of dignity and personal control.”  The health privacy rule discusses, at somewhat greater length though still tersely, “the impossibility of monetizing the value of individuals’ privacy and dignity, which we believe will be enhanced by the strengthened privacy and security protections, expanded individual rights, and improved enforcement enabled by the rule.”  The age discrimination RIA indicates that “[r]educing discrimination against older individuals promotes human dignity and self-respect, and diminishes feelings of exclusion and humiliation.” 
The relative generality of these statements about dignity  limits the public’s ability to ascertain the basis on which a given regulation is being defended. For instance, what exactly does “loss of dignity” mean in the context of a death from air pollution? Without greater elaboration it is difficult to gain a sense of the agency’s concerns and to evaluate their validity. “Dignity” risks becoming an abstract term that agencies can draw on without providing a clear sense of what is at stake. Such an outcome may increase administrative opacity and decrease the likelihood that agencies will give meaningful reasons for their actions to the public. The unavailability of these reasons, in turn, hampers the public’s ability to participate in and inform agency regulation. 
The regulations considered in this Part—regarding age discrimination, disability, and prison rape, for example—have genuine dignitary benefits. Agencies should therefore be considering dignity when assessing the benefits and drawbacks of regulation. The prevalent form of such a consideration is currently CBA, and I argue in the next Part that we should understand this practice to include examination of dignity in unmonetized form. One possible response to the concern about transparency, in particular, is to urge the monetization of dignity to the greatest extent possible to ensure that agencies must work with “hard numbers” and pursue clear goals.
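For what it is worth, the arithmetic in Sunstein's example quoted above can be made explicit using only the figures given there; the short calculation below is illustrative and is not part of Bayefsky's Note:

# Figures are taken from the passage quoted above; the midpoint VSL is my assumption.
annual_cost_per_beneficiary = 46_000_000   # "over $46 million annually for each"
value_of_statistical_life = 7_500_000      # midpoint of the quoted $7-8 million range

ratio = annual_cost_per_beneficiary / value_of_statistical_life
print(f"Implied per-person spend is about {ratio:.1f} times the value of a statistical life")
# Roughly 6x, which is why the quoted passage treats the expenditure as difficult to defend
# unless the unmonetized dignitary benefit is given very substantial weight.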
A student has meanwhile pointed me to Richard Mohr's statement that:
The chief problem of the social institution of the closet is not that it promotes hypocrisy, requires lies, sets snares, blames the victim when snared, and causes unhappiness—though it does have all these results. No, the chief problem with the closet is that it treats gays as less than human, less than animal, less even than vegetable—it treats gays as reeking scum, the breath of death.