Showing posts with label Cloud. Show all posts

28 December 2018

Cloud Robotics and the law

'Cloud Robotics Law and Regulation' by Eduard Fosch Villaronga and Christopher Millard comments
This paper assesses some of the key legal and regulatory questions arising from the integration of physical robotic systems with cloud-based services, also called “cloud robotics.” The literature on legal and ethical issues in robotics has a strong focus on the robot itself, but largely ignores any background information processing. Conversely, the literature on cloud computing rarely addresses human-machine interactions, which raise distinctive ethical and legal concerns. In this paper we investigate, from legal and regulatory perspectives, the growing interdependence and interactions of tangible and virtual elements in cloud robotics environments. We highlight specific problems and challenges in regulating such complex and dynamic ecosystems, and explore potential solutions. To illustrate practical challenges, we consider several examples of cloud robotics ecosystems involving multiple parties, various physical devices, and various cloud services. These examples illuminate the complexity of interactions between relevant parties. By identifying pressing legal and regulatory issues in relation to cloud robotics we hope to inform the policy debate and set the scene for further research.

30 March 2018

Cloudy Property

'Property and the cloud' by Cesare Bartolini, Cristiana Santos and Carsten Ullrich in (2018) 34(2) Computer Law and Security Review 358-390 comments
Data is a modern form of wealth in the digital world, and massive amounts of data circulate in cloud environments. While this enormously facilitates the sharing of information, both for personal and professional purposes, it also introduces some critical problems concerning the ownership of the information. Data is an intangible good that is stored in large data warehouses, where the hardware architectures and software programs running the cloud services coexist with the data of many users. This context calls for a twofold protection: on one side, the cloud is made up of hardware and software that constitute the business assets of the service provider (property of the cloud); on the other side, there is a definite need to ensure that users retain control over their data (property in the cloud). The law grants protection to both sides under several perspectives, but the result is a complex mix of interwoven regimes, further complicated by the intrinsically international nature of cloud computing that clashes with the typical diversity of national laws. As the business model based on cloud computing grows, public bodies, and in particular the European Union, are striving to find solutions to properly regulate the future economy, either by introducing new laws, or by finding the best ways to apply existing principles.

06 February 2018

Clouds

The new Commonwealth Secure Cloud Strategy from the Digital Transformation Agency (DTA) states
The case for cloud is no secret to industry or government. A move to cloud computing - away from on-premises owned and operated infrastructure - can generate a faster pace of delivery, continuous improvement cycles and broad access to services. It can reduce the amount of maintenance effort required to ‘keep the lights on’ and refocus that effort into improving service delivery.
Cloud, however, is a new way of sourcing Information and Communications Technology (ICT) services, and many agencies will have to change the way they operate to make the most of this new model. In the Australian Government, a number of factors can get in the way of agencies realising their cloud aspirations, ranging from a shortage of knowledge and experience, to decades-old, stubborn operating models, to a struggle to sell the case for cloud across the business.
The Secure Cloud Strategy has been developed to guide agencies past these obstacles and make sure everyone has the opportunity to make the most of what cloud has to offer. This is not a simplistic ‘lift and shift’ view of the transition. Instead, the strategy aims to lay the foundations for sustainable change, seizing opportunities to reduce duplication, enhance collaboration, improve responsiveness and increase innovation across the Australian Public Service.
Some agencies have already embraced the cloud model. A coordinated approach for further adoption will make sure government derives the maximum value from this shift. The strategy will ensure experience and expertise are not locked up, and create opportunities to reuse and share capabilities through increased collaboration.
The strategy is based around a number of key initiatives designed to prepare agencies for the shift to cloud and support them through the transition:
  • Agencies will develop their own cloud strategies. There is no one-size-fits-all approach to implementing cloud. Agencies will use the Secure Cloud Strategy as a starting point to produce their own value case, workforce plan, best-fit cloud model and service readiness assessment.
  • Cloud implementation will be guided by seven Cloud Principles: − make risk-based decisions when applying cloud security − design services for the cloud − use public cloud services as the default − use as much of the cloud as possible − avoid customisation and use cloud services as they come − take full advantage of cloud automation practices − monitor the health and usage of cloud services in real time.
  • A layered Cloud Certification Model will be created. The certification model creates greater opportunity for agency-led certifications, rather than just ASD certifications. It creates a layered certification approach where agencies can certify using the practices already in place for certification of ICT systems. 
  • Service procurement will be aligned with the ICT Procurement Review Recommendations. As cloud services move more rapidly than services available through panels traditionally do, the recommendations in the ICT Procurement Review align well with creating a better pathway for cloud procurement. 
  • A cloud qualities baseline and assessment framework will be introduced to clarify cloud requirements. The framework will enable reuse of assessments.
  • A Cloud Responsibility Model will be developed to clarify responsibilities and accountabilities. Traditional head agreements cannot cover all cloud services and their frequent variations. A shared capability for understanding responsibilities, supported by contracts, will address unique cloud risks, follow best practice and maintain provider accountability. 
  • A cloud knowledge collaboration platform will be built. The platform will enable secure sharing of cloud service assessments, technical blueprints and other agency cloud expertise, to iterate on work already done rather than duplicating it. 
  • Cloud skills uplift programs will be designed. These will increase government skills and competencies for cloud, aligned with the Australian Public Service Commission Digital Skills Capability Program, and create pathways to leverage industry programs to enhance cloud-specific skills in the Australian Public Service.
  • Common shared platforms and capabilities will be explored including: − Federated identity for government to enable better collaboration in the cloud. − A platform for PROTECTED information management to reduce enclaves in agencies, and continue to iterate cloud.gov.au as an exemplar platform. − Service Management Integrations services to enable agencies to manage multi provider services.
These platforms will include the integration toolkits that enable agencies to seamlessly transition between the cloud services. 
These initiatives will be supported through a Digital Transformation Agency-led community of practice that will support agencies to plan and transition their environments for cloud. It will include delivering training and advice to agencies to build confidence in their ability to manage cloud services. 
The Australian Government has an ambitious agenda to transform its digital service delivery. Cloud offers reusable digital platforms at a lower cost, and shifts service delivery to a faster, more reliable digital channel. Cloud services have the opportunity to make government more responsive, convenient, available and user-focused.
The Strategy comments -
Myth: Privacy reasons mean government data cannot reside offshore
“Generally, no. The Privacy Act does not prevent an Australian Privacy Principle (APP) entity from engaging a cloud service provider to store or process personal information overseas. The APP entity must comply with the APPs in sending personal information to the overseas cloud service provider, just as they need to for any other overseas outsourcing arrangement. In addition, the Office of the Australian Information Commissioner’s Guide to securing personal information: ‘Reasonable steps’ to protect personal information discusses security considerations that may be relevant under APP 11 when using cloud computing.” https://www.oaic.gov.au/agencies-and-organisations/agency-resources/privacy-agency-resource-4-sending-personalinformation-overseas 
Additionally, APP 8 provides the criteria for cross-border disclosure of personal information, which ensures the right practices for data residing off-shore are in place. Our Australian privacy frameworks establish the accountabilities to ensure the appropriate privacy and security controls are in place to maintain confidence in our personal information in the cloud.
'The Ethics of Cloud Computing' by Boudewijn de Bruin and Luciano Floridi in (2017) 23(1) Science and Engineering Ethics 21-39 comments
Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacenters (e.g., Amazon). It considers the cloud services providers leasing ‘space in the cloud’ from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private ‘clouders’ using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (banks, law firms, hospitals etc. storing client data in the cloud, e.g.) will have to follow rather more stringent regulations.

06 December 2016

CloudBanks

'Use by Banks of Cloud Computing: An Empirical Study' (Queen Mary School of Law Legal Studies Research Paper No. 245/2016) by W Kuan Hon and Christopher Millard explores
the extent to which public cloud computing is in fact being used in practice by banks operating in the EU, including global banks. It is based primarily on anonymised interviews with banks, cloud providers, advisers, and financial services regulators. This paper describes how banks are using cloud computing and their key drivers (such as time to market), as well as real and perceived barriers (such as misconceptions about cloud, and financial services regulation), including cultural and technical/commercial as well as legal/regulatory aspects. It summarises how banks and regulators have approached the cloud, as well as how cloud providers have approached the banking sector. 
Specific consideration is given to barriers arising from banking regulatory rules on outsourcing, critical or material, and the contentious issue of contractual audit rights for regulators. The paper also analyses legal and practical issues such as risk assessments, security, business continuity including exit plans, concentration risk and bank resolution, continuing regulatory oversight, banking secrecy laws, barriers under data protection law including personal data export restrictions, problems arising from layered service models where SaaS services are built on another provider’s IaaS/PaaS service, and commonly-negotiated contractual provisions regarding termination, service changes and liability. 
The paper concludes that, while some barriers are internal and some external, cloud is still misunderstood, and further educational efforts are needed to ensure regulatory approaches and guidance are sufficiently cloud-aware to strike the appropriate balance between risk management and efficiency/innovation across the European Economic Area.

16 June 2016

Data Exceptionalism

'Against Data Exceptionalism' by Andrew Keane Woods in (2016) 68 Stanford Law Review comments
 One of the great regulatory challenges of the Internet era — indeed, one of today’s most pressing privacy questions — is how to define the limits of government access to personal data stored in the cloud. This is particularly true today because the cloud has gone global, raising a number of questions about the proper reach of one state’s authority over cloud-based data. The prevailing response to these questions by scholars, practitioners, and major Internet companies like Google and Facebook has been to argue that data is different. Data is “un-territorial,” they argue, and therefore incompatible with existing territorial notions of jurisdiction. 
This Article challenges this view. The Article argues that the jurisdictional challenges presented by the global cloud are not conceptually as novel as they seem. Despite the technological wizardry of modern life, the “cloud” is actually a network of storage drives bolted to a particular territory, and there is a substantial body of case law suggesting that courts think of data as a physical object. Moreover, even if the cloud were a free-floating ether, data can be thought of as an intangible asset, like money or debt, which flows easily across borders; courts have been adjudicating jurisdictional disputes over intangible assets for centuries. These precedents suggest a number of distinct legitimate grounds for states to assert jurisdiction over data — not a single test, as major Internet service providers have claimed. 
After showing that these jurisdictional problems are not unprecedented, the Article turns more practical. Drawing from these precedents, the Article outlines steps that courts, Congress, and the President can take to alleviate jurisdictional conflicts over the cloud. As the recent Microsoft Ireland case works its way through the courts, the President negotiates a treaty with the United Kingdom regarding cross-border access to the cloud, and Congress rewrites the Electronic Communications Privacy Act, finding a grounded approach to addressing this problem — one rooted in longstanding jurisdictional and conflicts principles — has never been more critical.
'Data Institutionalism: A Reply to Andrew Woods' by Zachary D. Clopton in (2016) 69 Stanford Law Review Online responds
In Against Data Exceptionalism, Andrew K. Woods explores “one of the greatest societal and technological shifts in recent years,” which manifests in the “same old” questions about government power. The global cloud is an important feature of modern technological life that has significant consequences for individual privacy, law enforcement, and governance. Yet, as Woods suggests, the legal challenges presented by the cloud have analogies in age-old puzzles of public and private international law. 
Identifying these connections is a conceptual advance, and this contribution should not be understated. But, to my mind, the most telling statement in Woods’s excellent article comes early on: “Showing that the jurisdictional challenges presented by the global cloud are not conceptually novel does not resolve those problems.” Data may not be exceptional, and the legal puzzles posed by data sound in existing notions of jurisdiction and conflict of laws. The problem, however, is that existing answers to these puzzles are unsatisfying. They are unsatisfying in that they do not provide clear answers, but instead pose even more challenging normative questions. And they are unsatisfying because some consensus answers sit on shaky normative footing. More satisfying answers, I contend, require attention to institutions, not just laws.

28 October 2015

Cloud Conditions

The 71-page 'Privacy in the Clouds: An Empirical Study of the Terms of Service and Privacy Policies of 20 Cloud Service Providers' (Queen Mary School of Law Legal Studies Research Paper No. 209/2015) by Dimitra Kamarinou, Christopher Millard and W. Kuan Hon examines how those providers' standard terms treat individuals' data protection rights.

The authors state
Our study focuses on the ways these 20 cloud providers treat various key rights that individuals have under data protection law, either when they contract directly with a cloud provider or when they access cloud services through a business or institution, such as their employer, including the right to have their personal data processed fairly and lawfully, the right to be informed about the collection of data, the specific purposes of processing and the way their data may be shared with or disclosed to third parties, including law enforcement agencies. We also look at the right to access, correct or erase personal data, the right to object to processing, the right to object to direct marketing, and the right to have personal data processed securely and be protected from accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access to data. In addition, this paper discusses the providers’ approach to disputes arising out of the use of their cloud service and their approach to compensation and indemnification. This paper also uncovers common approaches adopted by providers and mismatches between their various legal documents, and highlights the advantages and disadvantages of various practices found in the study. Finally, we make some suggestions for more effective transparency and redress options for individuals, and conclude the paper with a number of practical findings arising from the review.
'The Challenge of Bitcoin Pseudo-Anonymity to Computer Forensics' by Edward J. Imwinkelried and Jason Luu in (2016) Criminal Law Bulletin (Forthcoming) argues
Digital forensics must constantly adapt to new technological developments. The advent of Bitcoin is such a development. Bitcoin represents a new model for financial transactions. In many cash transactions between strangers, the underlying model is parties-unknown/transaction-unknown. There is no ledger record of the transaction. In contrast, PayPal illustrates the parties-known/transaction-known model. An intermediary will record both items of information. Bitcoin differs from both of these models; Bitcoin uses a parties-unknown/transaction-known model. The Bitcoin block chain records the transaction, but the user’s Bitcoin address is not expressly tied to an identity. Thus, Bitcoin users enjoy pseudo-anonymity.
As the recent experience with Silk Road demonstrates, there is a downside to this pseudo-anonymity. Precisely because of that feature, Silk Road served as a marketplace for vendors to sell illegal narcotics, forged identifications, and other illicit goods and services. Given that danger, law enforcement authorities have a felt need to develop techniques to penetrate the pseudo-anonymity. To do so, they have turned to digital forensics experts.
This article evaluates two techniques that have been proposed for this purpose. The first is traffic analysis. This technique relies on the entry nodes that users employ to access the Internet. The second is transaction graph analysis. This technique clusters transactions to identify natural chokepoints in the Bitcoin economy, that is, service islands where, for example, the user might convert Bitcoins to fiat currency. These chokepoints become targets for a law enforcement subpoena to learn the user’s IP address.
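The clustering step at the heart of transaction graph analysis is commonly based on the multi-input heuristic: addresses that appear together as inputs to a single transaction are presumed to share an owner. The following is a minimal sketch in Python using a union-find structure; the addresses and transactions are invented for illustration and are not drawn from the research discussed.

```python
def cluster_addresses(transactions):
    """transactions: list of input-address lists, one per transaction."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:
        if not inputs:
            continue
        find(inputs[0])                # register single-input transactions too
        for addr in inputs[1:]:
            union(inputs[0], addr)     # co-spent inputs share an owner

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())

# Toy data: "addr1"/"addr2" co-spend and "addr2"/"addr3" co-spend,
# so all three merge into one cluster; "addr4" stays alone.
txs = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
print(sorted(len(c) for c in cluster_addresses(txs)))  # → [1, 3]
```

In practice, the resulting clusters would then be matched against known service addresses (exchanges, marketplaces) to locate the chokepoints the article describes.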
After describing each technique, the article assesses the research conducted to date. In particular, the article reviews Alex Biryukov’s research into traffic analysis and Sarah Meiklejohn’s work with transaction graph analysis. The article applies the standards announced in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) to determine whether, given the available data, expert testimony based on either technique would be admissible today. The article explains that it is doubtful whether testimony based on either technique would survive a Daubert admissibility challenge. The article concludes that further research is needed to enable law enforcement authorities to effectively penetrate the pseudo-anonymity of the new parties-unknown/transaction-known model.

31 May 2015

Clouds

'Cloud Investigations by European Data Protection Authorities: An Empirical Account' by Asma Vranaki in John Rothchild (ed) Research Handbook on Electronic Commerce Law (Edward Elgar, 2016) is described as drawing on
qualitative interviews, documentary analysis and observation data to analyse how European data protection authorities (‘EU DPAs’) exercise one of their statutory enforcement powers, namely investigations, which they are using more frequently to determine the compliance of cloud providers with the relevant data protection laws. The empirical analysis presented in this chapter supports two arguments. Firstly, the investigations of cloud providers by EU DPAs ('Cloud Investigations') are complex regulatory processes that often involve different co-operative relationships between various actors, such as other DPAs. In reality, manifold interactions and practices, such as facilitative instruments, are deployed to form and perform such collaborations, which are vital in ensuring the consistent application and enforcement of common data protection principles in an increasingly globalised context. Secondly, Cloud Investigations are also dynamic as they can involve continually evolving regulatory enforcement styles and compliance attitudes. Cloud providers can often resist the attempts of the EU DPAs to direct the investigative process in specific ways. How such resistance is resolved is very much context-dependent.

08 February 2015

Clouds

'Did the National Security Agency Destroy the Prospects For Confidentiality and Privilege When Lawyers Store Clients' Files in the Cloud — And What, If Anything, Can Lawyers and Law Firms Realistically Do in Response?' by Sarah Jane Hughes in (2014) 41(3) Northern Kentucky Law Review comments
Since the first commentaries in 2013 about the range and depth of the National Security Agency’s (NSA) metadata-gathering from telephone calls, emails, and other uses of the Internet, less has been written about important collateral consequences of the NSA’s work. This article looks at two of these consequences: the threats posed to the confidential and privileged information that communications between lawyers and clients often contain, which may be intercepted by the NSA and its cadre of contractors (such as Edward Snowden) and shared with U.S. government agencies for purposes many would not have connected to national security or counter-terrorism/counter-intelligence; and the NSA’s deployment of “back-door” tools to de-encrypt encrypted data.
The article also looks at challenges to confidentiality and privilege stemming from the storage of clients’ data in hosted clouds, building upon an article that Professor Hughes wrote with Roland L. Trope, entitled 'Red Skies in the Morning -- Professional Ethics at the Dawn of Cloud Computing' 38 William Mitchell Law Review 111 (2011). It also looks to the August 2012 amendment to the American Bar Association’s Model Rules of Professional Conduct, with particular attention to Rules 1.1, 1.4, 1.6, and 1.15. The conclusion advocates a mix of cyber-smart steps as well as old-fashioned, non-cyber methods to help ensure that clients’ confidential and privileged information is properly protected by lawyers.

08 May 2014

Clouds and the proposed EU Data Protection Regulation

'Cloud Accountability: The Likely Impact of the Proposed EU Data Protection Regulation' by W. Kuan Hon, Eleni Kosta, Christopher Millard and Dimitra Stefanatou considers
the implications for cloud accountability of current proposals under the draft General Data Protection Regulation to modernise the EU Data Protection Directive. It makes recommendations aimed at improving the technology-neutrality of the proposals and their appropriateness for cloud computing, with a view to ensuring that the proposals will maintain or enhance protection of personal data for data subjects while not unduly deterring cloud computing. 
It is based on documents publicly available as at 14 February 2014, and analyses and compares the European Commission's January 2012 draft, the LIBE Committee's November 2013 draft (since approved unamended by the full European Parliament in March 2014), and the first full draft of the Council published in December 2013. 
A similar work by Kosta et al was noted here.

In the current 60-page paper the authors offer several recommendations -
  • Cloud and other new technologies should not be treated as risky per se – risks depend on their intended use and the type and sensitivity of the data concerned. 
  • For technology neutrality, only persons with logical access to intelligible personal data should be regulated. Physical access is not necessary or sufficient to access intelligible personal data. 
  • The ‘personal data’ definition triggers obligations and liability in an ‘all or nothing’ fashion and could encompass most data. A concept of pseudonymous data is one way to calibrate obligations, but definitions and obligations for each data type need further consideration. 
  • The extra-territorial scope of EU data protection law is unclear. To avoid discouraging non-EU controllers and providers from using EU data centres and EU cloud providers or sub-providers, the status of data centres and hardware/software providers should be clarified explicitly, as should the key definitions of ‘establishment’, ‘context of activities’ and ‘offering’. 
  • Clarity is needed regarding which obligations should trigger ‘strict liability’ for any non-compliance regardless of fault, and which should be risk-based, eg requiring only the taking of measures appropriate to the individual situation or reasonable measures to industry standards. 
  • We support a more focused risk-based approach, as opposed to requiring privacy impact assessments etc in a broad range of situations that may not warrant it from a risks perspective. 
  • To incentivise adoption of accountability measures such as codes of conduct, certifications, and seals, consequences of adoption should be made clear. In particular, defences or reductions in liability should be available to those who have obtained and complied with such measures. 
  • Defences available to intermediaries under the E-Commerce Directive should be available to cloud providers if they do not know that data stored with them by their users are personal data, or do not or cannot access intelligible personal data. Also, provisions regarding ‘instructions’ to processors should instead target the underlying mischief, namely misuse or disclosure of intelligible personal data by processors.
  • Rather than impose joint liability on processors and co-controllers, a more fault-based allocation of liability is recommended. Careful consideration is needed of exactly which obligations should be imposed on processors. 
  • Consideration should be given to abolishing the data export restriction and international agreement sought on jurisdictional conflicts and rules restricting, or compelling, government access to personal data. If the restriction is retained, ‘transfer’ should be defined by reference to intention to give or allow logical access to intelligible personal data to a third party recipient who is subject to the jurisdiction of a third country. Prior authorisations by data protection authorities are not practicable and should be required only in selective appropriate cases. Any ‘legitimate interests’ derogation should be based not on size or frequency of transfers but on risk-appropriate safeguards and a balancing against data subjects’ rights and interests. 
  • We support updating security requirements in line with general concepts of confidentiality, integrity and availability. 
  • The requirements and scope of data protection by design and default need to cater for infrastructure providers who may not know the nature of data processed using their infrastructure, and controllers and processors who may have limited control over infrastructure. 
  • Clarification is needed regarding the types of data breaches to be notified, thresholds and the detailed contents of any public register, but we support the deletion of ‘hard’ time limits. 
  • The right to data portability is very limited in scope, and this could be reconsidered, as well as its relationship with the right to erasure.

22 February 2014

Privacy's Midlife Crisis?

'Privacy Law’s Midlife Crisis: A Critical Assessment of the Second Wave of Global Privacy Laws' by Omer Tene in (2013) 74(6) Ohio State Law Journal argues that
Privacy law is suffering from a midlife crisis. Despite well-recognized tectonic shifts in the socio-technological-business arena, the information privacy framework continues to stumble along like an aging protagonist in a rejuvenated cast. The framework’s fundamental concepts are outdated; its goals and justifications in need of reassessment; and yet existing reform processes remain preoccupied with internal organizational measures, which yield questionable benefits to individuals. At best, the current framework strains to keep up with new developments; at worst, it has become irrelevant. More than three decades have passed since the introduction of the OECD Privacy Guidelines; and fifteen years since the EU Directive was put in place and the “notice and choice” approach gained credence in the United States. This period has seen a surge in the value of personal information for governments, businesses, and society at large. Innovations and breakthroughs, particularly in information technologies, have transformed business models and affected individuals’ lives in previously unimaginable ways. Not only technologies, but also individuals’ engagement with the data economy have radically changed. Individuals now proactively disseminate large amounts of personal information online via platform service providers, which act as facilitators rather than initiators of data flows. Data transfers, once understood as discrete point-to-point transmissions, have become ubiquitous, geographically indeterminate, and typically “residing” in the cloud.
This Article addresses the challenges posed to the existing information privacy framework by three main socio-technological-business shifts: the surge in big data and analytics; the social networking revolution; and the migration of personal data processing to the cloud. The term big data refers to the ability of organizations to collect, store, and analyze previously unimaginable amounts of unstructured information in order to find patterns and correlations and draw useful conclusions. Big data creates tremendous value for the world economy, individuals, businesses, and society at large. At the same time, it heightens concerns over privacy, equality, and fairness, and pushes back against well-established privacy principles. Social networking services have revolutionized the relationship between individuals and organizations. Those creating, storing, using, and disseminating personal information are no longer just organizations, but also geographically dispersed individuals who post photos, submit ratings, and share their location online. The term cloud computing encompasses (at least) three distinct models of utilizing computing resources through a network—software, platform, and infrastructure as a service. The advantages of cloud computing abound and include, from the side of organizations, reduced cost, increased reliability, scalability, and security, and from the side of users, the ability to access data from anywhere, on any device, at any time, and to collaborate on a single document across multiple users; however, the processing of personal information in the cloud poses new privacy risks.
In response to these changes, policymakers in the Organization for Economic Co-operation and Development (OECD), EU and the United States launched extensive processes for fundamental reform of the information privacy framework. The product of these processes is set to become the second generation of information privacy law. Yet, as discussed in this Article, the second generation is strongly anchored in the existing framework, which in turn is rooted in an architecture dating back to the 1970s. The major dilemmas and policy choices of information privacy remain unresolved.
First, the second generation fails to update the definition of personal data,  the fundamental building block of the framework. Recent advances in reidentification science have shown the futility of traditional de-identification techniques in a big data ecosystem. Consequently, the scope of the framework is either overbroad, potentially encompassing every bit and byte of information, ostensibly not about individuals; or overly narrow, excluding de-identified information, which could be re-identified with relative ease. More advanced notions that have gained credence in the scientific community, such as differential privacy and privacy enhancing technologies, have been left out of the debate.
Second, the second generation maintains and even expands the central role of consent. Consent is a wild card in the privacy deck. Without it, the framework becomes paternalistic and overly rigid; with it, organizations can whitewash questionable data practices and point to individuals for legitimacy. The Article argues that the role of consent should be demarcated according to normative choices made by policymakers with respect to prospective data uses. In some cases, consent should not be required; in others, consent should be assumed subject to a right of refusal; in specific cases, consent should be required to legitimize data use. Formalistic insistence on consent and purpose limitation can impede data driven breakthroughs that benefit society as a whole.
Third, the second generation remains rooted in a linear approach to processing whereby an active “data controller” collects information from a passive individual, and then stores, uses, or transfers it until its ultimate deletion. The explosion of peer produced content, particularly on social networking services, and the introduction into the data value chain of layer upon layer of service providers, have meant that for vast swaths of the data ecosystem, the linear model has become obsolete. Privacy risks are now posed by an indefinite number of geographically dispersed actors, not least individuals themselves, who voluntarily share their own information and that of their friends and relatives. Despite much discussion of “Privacy 2.0,” the emerging framework fails to account for these changes. Moreover, in many contexts, such as mobile applications, behavioral advertising, or social networking services, it is not necessarily the controller, but rather an intermediary or platform provider, that wields the most control over information.
Fourth, the second generation, particularly of European data protection laws, continues to view information as “residing” in a jurisdiction, despite the geographical indeterminacy of cloud storage and transfers. For many years, transborder data flow regulation has caused much consternation to global businesses, while generating formidable legal fees. Unfortunately, this is not about to change. While not providing solutions to these challenging problems, the Article sets an agenda for future research, identifying issues and potential paths towards a rejuvenated framework for a rapidly changing environment.
'The EU-US Privacy Collision: A Turn To Institutions And Procedures' by Paul M. Schwartz in (2013) 126 Harvard Law Review 1966 argues that
 Internet scholarship in the United States generally concentrates on how decisions made in this country about copyright law, network neutrality, and other policy areas shape cyberspace. In one important aspect of the evolving Internet, however, a comparative focus is indispensable. Legal forces outside the United States have significantly shaped the governance of information privacy, a highly important aspect of cyberspace, and one involving central issues of civil liberties. The EU has played a major role in international decisions involving information privacy, a role that has been bolstered by the authority of EU member states to block data transfers to third party nations, including the United States.
The European Commission’s release in late January 2012 of its proposed “General Data Protection Regulation” (the Proposed Regulation) provides a perfect juncture to assess the ongoing EU-U.S. privacy collision. An intense debate is now occurring about critical areas of information policy, including the rules for lawfulness of personal processing, the “right to be forgotten,” and the conditions for data flows between the EU and the United States.
This Article begins by tracing the rise of the current EU-U.S. privacy status quo. The European Commission’s 1995 Data Protection Directive (the Directive) staked out a number of bold positions, including a limit on international data transfers to countries that lacked “adequate” legal protections for personal information. The impact of the Directive has been considerable. The Directive has shaped the form of numerous laws, inside and outside of the EU, and contributed to the creation of a substantive EU model of data protection, which has also been highly influential.
This Article explores the path that the United States has taken in its information privacy law and the reasons for the relative lack of American influence on worldwide information privacy regulatory models. As an initial matter, the EU is skeptical regarding the level of protection that U.S. law actually provides. Moreover, despite the important role of the United States in early global information privacy debates, the rest of the world has followed the EU model and enacted EU-style “data protection” laws.
At the same time, the aftermath of the Directive has seen ad hoc policy efforts between the United States and EU that have created numerous paths to satisfy the EU’s requirement of “adequacy” for data transfers from the EU to the United States. The policy instruments involved are the Safe Harbor, the two sets of Model Contractual Clauses, and the Binding Corporate Rules. These policy instruments provide key elements for an intense process of nonlegislative lawmaking, and one that has involved a large cast of characters, both governmental and nongovernmental.
This Article argues that this policymaking has not been led exclusively by the EU, but has been a collaborative effort marked by accommodation and compromise. In discussing this process of nonlegislative lawmaking, this Article will distinguish the current policymaking with respect to privacy from Professor Anu Bradford’s “Brussels Effect.” This nonlegislative “lawmaking” is a productive outcome in line with the concept of “harmonization networks” that Professor Anne-Marie Slaughter has identified in her scholarship. “Harmonization networks” develop when regulators in different countries work together to harmonize or otherwise adjust different kinds of domestic law to achieve outcomes favorable to all parties. The Article then analyzes the likely impact of the Proposed Regulation, which is slated to replace the Directive. The Proposed Regulation threatens to destabilize the current privacy policy equilibrium and prevent the kind of decentralized global policymaking that has occurred in the past. The Proposed Regulation overturns the current balance by heightening certain individual rights beyond levels that U.S. information privacy law recognizes. It also centralizes power in the European Commission in a way that destabilizes the policy equilibrium within the EU, and thereby threatens the current policy processes around harmonization networks.
To avert the privacy collision ahead, this Article advocates modifications to the kinds of institutions and procedures that the Proposed Regulation would create. A “Revised Data Protection Regulation” should concentrate on imposing uniformity only on “field definitions,” that is, the critical terms that mark the scope of this regulatory field. The Revised Regulation should be clear that member states can supplement areas that do not fall within its scope with national measures. This approach would leave room for further experiments in data protection by the member states. The Revised Regulation should also alter the currently proposed procedures to limit the Commission’s assertion of power as the final arbiter of information privacy law.

29 August 2013

HIPAA in the Sky with Dollars

'The Future of HIPAA in the Cloud' (Seton Hall Public Law Research Paper No. 2298158) by Frank Pasquale and Tara Adams Ragone
examines how cloud computing generates new privacy challenges for both healthcare providers and patients, and how American health privacy laws may be interpreted or amended to address these challenges. Given the current implementation of Meaningful Use rules for health information technology and the Omnibus HIPAA Rule in health care generally, the stage is now set for a distinctive law of “health information” to emerge. HIPAA has come of age of late, with more aggressive enforcement efforts targeting wayward healthcare providers and entities. Nevertheless, more needs to be done to assure that health privacy and all the values it is meant to protect are actually vindicated in an era of ever faster and more pervasive data transfer and analysis. 
After describing how cloud computing is now used in healthcare, this white paper examines nascent and emerging cloud applications. Current regulation addresses many of these scenarios, but also leaves some important decision points ahead. Business associate agreements between cloud service providers and covered entities will need to address new risks. To meaningfully consent to new uses of protected health information, patients will need access to more sophisticated and granular methods of monitoring data collection, analysis, and use. Policymakers should be concerned not only about medical records, but also about medical reputations used to deny opportunities. In order to implement these and other recommendations, more funding for technical assistance for health privacy regulators is essential.

14 April 2013

Hot Clouds

From this month's The Power of Wireless Cloud: An analysis of the energy consumption of wireless cloud [PDF], a white paper by the Centre for Energy-Efficient Telecommunications (CEET) at Melbourne University -
Previous analysis and industry focus has missed the point: access networks, not data centres, are the biggest threat to the sustainability of cloud services. This is because more people are accessing cloud services via wireless networks. These networks are inherently energy inefficient and a disproportionate contributor to cloud energy consumption.
Cloud computing has rapidly emerged as the driving trend in global Internet services. It is being promoted as a green technology that can significantly reduce energy consumption by centralising the computing power of organisations that manage large IT systems and devices. The substantial energy savings available to organisations moving their ICT services into the cloud has been the subject of several recent white papers.
Another trend that continues unabated is the take-up and use of personal wireless communications devices. These include mobile phones, wireless-enabled laptops, smartphones and tablets. In fact, tablets don’t accommodate a traditional cable connection; rather it is assumed a local or mobile wireless connection will be used to support all data transferred to and from the device. There is a significant emerging convergence between cloud computing and wireless communication, providing consumers with access to a vast array of cloud applications and services with the convenience of anywhere, anytime, any network functionality from the device of their choice. These are services many of us use every day like Google Apps, Office 365, Amazon Web Services (AWS), Facebook, Zoho cloud office suite, and many more.
To date, discussion about the energy efficiency of cloud services has focussed on data centres, the facilities used to store and serve the massive amounts of data underpinning these services. The substantial energy consumption of data centres is undeniable and has been the subject of recent high-profile reports including the Greenpeace report, How Clean is Your Cloud.
However, focussing the cloud efficiency debate on data centres alone obscures a more significant and complex problem and avoids the critical issue of inefficiency in the wireless access network. Data centres are only part of a much larger cloud-computing ecosystem. In fact, as this white paper puts forward, the network itself, and specifically the final link between telecommunications infrastructure and user device, is by far the dominant and most concerning drain on energy in the entire cloud system.
Based on current trends, wireless access technologies such as WiFi (utilising fibre and copper wireline infrastructure) and 4G LTE (cellular technology) will soon be the dominant methods for accessing cloud services. ‘Wireless cloud’ is a surging sector with implications that cannot be ignored. Our energy calculations show that by 2015, wireless cloud will consume up to 43 TWh, compared to only 9.2 TWh in 2012, an increase of 460%. This is an increase in carbon footprint from 6 megatonnes of CO2 in 2012 to up to 30 megatonnes of CO2 in 2015, the equivalent of adding 4.9 million cars to the roads. Up to 90% of this consumption is attributable to wireless access network technologies; data centres account for only 9%.
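The headline figures invite a quick back-of-envelope check. The sketch below (our own illustration, not taken from the white paper; the variable names are ours) recomputes the growth factor and the implied per-car emissions from the quoted numbers. Note that (43 − 9.2)/9.2 is roughly a 370% increase, so the quoted “460%” appears to express the 2015 figure as a multiple of the 2012 figure rather than the increase over it.

```python
# Back-of-envelope check of the CEET white paper's headline figures.
# Input numbers are quoted from the text; everything derived from them
# is our own arithmetic, not the white paper's methodology.

TWH_2012 = 9.2    # wireless-cloud energy consumption, 2012 (TWh)
TWH_2015 = 43.0   # upper-bound projection, 2015 (TWh)
CO2_2012 = 6.0    # carbon footprint, 2012 (megatonnes CO2)
CO2_2015 = 30.0   # upper-bound footprint, 2015 (megatonnes CO2)
CARS = 4.9e6      # "equivalent of adding 4.9 million cars to the roads"

multiple = TWH_2015 / TWH_2012                         # ~4.7x the 2012 figure
increase_pct = (TWH_2015 - TWH_2012) / TWH_2012 * 100  # ~367% increase
tonnes_per_car = (CO2_2015 - CO2_2012) * 1e6 / CARS    # ~4.9 t CO2/car/year

print(f"2015 consumption is {multiple:.1f}x the 2012 figure "
      f"(a {increase_pct:.0f}% increase)")
print(f"Implied emissions per added car: {tonnes_per_car:.1f} t CO2/year")
```

The per-car figure of about 4.9 tonnes of CO2 per year is consistent with typical passenger-vehicle emissions estimates, which lends some credibility to the comparison.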
Curbing the user convenience provided by wireless access seems unlikely, and therefore the ICT sector faces a major challenge. Finding solutions to the ‘dirty cloud’ at the very least requires a broader acknowledgment of the cloud computing ecosystem and each component’s energy requirements. There needs to be a focus on making access technologies more efficient and potentially a reworking of how the industry manages data and designs the entire global network.
This white paper sets out to establish a starting point for addressing these issues, presenting a detailed model that estimates the energy consumption of wireless cloud services in 2015, taking into account all of the components required to deliver those services.
The authors conclude -
Cloud computing is widely viewed as the next major evolutionary step for the Internet and Internet-based services. The shift to wireless access is also continuing at a great rate. Cisco projects that cloud computing will represent approximately 34% of data centre traffic in 2015 [3], with approximately 20% of data centre traffic served by wireless access networks.
Wireless and cloud are converging trends supported by the increased availability of affordable, powerful portable devices, convenient and useful applications, and high-speed wireless broadband infrastructure. This convergence is expected to be a key driver of traffic growth on telecommunications networks in the future.
There is evidence to show that cloud services accessed via fixed-line networks could result in lower energy consumption relative to current computing arrangements, such as replacing powerful desktop computers with cloud services [9,10,11]. Greenpeace has highlighted the carbon footprint of cloud computing but focused on data centres as being the biggest contributor to energy consumption. When considering the energy consumption of the wireless cloud, all aspects of the cloud ecosystem must be taken into account, including end-user devices, broadband access technology, metro and core networks, as well as data centres.
This white paper analysed the various components of the wireless cloud ecosystem to identify the dominant energy consumers. The CEET model explored the impact of the wireless cloud, accounting for all aspects of the ecosystem including devices, broadband access technology, and metro and core telecommunications, in addition to data centres. The predicted large-scale take-up of wireless cloud services will consume 32 to 43 TWh by 2015. The energy consumption of wireless access dominates data centre consumption by a significant margin.
To ensure the energy sustainability of future wireless cloud services, there needs to be a strong focus on the part of the ecosystem that consumes the most energy: wireless access networks. Further debate needs to move beyond the data centre to develop a holistic account of the ecosystem with this white paper being a step in that direction.

18 October 2012

Unleashing the EU Cloud

The European Commission has released its Communication [PDF] on Unleashing the Potential of Cloud Computing in Europe.

The Communication comments that
‘Cloud computing’ in simplified terms can be understood as the storing, processing and use of data on remotely located computers accessed over the internet. This means that users can command almost unlimited computing power on demand, that they do not have to make major capital investments to fulfil their needs and that they can get to their data from anywhere with an internet connection. Cloud computing has the potential to slash users' IT expenditure and to enable many new services to be developed. Using the cloud, even the smallest firms can reach out to ever larger markets while governments can make their services more attractive and efficient even while reining in spending. 
Where the World Wide Web makes information available everywhere and to anyone, cloud computing makes computing power available everywhere and to anyone. Like the web, cloud computing is a technological development that has been ongoing for some time and will continue to develop. Unlike the web, cloud computing is still at a comparatively early stage, giving Europe a chance to act to ensure it is at the forefront of its further development and to benefit on both the demand and supply sides through wide-spread cloud use and cloud provision.
The Commission therefore aims at enabling and facilitating faster adoption of cloud computing throughout all sectors of the economy which can cut ICT costs, and when combined with new digital business practices, can boost productivity, growth and jobs. On the basis of an analysis of the overall policy, regulatory and technology landscapes and a wide consultation of stakeholders, undertaken to identify what needs to be done to achieve that goal, this document sets out the most important and urgent additional actions. It delivers one of the main actions foreseen in the Communication on e-Commerce and online services; it represents a political commitment of the Commission and serves as a call on all stakeholders to participate in the implementation of these actions, which could mean an additional EUR 45 billion of direct spend on Cloud Computing in the EU in 2020 as well as an overall cumulative impact on GDP of EUR 957 billion, and 3.8 million jobs, by 2020.
Several of the identified actions are designed to address the perception, by many potential adopters of cloud computing, that the use of this technology may bring additional risks. The actions do so by aiming at more clarity and knowledge about the applicable legal framework, by making it easier to signal and verify compliance with the legal framework (e.g. through standards and certification) and by developing it further (e.g. through a forthcoming legislative initiative on cyber security). 
Addressing the specific challenges of cloud computing would mean a faster and more harmonised adoption of the technology by Europe's businesses, organisations and public authorities, resulting, on the demand side, in accelerated productivity growth and increased competitiveness across the whole economy as well as, on the supply-side, in a larger market in which Europe becomes a key global player. Here, the European ICT sector stands to benefit from important new opportunities; given the right context, Europe's traditional strengths in telecommunications equipment, networks and services could be deployed very effectively for cloud infrastructures. Beyond that, European application developers large and small could benefit from rising demand.
It indicates that
preparatory work undertaken by the Commission shows the key areas where actions are needed:
• Fragmentation of the digital single market due to differing national legal frameworks and uncertainties over applicable law, digital content and data location ranked highest amongst the concerns of potential cloud computing adopters and providers. This is in particular related to the complexities of managing services and usage patterns that span multiple jurisdictions and in relation to trust and security in fields such as data protection, contracts and consumer protection or criminal law. 
• Problems with contracts were related to worries over data access and portability, change control and ownership of the data. For example there are concerns over how liability for service failures such as downtime or loss of data will be compensated, user rights in relation to system upgrades decided unilaterally by the provider, ownership of data created in cloud applications or how disputes will be resolved. 
• A jungle of standards generates confusion through, on the one hand, a proliferation of standards and, on the other, a lack of certainty as to which standards provide adequate levels of interoperability of data formats to permit portability; the extent to which safeguards are in place for the protection of personal data; or the problem of data breaches and protection against cyberattacks.
This strategy does not foresee the building of a "European Super-Cloud", i.e. a dedicated hardware infrastructure to provide generic cloud computing services to public sector users across Europe. However, one of the aims is to have publicly available cloud offerings ("public cloud") that meet European standards not only in regulatory terms but in terms of being competitive, open and secure. This does not preclude public authorities from setting up dedicated private clouds for the treatment of sensitive data, but in general even cloud services used by the public sector should – as far as feasible – be subject to competition on the market to ensure best value for money, while conforming to regulatory obligations or wider public-policy objectives in respect of key operating criteria such as security and protection of sensitive data.
The Communication highlights privacy issues, stating -
Data protection emerged from the consultation and the studies launched by the Commission as a key area of concern that could impede the adoption of cloud computing. In particular, faced with 27 partly diverging national legislative frameworks, it is very hard to provide a cost-effective cloud solution at the level of digital single market. In addition, given the cloud’s global scope, there was a call for clarity on how international data transfers would be regulated. These concerns have been addressed, in completion of another Digital Agenda Action, by the proposal of a strong and uniform legal framework providing legal certainty on data protection by the Commission on 25 January 2012. 
The proposed regulation addresses the issues raised by the cloud. Centrally, it clarifies the important question of applicable law, by ensuring that a single set of rules would apply directly and uniformly across all 27 Member States. It will be good for business and citizens by bringing about a level playing field and reduced administrative burden and compliance costs throughout Europe for businesses, while ensuring a high level of protection for individuals and giving them more control over their data. Increased transparency of data processing will also help increase consumer trust. The proposal facilitates transfers of personal data to countries outside the EU and EEA while ensuring the continuity of protection of the concerned individuals. The new legal framework will provide for the necessary conditions for the adoption of codes of conduct and standards for the cloud, where stakeholders see a need for certification schemes that verify that the provider has implemented the appropriate IT security standards and safeguards for data transfers. 
Given that data protection concerns were identified as one of the most serious barriers to cloud computing take-up, it is all the more important that Council and Parliament work swiftly towards the adoption of the proposed regulation as soon as possible in 2013. 
Meanwhile, as cloud computing involves chains of providers and other actors such as infrastructure or communications providers, guidance is required on how to apply the existing EU Data Protection Directive, notably to identify and distinguish the data protection rights and obligations of data controllers and processors for cloud service providers, or actors in the cloud computing value chain. Moreover, due to the specific nature of the cloud, questions have been raised about applicable law in cases where the relevant place of establishment of a cloud provider may be hard to determine, e.g. for a non-EU user of a non-EU provider operating equipment in the EU. In this context, the Commission welcomes the guidance on how to apply the existing EU Data Protection Directive given in the Opinion of the data protection working party, the so-called "Article 29 Working Party", on cloud computing of 1 July 2012. The Commission considers that the Article 29 Working Party Opinion provides a good basis for the transition from the current EU Data Protection Directive to the new EU Data Protection Regulation and that it should guide the work of national authorities and of businesses, thereby offering maximum clarity and legal certainty on the basis of the existing legal framework. Moreover, once the proposed regulation is adopted, the Commission will make use of the new mechanisms set out therein to provide, in close cooperation with national data protection authorities, any necessary additional guidance on the application of European data protection law in respect of cloud services.

04 August 2012

Enclosure

'The Cloud: Boundless Digital Potential or Enclosure 3.0?' by David Lametti argues that -
 The Cloud presents enormous potential for users to have access to facilities such as vast data storage and infinite computing capacity. Yet the Cloud, taken from the perspective of the average user, does have a dark side. I agree with a number of writers and the concerns that they raise about privacy and personal autonomy on the internet and the Cloud. However, I wish to voice concern over another change. From the perspective of users, the Cloud might also reduce the range of user possibilities for robust interaction with the internet/Cloud in a manner which then prevents users from participating in the internet as creators, collaborators, and sharers. The Cloud is “manageable” in a way the internet was not, and with users increasingly interacting with the internet with relatively less powerful devices than computers – smartphones, tablets and the like – this ability for Cloud service providers to control or manage users is enhanced. 
We owe the vocabulary of “enclosure” to Hungarian-Canadian political economist Karl Polanyi. In his seminal work, The Great Transformation, Polanyi described the enclosure movement in England in which communally integrated and collective farming practices on common lands were suppressed by authorities of the state, forcefully and sometimes brutally, in order to privatize land resources and create the conditions for a market economy in both agriculture as well as other sectors. More recently, the term “enclosure” has been used effectively by American intellectual property scholars such as James Boyle to describe the manner in which intellectual property rules and the concurrent practices of IP rights holders (for copyright, often large corporate interests) in the age of the internet were being used to restrict access to the public domain of ideas or the information commons. 
I argue that the Cloud, unless monitored and possibly directed, has the potential to go beyond undermining copyright and the public domain – Enclosure 2.0 – and to go beyond weakening privacy. This round, which I call “Enclosure 3.0”, has the potential to disempower internet users and conversely empower a very small group of gatekeepers. Put bluntly, it has the potential to relegate internet users to the status of digital sheep. 
By focusing on the entities that provide Cloud services, I argue that we might take steps to encourage or, if necessary, force private entities to keep the Cloud open and accessible in the long term. I also posit the desirability of a publicly-held Cloud to achieve this same end.
Let's not quibble about Polanyi (a vocabulary of 'enclosure' was in use a century before he arrived on the scene). Lametti, in discussing a public cloud, comments that -
So we must also be open to the possibility of the need to create a publicly-delivered Cloud to allow access to those who either cannot afford to use the privately-held public Cloud or who may not wish to participate under restrictive terms (or run the risk that they will become too restrictive). It would also give a voice to those who wish to maintain the various open software and public domain projects seen thus far on the internet. As such, a publicly-held Cloud does not have to be a massive investment in infrastructure. It is perhaps ironic, however, that the most important function of maintaining some sort of publicly-held Cloud, even if only a small one, is the positive impact that it will have on the privately-held Cloud. A Cloud that is open, inexpensive, flexible and secure is in effect a competitor in providing services on the Cloud and will hopefully encourage similar features throughout the Cloud. 
For the time being, in skeletal form, I would argue that the publicly-held Cloud needs to be created, bolstered and maintained by:
  •  providing resources to public actors (like universities) for building the computing and storage infrastructure to create and maintain a minimal, publicly-delivered Cloud service; 
  • encouraging open software, open access, open knowledge and digital sharing movements to continue and, where possible, to provide Cloud services; 
  • where necessary, encouraging or forcing universities and other agencies funded by the state to maintain a Cloud, providing the various kinds of Cloud services (SaaS, IaaS, PaaS) not only to their staff and students but also to the wider community; and 
  • perhaps using public-private partnerships (PPPs).
Admittedly, this last scenario is a more challenging option, but might nevertheless be appropriate in those contexts where states do not have the capacities in their public institutions to provide internet and Cloud services. It may also be the case – as has been the case in the varied contexts and economic histories of many countries – that the quango (or quasi-autonomous state agency, Crown corporation, etc.) is the appropriate tool for the development of this critical resource. No good idea for a hybrid solution should be rejected a priori. Different countries might find different solutions depending on their policy contexts. 
Moreover, I would argue that governments need to ensure that the privately-held Cloud remains accessible by:
  • mandating and implementing the highest standards of interoperability in Cloud technology, encouraging the use of open platforms and open access software, and barring attempts by individual providers to lock their systems; 
  • protecting users from monopolistic business practices through competition and consumer law; 
  • requiring privately-delivered Cloud service providers to make space available to community-driven projects such as Ubuntu; 
  • mandating and implementing the highest privacy standards perhaps via a user’s bill of rights; and 
  • mandating the highest standard of basic user rights, again perhaps via a user’s bill of rights.
Further, as far as possible, it would be beneficial to make the privately-held Cloud conform to these last desiderata, either through positive legislation or incentives. As regards the architecture of the publicly-held Cloud, the availability of resources (human know-how, physical infrastructure and ongoing financial resources) is necessary. The key may very well be in “reminding” universities and public research centres of their public vocation, which in Europe, Canada and the US could work effectively, provided that the resources to maintain the public Cloud are indeed furnished. But the use of universities, for example, does not preclude other loci for the provision of cloud computing capacities. Collaborations among governments, say the EU and Canada, might be encouraged to build facilities – built and perhaps operated jointly – in northern climates that are cold enough to cool servers and close to clean sources of electricity, resources currently necessitated by Cloud server technology. 
I am aware that governments have not always been the most virtuous players on the internet. They have blocked access to the internet, and its content, and even governments generally considered to be “responsible” and “democratic” have used it for surveillance purposes. Indeed, in some places it is clear that governments ought best be feared. Hence, there is also a serious, related concern with the possibility that governments may use the potential controllability of the Cloud as an efficient means to gather information about individual users for a variety of purposes. Acknowledging this fact, I would still maintain that a collaboration between accountable governments and government institutions, on their own or with the private sector, could set a high ethical standard for internet and Cloud participation. 
Thus, in the end, polycentric solutions – private, directly provided government services, and indirectly “government-encouraged” services by public, quasi-public and even private actors – will form part of the mix in keeping the Cloud’s gates from being controlled by a private Cerberus. Of course, this means that governments will need to take a proactive role domestically, and cooperate at an international level. But hopefully even the most minimalist political ideology will (1) see the importance of this role for the development of its own citizenry and economy, and (2) find within the various governance options ones that it can implement according to its own philosophy.

31 July 2012

Cloudy Weather

The Article 29 Working Party - the EU data protection policy body that comprises representatives of the 27 EU data protection authorities, the European Data Protection Supervisor and the European Commission - has formally adopted a 27-page Opinion on cloud computing [PDF].

The Opinion is aimed at cloud providers (processors) and users of cloud services (data controllers), with an emphasis on greater understanding of their responsibilities. Its recommendations include requiring cloud providers to tell their clients where their data may be physically stored, to delete all personal data in the cloud once it is no longer necessary, and to inform clients about any sub-contractors they plan to use to process data. It also includes specific recommendations covering transfer of European data to the US, notably that cloud clients demand the implementation of data protection safeguards via model contract clauses or a legal agreement which imposes regular reporting and auditing requirements on cloud providers to prove that data is being handled according to EU law.

The Article 29 Working Party comments that -
In this Opinion the Article 29 Working Party analyses all relevant issues for cloud computing service providers operating in the European Economic Area (EEA) and their clients specifying all applicable principles from the EU Data Protection Directive (95/46/EC) and the e-privacy Directive 2002/58/EC (as revised by 2009/136/EC) where relevant. 
Despite the acknowledged benefits of cloud computing in both economic and societal terms, this Opinion outlines how the wide-scale deployment of cloud computing services can trigger a number of data protection risks, mainly a lack of control over personal data as well as insufficient information with regard to how, where and by whom the data is being processed/sub-processed. These risks need to be carefully assessed by public bodies and private enterprises when they are considering engaging the services of a cloud provider. This Opinion examines issues associated with the sharing of resources with other parties, the lack of transparency of an outsourcing chain consisting of multiple processors and subcontractors, the unavailability of a common global data portability framework and uncertainty with regard to the admissibility of the transfer of personal data to cloud providers established outside of the EEA. Similarly, a lack of transparency in terms of the information a controller is able to provide to a data subject on how their personal data is processed is highlighted in the Opinion as a matter of serious concern. Data subjects must be informed of who processes their data and for what purposes, and must be able to exercise the rights afforded to them in this respect. 
A key conclusion of this Opinion is that businesses and administrations wishing to use cloud computing should conduct, as a first step, a comprehensive and thorough risk analysis. All cloud providers offering services in the EEA should provide the cloud client with all the information necessary to rightly assess the pros and cons of adopting such a service. Security, transparency and legal certainty for clients should be key drivers behind the offer of cloud computing services. In terms of the recommendations contained in this Opinion, a cloud client’s responsibilities as a controller are highlighted and it is thus recommended that the client should select a cloud provider that guarantees compliance with EU data protection legislation. Appropriate contractual safeguards are addressed in the Opinion with the requirement that any contract between the cloud client and cloud provider should afford sufficient guarantees in terms of technical and organizational measures. Also of significance is the recommendation that the cloud client should verify whether the cloud provider can guarantee the lawfulness of any cross-border international data transfers. 
Like any evolutionary process, the rise of cloud computing as a global technological paradigm represents a challenge. This Opinion, as it stands, can be deemed an important step in defining the tasks to be assumed in this regard by the data protection community in the coming years.

03 January 2012

Dark Clouds

'Government Cloud Computing and the Policies of Data Sovereignty' by Kristina Irion argues that -
Government cloud services are a new development at the intersection of electronic government and cloud computing which holds the promise of rendering government service delivery more effective and efficient. Cloud services are virtual, dynamic and potentially stateless, which has triggered governments’ concern about data sovereignty. This paper explores data sovereignty in relation to government cloud services and how national strategies and international policy evolve. It concludes that for countries data sovereignty presents a legal risk which cannot be adequately addressed with technology or through contractual arrangements alone. Governments therefore adopt strategies to retain exclusive jurisdiction over government information.
She concludes -
If cloud computing is the next paradigm in computing, then governments cannot miss this trend and will continue to migrate public digital assets to cloud services. Governments find themselves in a dilemma: how to ensure sovereignty over data residing in the cloud, which is virtual, dynamic and potentially stateless. Data sovereignty is an ideal conception of information ownership which compensates for the progressing virtualization of information, where digital data is stored and processed remotely. For governments this means:
Government’s control over all virtual public assets, which are not in the public domain, irrespective whether they are stored on own or third parties’ facilities and premises, and which are governed under an effective information assurance framework, including, where appropriate, strategies to retain exclusive jurisdiction over government information.
Countries treat this issue as a legal risk which cannot be adequately addressed with technology or through contractual arrangements alone. Hence, in applying their national risk-management strategies, the countries surveyed (US, UK, Australia, and Canada) restrict cloud solutions for sensitive government information (medium- and high-risk) to their territory, which contradicts the cloud technology’s global philosophy.
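The territorial restriction described above amounts to a simple placement policy: data classified as medium- or high-risk may only reside in-territory, while low-risk data is unconstrained. The sketch below is purely illustrative; the risk labels and region names are hypothetical and not drawn from any surveyed country's actual scheme.

```python
# Hypothetical sketch of a territorial placement rule for government
# cloud data. Region names and risk labels are invented for illustration.
ALLOWED_DOMESTIC_REGIONS = {"gov-domestic-1", "gov-domestic-2"}

def placement_allowed(risk_level: str, region: str) -> bool:
    """Return True if data of the given risk level may be stored in region.

    Medium- and high-risk data is confined to in-territory regions;
    low-risk data is not territorially restricted.
    """
    if risk_level in {"medium", "high"}:
        return region in ALLOWED_DOMESTIC_REGIONS
    return True

assert placement_allowed("high", "gov-domestic-1")
assert not placement_allowed("medium", "offshore-region-1")
assert placement_allowed("low", "offshore-region-1")
```

The point of the sketch is only that such a rule is trivial to state and enforce domestically, while (as the paper notes) it works against the cloud's global, location-independent design.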

The call for international policy to remedy the complexity of divergent, and at times conflicting, regulations of different countries pertaining to cloud computing can help to establish a viable commercial environment. International standard-setting may, however, not go far enough to provide a solution to governments’ data sovereignty concerns over transborder flows of government data. From a risk-management point of view, the territoriality paradigm, which favors national cloud services, would preempt any international agreement built on mutual trust.

Besides, the concept of data sovereignty offers a proposition for strengthening the link between the data owner and all types of data, not limited to the protection of personal information. Cloud computing presents a scenario to argue that it is not enough to update and harmonize existing regulation; information ownership rights must be taken to a new level.

06 April 2011

Privacy in the cloud

'The Problem of 'Personal Data' in Cloud Computing - What Information is Regulated?' (Queen Mary University of London, School of Law Legal Studies Research Paper No. 75/2011) by W Kuan Hon, Christopher Millard & Ian Walden argues that -
Cloud computing service providers, even those based outside Europe, may become subject to the EU Data Protection Directive's extensive and complex regime purely through their customers' choices, of which they may have no knowledge or control. We consider the definition and application of the EU 'personal data' concept in the context of anonymisation / pseudonymisation, encryption and data fragmentation in cloud computing, arguing that the definition should be based on the realistic risk of identification, and that the applicability of data protection rules should be based on the risk of harm and its likely severity. In particular, the status of encryption and anonymisation / pseudonymisation procedures should be clarified to promote their use as privacy-enhancing techniques; data encrypted and secured to recognised standards should not be considered 'personal data' in the hands of those without access to the decryption key, such as many cloud computing providers; and finally, unlike, for example, social networking sites, Infrastructure as a Service and Platform as a Service providers (and certain Software as a Service providers) offer no more than utility infrastructure services, and may not even know if information processed using their services is 'personal data' (hence, the 'cloud of unknowing'), so it seems inappropriate for such cloud infrastructure providers to become arbitrarily subject to EU data protection regulation due to their customers' choices.
The authors conclude -
We have advanced proposals which we suggest would enable data protection laws to cater for cloud computing and other technological developments in a clearer and more balanced way.

An accountability approach to data protection responsibilities should be taken by raising the threshold inherent in the 'personal data' definition, basing it instead on the realistic risk of identification and considering a continuum or spectrum of parties (depending on the circumstances) who may be processing personal data, each having varying degrees of obligations and liabilities under data protection law, with the risk of identification and risk of harm (and its likely severity) being the key factors. Such an approach should result in lighter, or even no, data protection regulation of passive utility infrastructure cloud providers, while reinforcing the obligation of cloud providers who knowingly and actively process personal data to handle such data appropriately.

More specifically, it is important to clarify the status of encrypted data and anonymised data to ensure that securely-encrypted data are not treated as 'personal data'. The legal status of the encryption or anonymisation procedure, i.e. converting personal data into an encrypted or anonymised state, also needs consideration and clarification.
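To make the pseudonymisation procedure at issue concrete, here is a minimal sketch (not taken from the paper): a keyed hash replaces a direct identifier while preserving linkability for whoever holds the key. Whether the output remains 'personal data' in the hands of a party without the key is exactly the question the authors say needs clarification. The function name and key are hypothetical.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    The pseudonym is deterministic, so records about the same person
    can still be linked, but without the secret key there is no
    realistic way to recover the identifier from the hash alone.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"held-by-the-data-controller-only"  # hypothetical key
p1 = pseudonymise("alice@example.com", key)
p2 = pseudonymise("alice@example.com", key)

assert p1 == p2             # deterministic: linkage is preserved
assert "alice" not in p1    # direct identifier is no longer visible
```

On the paper's risk-based reading, the controller holding the key still processes personal data, while a cloud provider storing only the hex digests arguably faces no realistic risk of identification.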

As for the industry, cloud computing providers, especially infrastructure providers, may wish to consider developing and putting into place measures to minimise the likelihood of their cloud service being regulated inappropriately by EU data protection laws, such as encryption at the user end by default. They may also benefit from providing more transparency about their sharding and other operational procedures, and from continuing work on developing industry standards, such as on encryption of data to be stored in the cloud, including various elements of privacy by design. Such an emphasis on standards, while facilitating a more flexible and pragmatic approach to the regulation of the various actors in the cloud ecosystem, should also help to shift regulatory focus back to protecting the interests of individuals.
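The data fragmentation ('sharding') the authors mention can be illustrated with a toy two-way split: each fragment in isolation is statistically random, so a provider storing only one fragment arguably processes no 'personal data'. This is a minimal one-time-pad sketch for illustration only, not any provider's actual procedure.

```python
import secrets

def split_record(data: bytes) -> tuple[bytes, bytes]:
    """Split data into two fragments via a one-time-pad XOR.

    Each fragment on its own is indistinguishable from random bytes;
    only a party holding both fragments can reconstruct the record.
    """
    pad = secrets.token_bytes(len(data))
    return pad, bytes(a ^ b for a, b in zip(pad, data))

def combine(frag_a: bytes, frag_b: bytes) -> bytes:
    """Recombine the two fragments into the original record."""
    return bytes(a ^ b for a, b in zip(frag_a, frag_b))

record = b"name=Alice;dob=1980-01-01"
frag_a, frag_b = split_record(record)  # stored with two separate providers
assert combine(frag_a, frag_b) == record
```

As with encryption, the legal effect turns on who can realistically reassemble the fragments, which is why the authors argue the applicability of data protection rules should track the realistic risk of identification rather than the bare fact of storage.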