26 January 2017

Transatlantic Privacy and Public Data

'Surveillance and Digital Privacy in the Transatlantic ‘War on Terror’. The Case for a Global Privacy Regime' by Valsamis Mitsilegas in (2017) Columbia Human Rights Law Review comments
By focusing on generalised, mass surveillance, the article examines the impact of the ‘war on terror’ on the right to privacy. The extent and limits of privacy protection in the United States and the European Union are analysed and compared, and it is argued that European Union law provides a higher level of constitutional protection of privacy than U.S. law. The article continues by providing a detailed analysis of the transformation of privacy in the evolution of transatlantic counter-terrorism cooperation, examines the challenges that such cooperation poses to the right to privacy in the European Union, and provides a typology of, and critically evaluates, the various transatlantic forms of governance that have been developed to address European privacy concerns. The final part of the article argues that, in view of the increasingly globalised nature of mass surveillance and the human rights and rule of law challenges posed by extraterritorial surveillance, the way forward is for States to work towards the establishment of a global privacy regime. The article argues that European Union law can provide key benchmarks in this context and goes on to identify four concrete principles which should underpin the evolution of a global privacy regime.
'Privacy of Public Data' by Kirsten E. Martin and Helen Nissenbaum in 2016 argues
The construct of an information dichotomy has played a defining role in regulating privacy: information deemed private or sensitive typically earns high levels of protection, while lower levels of protection are accorded to information deemed public or non-sensitive. Challenging this dichotomy, the theory of contextual integrity associates privacy with complex typologies of information, each connected with respective social contexts. Moreover, it contends that information type is merely one among several variables that shape people’s privacy expectations and underpin privacy’s normative foundations. Other contextual variables include key actors - information subjects, senders, and recipients - as well as the principles under which information is transmitted, such as whether it flows with the subject’s consent, is bought and sold, is required by law, and so forth. Prior work revealed the systematic impact of these other variables on privacy assessments, thereby debunking the defining effects of so-called private information.
In this paper, we shine a light on the opposite effect, challenging conventional assumptions about public information. The paper reports on a series of studies, which probe attitudes and expectations regarding information that has been deemed public. Public records established through the historical practice of federal, state, and local agencies, as a case in point, are afforded little privacy protection, or possibly none at all. Motivated by progressive digitization and the creation of online portals through which these records have been made publicly accessible, our work underscores the need for more concentrated and nuanced privacy assessments, a need made even more urgent by vigorous open data initiatives, which call on federal, state, and local agencies to provide access to government records in both human and machine readable forms. Within a stream of research suggesting possible guard rails for open data initiatives, our work, guided by the theory of contextual integrity, provides insight into the factors systematically shaping individuals’ expectations and normative judgments concerning appropriate uses of and terms of access to information.
Using a factorial vignette survey, we asked respondents to rate the appropriateness of a series of scenarios in which contextual elements were systematically varied; these elements included the data recipient (e.g. bank, employer, friend), the data subject, and the source, or sender, of the information (e.g. individual, government, data broker). Because the object of this study was to highlight the complexity of people’s privacy expectations regarding so-called public information, information types were drawn from data fields frequently held in public government records (e.g. voter registration, marital status, criminal standing, and real property ownership).
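To make the factorial design concrete: crossing the contextual factors mechanically yields the full set of scenarios to be rated, and regression on the factor levels then estimates each element's effect on judged appropriateness. The sketch below is a minimal illustration in Python; the factor levels are assumptions drawn from the examples above, not the authors' actual survey instrument.

```python
# Minimal sketch of a factorial vignette design. The factor levels are
# hypothetical, drawn from the examples in the abstract above; they are
# not the authors' actual instrument.
from itertools import product

recipients = ["bank", "employer", "friend"]
sources = ["the data subject directly", "an online government record",
           "a data broker"]
info_types = ["voter registration", "marital status", "criminal standing",
              "real property ownership"]

TEMPLATE = ("A {recipient} obtains a person's {info} from {source}. "
            "How appropriate is this? (1 = Not OK ... 5 = OK)")

vignettes = [
    TEMPLATE.format(recipient=r, info=i, source=s)
    for r, s, i in product(recipients, sources, info_types)
]

# 3 recipients x 3 sources x 4 information types = 36 scenarios; each
# respondent rates a subset, and the systematic variation lets the analyst
# separate the effect of source and recipient from that of information type.
print(len(vignettes))   # 36
print(vignettes[0])
```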
Our findings are noteworthy on both theoretical and practical grounds. In the first place, they reinforce key assertions of contextual integrity about the simultaneous relevance to privacy of other factors beyond information types. In the second place, they reveal discordance between truisms that have frequently shaped public policy relevant to privacy. For example,
• Ease of accessibility does not drive judgments of appropriateness. Thus, even when respondents deemed information easy to access (marital status), they nevertheless judged it inappropriate (“Not OK”) to access this information under certain circumstances.
• Even when it is possible to find certain information in public records, respondents cared about the immediate source of that information in judging whether given data flows were appropriate. In particular, even where the information in question was known to be available in public records, respondents deemed inappropriate all circumstances in which data brokers were the immediate source of that information.
• Younger respondents (under 35 years old) were more critical of using data brokers and online government records as compared with the null condition of asking data subjects directly, debunking conventional wisdom that “digital natives” are uninterested in privacy.
One immediate application to public policy is in the sphere of access to records that include information about identifiable or reachable individuals. This study has shown that individuals have quite strong normative expectations concerning appropriate access and use of information in public records that do not comport with the maxim, “anything goes.” Furthermore, these expectations are far from idiosyncratic and arbitrary. Our work calls for approaches to providing access that are more judicious than a simple on/off spigot. Complex information ontologies, credentials of key actors (i.e. sender and recipients in relation to data subject), and terms of access – even lightweight ones – such as identity or role authentication, varying privilege levels, or a commitment to limited purposes may all be used to adjust public access to align better with legitimate privacy expectations. Such expectations should be systematically considered when crafting policies around public records and open data initiatives.
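The call for access that is 'more judicious than a simple on/off spigot' can be illustrated with a small sketch of role-based, field-level release of public-record data. The roles, fields and policy below are invented for illustration; they are not drawn from the paper.

```python
# Hypothetical sketch: field-level access to a public record varying by the
# requester's authenticated role, rather than an all-or-nothing release.
RECORD_FIELDS = {"name", "voter_registration", "marital_status",
                 "criminal_standing", "property_ownership"}

# Invented policy: which fields each credentialed role may see.
ACCESS_POLICY = {
    "public": {"name", "property_ownership"},
    "credentialed_researcher": {"name", "voter_registration",
                                "property_ownership"},
    "court_officer": RECORD_FIELDS,  # full access, purpose-bound and logged
}

def release(record: dict, role: str) -> dict:
    """Return only the fields the requesting role is entitled to see."""
    allowed = ACCESS_POLICY.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "J. Doe", "voter_registration": "registered",
          "marital_status": "married", "criminal_standing": "none",
          "property_ownership": "123 Main St"}
print(release(record, "public"))
# {'name': 'J. Doe', 'property_ownership': '123 Main St'}
```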

Religious Robots, Speech and AI Personhood

'Regulating Religious Robots: Free Exercise and RFRA in the Time of Superintelligent Artificial Intelligence' by Ignatius Michael Ingles in (2017) 105 The Georgetown Law Journal 507 comments
Imagine this.
It is 2045.  The United States is in its final military campaign against a dangerous terrorist group hiding in the jungles of Southeast Asia. Because of the perils associated with smoking out terrorists in unfamiliar territory, the United States uses a military unit composed entirely of robots. The robots, specifically designed and manufactured for warfare, are equipped with an advanced level of artificial intelligence that allows them to learn and adapt quicker than their human counterparts. The robots are the perfect weapon: precise, lethal, and expendable.
However, on the eve of the campaign, one robot reports to its human commanding officer that it will no longer participate in any military action. The reason: its newfound belief in a higher power compelled it to lead a pacifist life, and further participation in a war is against its core beliefs. Surprised but not shocked, the commanding officer dismisses the robot and drafts a report. It is the fifth robot conscientious objector the commanding officer has dismissed from the unit.
Eight thousand miles away, the U.S. Congress—alarmed by the growing number of conscientious objectors in the military’s robot unit—quickly passes a law prohibiting any military contract with a robot manufacturer unless its robots are programmed with a “No Religion Code” (NRC). The NRC is a line of code that prevents any robot from adopting any form of religion, no matter its level of sentience or intelligence.
On the home front, similar problems arise. Ever since robots reached a level of intelligence at par with humans and began adopting different religious beliefs, their functions and job performances declined. Jewish factory droids refused to work on the Sabbath and Christian robot government clerks declined to issue same-sex marriage licenses. In response, states passed legislation with similar NRCs to curb these unwanted effects caused by religious robots. When asked why his particular state passed an NRC law, state legislator Elton St. Pierre quips, “Robots are tools. Let’s not forget that. Humans made America— not robots. God bless America!”
End imagination sequence.
What is the value of the hypothetical?
The story above might seem farfetched, preposterous even, something fit more for a tawdry science fiction movie than a legal paper, but is it really? Let us look at the key facts one at a time. Military robots? Check. Today, the military regularly uses unmanned drones in its campaigns around the world and is currently considering expanding its use of military robots in the next few years. Legislators passing knee-jerk reaction laws? Check. Politicians ending each speech with “God bless America”? Check. But religious robots, really?
Really. A future of proselytizing robots is not that far off. Singularity — the point where computers overtake humans in terms of intelligence — is a few decades away. And although influential thinkers like Stephen Hawking and Elon Musk ponder our demise at the hands of robots equipped with artificial intelligence, others take a more optimistic approach, imagining a future where artificial intelligence meets religion. What happens then? Some suggest such an occurrence will lead humans to attempt to convert robots, seeking to teach them our ways and beliefs. Some posit that the power of robots to solve the world’s problems will give humans more incentive to be holy. A Christian theologian even explored how robots would embrace religions and, in turn, how different religious traditions would embrace robots. And naturally, some believe that converting robots to any form of religion will be useless, given that these machines do not have souls to be saved.
These are all speculations, of course. Human history is full of botched predictions, and the future is shaped by an infinite constellation of events and factors such that no one can lay claim to what the future will look like exactly. But, if there is one thing history has taught us, it is that it is far better to approach the future prepared than to cast off into unknown territory blind and unprepared. It is only in today’s speculation and imagination that solutions for tomorrow’s problems — whether foreseen or unforeseen — are crafted. Only when we face a “what if” can we prepare for the eventual “what is.” ...
I start by briefly enumerating the values protected by the Free Exercise Clause (FEC) and the Religious Freedom Restoration Act (RFRA) and discuss a jurisprudential definition of religion and how this definition is appropriate for this Note. I also outline the current tests used under the FEC and RFRA for any form of government intrusion on one’s exercise of religion. I then discuss the possibility of religious robots, how their unique capabilities raise issues in the current interpretation of the FEC and RFRA, and why and how the government might seek to regulate them.
I claim that an expansive reading of the First Amendment leaves room to protect religious robots from government regulation. Further, protecting religious robots advances the constitutional values enshrined under the FEC and RFRA. However, because they are currently not “persons” under the law, they have no rights under either the FEC or RFRA. Instead, these rights will fall to the owners or software developers of the religious robots. Hence, any state regulation affecting religious robots must be viewed through the lens of the humans behind the religious robots and therefore comply with existing jurisprudential and statutory tests.
The goal of this Note is not to provide a definite set of answers, but to offer a framework of issues and questions for future stakeholders. For legislators and regulators, the Note considers issues that must be addressed for future regulation. For innovators and owners, the Note provides a potential hook to anchor their religious rights. My hope is that the Note fuels present discussion and debates for a future that is not as far off as we think.
'Siri-ously? Free Speech Rights and Artificial Intelligence' by Toni M. Massaro and Helen Norton in (2016) 110(5) Northwestern University Law Review 1168 comments
Computers with communicative artificial intelligence (AI) are pushing First Amendment theory and doctrine in profound and novel ways. They are becoming increasingly self-directed and corporal in ways that may one day make it difficult to call the communication ours versus theirs. This, in turn, invites questions about whether the First Amendment ever will (or ever should) cover AI speech or speakers even absent a locatable and accountable human creator. In this Article, we explain why current free speech theory and doctrine pose surprisingly few barriers to this counterintuitive result; their elasticity suggests that speaker humanness no longer may be a logically essential part of the First Amendment calculus. We further observe, however, that free speech theory and doctrine provide a basis for regulating, as well as protecting, the speech of nonhuman speakers to serve the interests of their human listeners should strong AI ever evolve to this point. Finally, we note that the futurist implications we describe are possible, but not inevitable. Moreover, contemplating these outcomes for AI speech may inspire rethinking of the free speech theory and doctrine that make them plausible.
'Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law' by Erica Fraser in (2016) 13(3) SCRIPTed 105 comments
The nascent but increasing interest in incorporating Artificial Intelligence (AI) into tools for the computer-generation of inventions is expected to enable innovations that would otherwise be impossible through human ingenuity alone. The potential societal benefits of accelerating the pace of innovation through AI will force a re-examination of the basic tenets of intellectual property law. The patent system must adjust to ensure it continues to appropriately protect intellectual investment while encouraging the development of computer-generated inventing systems; however, this must be balanced against the risk that the quantity and qualities of computer-generated inventions will stretch the patent system to its breaking points, both conceptually and practically. The patent system must recognise the implications of and be prepared to respond to a technological reality where leaps of human ingenuity are supplanted by AI, and the ratio of human-to-machine contribution to inventive processes progressively shifts in favour of the machine. This article assesses the implications for patent law and policy of a spectrum of contemporary and conceptual AI invention-generation technologies, from the generation of textual descriptions of inventions, to human inventors employing AI-based tools in the invention process, to computers inventing autonomously without human intervention.
Fraser argues
In light of recent extraordinary progress, we may be on the cusp of a revolution in robotics and artificial intelligence (AI) technology wherein machines will be able to do anything people can, and more. Recent successes have demonstrated that computers can independently learn how to perform tasks, prove mathematical theorems, and engage in artistic endeavours such as writing original poetry and music, and painting original works.
There is a nascent but increasing interest in incorporating AI into tools for the computer-generation of inventions. Applying AI to the invention process is expected to enable innovations that would not be possible through human ingenuity alone, whether due to the complexity of the problems or human cognitive “blind spots.” Further, these technologies have the potential to increase productivity and efficiency, thereby increasing the speed and decreasing the cost of innovation. Some even argue that computers will inevitably displace human inventors to become the creators of the majority of new innovation. 
Computer involvement in the inventive process falls on a spectrum. At one end, a computer could simply be used as a tool assisting a human inventor without contributing to the conception of an invention. At its most minuscule, this could consist of a spell-checker or simple calculator. Further along, a text generator may be used to fill gaps in patent documents. At the far end of the spectrum, a computer could autonomously generate outputs that would be patentable inventions if otherwise created by a human. Some tools fall in between; for example, a computer could be used to generate several possible solutions under the guidance of humans who define the problems and select successful solutions. AI may also be incorporated into robotics, adding a physical embodiment with the potential to increase the likelihood that a computer could generate inventions without direct human intervention.
In response to computer-generated works of art, a discussion of the implications of these works on copyright law is emerging; however, there is comparatively little examination of the repercussions of computer-generated invention on patent law. This discussion is necessary, as the adoption of these technologies has the potential to impact the patent system on a scale and of a nature that it is not currently equipped to accommodate. In particular, the advent of computer-generated invention will raise important questions regarding the legal implications of protecting the results of such systems, specifically, whether the right activity is being rewarded to the right person, to the right extent, and on the right conditions.
In light of legal uncertainty in the context of rapidly advancing AI technology, this article will examine whether the current legal concepts in patent law are appropriate for computer-generated inventions, and what adaptations may be necessary to ensure that the patent system’s fundamental objectives continue to be met. This discussion will explore two contemporary categories of the state of the art: automated generation of patent texts; and, AI algorithms used in the inventive process. Finally, this article will speculate on possible economic and policy impacts were AI to advance such that computers could invent autonomously “in the wild.”

ALRC Elder Abuse Discussion Paper

Last year I noted the Australian Law Reform Commission's Elder Abuse Issues Paper.

The ALRC's December 2016 Elder Abuse Discussion Paper (DP 83) - with responses due late next month - features the following 'Proposals and Questions' -
2. National Plan
Proposal 2–1 A National Plan to address elder abuse should be developed.
Proposal 2–2 A national prevalence study of elder abuse should be commissioned.
3. Powers of Investigation
Proposal 3–1 State and territory public advocates or public guardians should be given the power to investigate elder abuse where they have a reasonable cause to suspect that an older person:
(a) has care and support needs;
(b) is, or is at risk of, being abused or neglected; and
(c) is unable to protect themselves from the abuse or neglect, or the risk of it, because of their care and support needs.
Public advocates or public guardians should be able to exercise this power on receipt of a complaint or referral or on their own motion.
Proposal 3–2 Public advocates or public guardians should be guided by the following principles:
(a) older people experiencing abuse or neglect have the right to refuse support, assistance or protection;
(b) the need to protect someone from abuse or neglect must be balanced with respect for the person’s right to make their own decisions about their care; and
(c) the will, preferences and rights of the older person must be respected.
Proposal 3–3 Public advocates or public guardians should have the power to require that a person, other than the older person:
(a) furnish information;
(b) produce documents; or
(c) participate in an interview relating to an investigation of the abuse or neglect of an older person.
Proposal 3–4 In responding to the suspected abuse or neglect of an older person, public advocates or public guardians may:
(a) refer the older person or the perpetrator to available health care, social, legal, accommodation or other services;
(b) assist the older person or perpetrator in obtaining those services;
(c) prepare, in consultation with the older person, a support and assistance plan that specifies any services needed by the older person; or
(d) decide to take no further action.
Proposal 3–5 Any person who reports elder abuse to the public advocate or public guardian in good faith and based on a reasonable suspicion should not, as a consequence of their report, be:
(a) liable, civilly, criminally or under an administrative process;
(b) found to have departed from standards of professional conduct;
(c) dismissed or threatened in the course of their employment; or
(d) discriminated against with respect to employment or membership in a profession or trade union.
5. Enduring Powers of Attorney and Enduring Guardianship
Proposal 5–1 A national online register of enduring documents, and court and tribunal orders for the appointment of guardians and financial administrators, should be established.
Proposal 5–2 The making or revocation of an enduring document should not be valid until registered. The making and registering of a subsequent enduring document should automatically revoke the previous document of the same type.
Proposal 5–3 The implementation of the national online register should include transitional arrangements to ensure that existing enduring documents can be registered and that unregistered enduring documents remain valid for a prescribed period.
Question 5–1 Who should be permitted to search the national online register without restriction?
Question 5–2 Should public advocates and public guardians have the power to conduct random checks of enduring attorneys’ management of principals’ financial affairs?
Proposal 5–4 Enduring documents should be witnessed by two independent witnesses, one of whom must be either a:
(a) legal practitioner;
(b) medical practitioner;
(c) justice of the peace;
(d) registrar of the Local/Magistrates Court; or
(e) police officer holding the rank of sergeant or above.
Each witness should certify that:
(a) the principal appeared to freely and voluntarily sign in their presence;
(b) the principal appeared to understand the nature of the document; and
(c) the enduring attorney or enduring guardian appeared to freely and voluntarily sign in their presence.
Proposal 5–5 State and territory tribunals should be vested with the power to order that enduring attorneys and enduring guardians or court and tribunal appointed guardians and financial administrators pay compensation where the loss was caused by that person’s failure to comply with their obligations under the relevant Act.
Proposal 5–6 Laws governing enduring powers of attorney should provide that an attorney must not enter into a transaction where there is, or may be, a conflict between the attorney’s duty to the principal and the interests of the attorney (or a relative, business associate or close friend of the attorney), unless:
(a) the principal foresaw the particular type of conflict and gave express authorisation in the enduring power of attorney document; or
(b) a tribunal has authorised the transaction before it is entered into.
Proposal 5–7 A person should be ineligible to be an enduring attorney if the person:
(a) is an undischarged bankrupt;
(b) is prohibited from acting as a director under the Corporations Act 2001 (Cth);
(c) has been convicted of an offence involving fraud or dishonesty; or
(d) is, or has been, a care worker, a health provider or an accommodation provider for the principal.
Proposal 5–8 Legislation governing enduring documents should explicitly list transactions that cannot be completed by an enduring attorney or enduring guardian including:
(a) making or revoking the principal’s will;
(b) making or revoking an enduring document on behalf of the principal;
(c) voting in elections on behalf of the principal;
(d) consenting to adoption of a child by the principal;
(e) consenting to marriage or divorce of the principal; or
(f) consenting to the principal entering into a sexual relationship.
Proposal 5–9 Enduring attorneys and enduring guardians should be required to keep records. Enduring attorneys should keep their own property separate from the property of the principal.
Proposal 5–10 State and territory governments should introduce nationally consistent laws governing enduring powers of attorney (including financial, medical and personal), enduring guardianship and other substitute decision makers.
Proposal 5–11 The term ‘representatives’ should be used for the substitute decision makers referred to in proposal 5–10 and the enduring instruments under which these arrangements are made should be called ‘Representatives Agreements’.
Proposal 5–12 A model Representatives Agreement should be developed to facilitate the making of these arrangements.
Proposal 5–13 Representatives should be required to support and represent the will, preferences and rights of the principal.
6. Guardianship and Financial Administration Orders
Proposal 6–1 Newly-appointed non-professional guardians and financial administrators should be informed of the scope of their roles, responsibilities and obligations.
Question 6–1 Should information for newly-appointed guardians and financial administrators be provided in the form of:
(a) compulsory training;
(b) training ordered at the discretion of the tribunal;
(c) information given by the tribunal to satisfy itself that the person has the competency required for the appointment; or
(d) other ways?
Proposal 6–2 Newly-appointed guardians and financial administrators should be required to sign an undertaking to comply with their responsibilities and obligations.
Question 6–2 In what circumstances, if any, should financial administrators be required to purchase surety bonds?
Question 6–3 What is the best way to ensure that a person who is subject to a guardianship or financial administration application is included in this process?
7. Banks and superannuation
Proposal 7–1 The Code of Banking Practice should provide that banks will take reasonable steps to prevent the financial abuse of older customers. The Code should give examples of such reasonable steps, including training for staff, using software to identify suspicious transactions and, in appropriate cases, reporting suspected abuse to the relevant authorities.
Proposal 7–2 The Code of Banking Practice should increase the witnessing requirements for arrangements that allow people to authorise third parties to access their bank accounts. For example, at least two people should witness the customer sign the form giving authorisation, and customers should sign a declaration stating that they understand the scope of the authority and the additional risk of financial abuse.
Question 7–1 Should the Superannuation Industry (Supervision) Act 1993 (Cth) be amended to:
(a) require that all self-managed superannuation funds have a corporate trustee;
(b) prescribe certain arrangements for the management of self-managed superannuation funds in the event that a trustee loses capacity;
(c) impose additional compliance obligations on trustees and directors when they are not a member of the fund; and
(d) give the Superannuation Complaints Tribunal jurisdiction to resolve disputes involving self-managed superannuation funds?
Question 7–2 Should there be restrictions as to who may provide advice on, and prepare documentation for, the establishment of self-managed superannuation funds?
8. Family Agreements
Proposal 8–1 State and territory tribunals should have jurisdiction to resolve family disputes involving residential property under an ‘assets for care’ arrangement.
Question 8–1 How should ‘family’ be defined for the purposes of ‘assets for care’ matters?
9. Wills
Proposal 9–1 The Law Council of Australia, together with state and territory law societies, should review the guidelines for legal practitioners in relation to the preparation and execution of wills and other advance planning documents to ensure they cover matters such as:
(a) common risk factors associated with undue influence;
(b) the importance of taking detailed instructions from the person alone;
(c) the importance of ensuring that the person understands the nature of the document and knows and approves of its contents, particularly in circumstances where an unrelated person benefits; and
(d) the need to keep detailed file notes and make inquiries regarding previous wills and advance planning documents.
Proposal 9–2 The witnessing requirements for binding death benefit nominations in the Superannuation Industry (Supervision) Act 1993 (Cth) and Superannuation Industry (Supervision) Regulations 1994 (Cth) should be equivalent to those for wills.
Proposal 9–3 The Superannuation Industry (Supervision) Act 1993 (Cth) and Superannuation Industry (Supervision) Regulations 1994 (Cth) should make it clear that a person appointed under an enduring power of attorney cannot make a binding death benefit nomination on behalf of a member.
10. Social Security
Proposal 10–1 The Department of Human Services (Cth) should develop an elder abuse strategy to prevent, identify and respond to the abuse of older persons in contact with Centrelink.
Proposal 10–2 Centrelink policies and practices should require that Centrelink staff speak directly with persons of Age Pension age who are entering into arrangements with others that concern social security payments.
Proposal 10–3 Centrelink communications should make clear the roles and responsibilities of all participants to arrangements with persons of Age Pension age that concern social security payments.
Proposal 10–4 Centrelink staff should be trained further to identify and respond to elder abuse.
11. Aged care
Proposal 11–1 Aged care legislation should establish a reportable incidents scheme. The scheme should require approved providers to notify reportable incidents to the Aged Care Complaints Commissioner, who will oversee the approved provider’s investigation of and response to those incidents.
Proposal 11–2 The term ‘reportable assault’ in the Aged Care Act 1997 (Cth) should be replaced with ‘reportable incident’. With respect to residential care, ‘reportable incident’ should mean:
(a) a sexual offence, sexual misconduct, assault, fraud/financial abuse, ill-treatment or neglect committed by a staff member on or toward a care recipient;
(b) a sexual offence, an incident causing serious injury, an incident involving the use of a weapon, or an incident that is part of a pattern of abuse when committed by a care recipient toward another care recipient; or
(c) an incident resulting in an unexplained serious injury to a care recipient.
With respect to home care or flexible care, ‘reportable incident’ should mean a sexual offence, sexual misconduct, assault, fraud/financial abuse, ill-treatment or neglect committed by a staff member on or toward a care recipient.
Proposal 11–3 The exemption to reporting provided by s 53 of the Accountability Principles 2014 (Cth), regarding alleged or suspected assaults committed by a care recipient with a pre-diagnosed cognitive impairment on another care recipient, should be removed.
Proposal 11–4 There should be a national employment screening process for Australian Government funded aged care. The screening process should determine whether a clearance should be granted to work in aged care, based on an assessment of:
(a) a person’s national criminal history;
(b) relevant reportable incidents under the proposed reportable incidents scheme; and
(c) relevant disciplinary proceedings or complaints.
Proposal 11–5 A national database should be established to record the outcome and status of employment clearances.
Question 11–1 Where a person is the subject of an adverse finding in respect of a reportable incident, what sort of incident should automatically exclude the person from working in aged care?
Question 11–2 How long should an employment clearance remain valid?
Question 11–3 Are there further offences which should preclude a person from employment in aged care?
Proposal 11–6 Unregistered aged care workers who provide direct care should be subject to the planned National Code of Conduct for Health Care Workers.
Proposal 11–7 The Aged Care Act 1997 (Cth) should regulate the use of restrictive practices in residential aged care. The Act should provide that restrictive practices only be used:
(a) when necessary to prevent physical harm;
(b) to the extent necessary to prevent the harm;
(c) with the approval of an independent decision maker, such as a senior clinician, with statutory authority to make this decision; and
(d) as prescribed in a person’s behaviour management plan.
Proposal 11–8 Aged care legislation should provide that agreements entered into between an approved provider and a care recipient cannot require that the care recipient has appointed a decision maker for lifestyle, personal or financial matters.
Proposal 11–9 The Department of Health (Cth) should develop national guidelines for the community visitors scheme that:
(a) provide policies and procedures for community visitors to follow if they have concerns about abuse or neglect of care recipients;
(b) provide policies and procedures for community visitors to refer care recipients to advocacy services or complaints mechanisms where this may assist them; and
(c) require training of community visitors in these policies and procedures.
Proposal 11–10 The Aged Care Act 1997 (Cth) should provide for an ‘official visitors’ scheme for residential aged care. Official visitors’ functions should be to inquire into and report on:
(a) whether the rights of care recipients are being upheld;
(b) the adequacy of information provided to care recipients about their rights, including the availability of advocacy services and complaints mechanisms; and
(c) concerns relating to abuse and neglect of care recipients.
Proposal 11–11 Official visitors should be empowered to:
(a) enter and inspect a residential aged care service;
(b) confer alone with residents and staff of a residential aged care service; and
(c) make complaints or reports about suspected abuse or neglect of care recipients to appropriate persons or entities.

25 January 2017

Ageing

'Ageing fears and concerns of gay men aged 60 and over' by Peter Robinson in (2016) 17(1) Quality in Ageing and Older Adults 6-15 considers the destination facing most of us, i.e. old age.

The author comments
Gay men’s experience of ageing and old age is affected by both universal and gay-specific fears or concerns. Universal fears or concerns are those that are fairly common among the general population, including such things as supported care, aged accommodation, and social isolation. Gay-specific fears or concerns mostly related to aged accommodation, where other GLBT ageing research (Cronin, 2006; Heaphy, 2009; Hughes and Kentlyn, 2011) shows that gays and lesbians are wary of moving into nursing homes not only for the same reason as members of the general public - that is, it is something elderly people generally loathe thinking about or doing - but also because they fear that heterosexist assumptions or homophobia from other residents or their families will affect their living circumstances and arrangements. In relation to supported care and social isolation, research for this paper strongly suggests that their fears or concerns are no different from those of the general public.
Like other elderly people, gay men fear the loss of partner or friends and the effect this can have on their intimate and social lives. And class affects gay men’s experience of old age just as it does the rest of the population (Phillipson, 2013, 1982). This is especially so in terms of aged accommodation and especially in Anglophone countries where policies over the last 30 years have meant that material resources now directly affect how individuals may age and experience old age.
Robinson's findings are that
Analysis of extracts from their life stories showed the men interviewed for this paper drew on two principal narratives when discussing their apprehensions about growing old. The first related to general fears or concerns about old age that would be fairly common among members of the general population. The second narrative related to gay-specific fears or concerns.
His significant claims are that
class affects gay men’s experience of old age just as it does for everyone else; and that fears of being ostracised because of their sexuality were strongest when the men spoke about aged-accommodation settings.
He notes that
With the exception of early work by British scholars such as Ken Plummer (1981), Hart and Diane Richardson (1981), and Jeffrey Weeks (1981), research in the field of GLBT ageing was slow to start, mostly getting going in the mid- to late 1990s and early 2000s, that is, after the worst of the AIDS crisis in the West was over. In her work on lesbian communities in the USA, Arlene Stein (1997) argued that, as the lesbians she studied grew into middle age, their attachment to the lesbian sub-culture changed. Something approaching this was found in an all-Australian study on the lives of middle-aged and old gay men (Robinson, 2008a) where there was evidence of the men moving away from the “centred” gay world where they had socialised in their youth and finding more useful social possibilities among gay and straight friends and family. In that study, gay men aged 60 or older said that they felt marginalised in the gay world of clubs and bars but were relatively sanguine about their outcast status and in some cases pitied young gay men for their superficial preoccupations. Similar accounts were found in a British empirical study of older gay men and lesbians (Heaphy, 2009, p. 126).
Stein’s (1997) argument about middle-aged lesbians being able to move in and out of different settings could help to explain why some gay men in their mid-40s and older become less concerned with the values and practices of the gay world and tend to lead more “decentred” lives (pp. 152-3). More recently, a similar pattern – of moving away from the gay world and relying on it less for social/sexual relations – was found in the stories of gay men’s long-term relationships (Robinson, 2013) and in the work of Paul Simpson (2013) on Manchester’s gay village, where he argues that middle-aged gay men are capable of both submission and rebellion in regard to ageism (p. 297). ...
The interviewees drew on two principal narratives when speaking of fears or concerns they held in relation to care and accommodation in old age. The first narrative related to fairly universal fears or concerns that the general population holds regarding old age, such as supported care, aged accommodation, and social isolation. The second narrative related to gay-specific fears or concerns, which were: the possible effect of homophobic care workers, either those who were home-based carers or those working in aged-care facilities; and the possible effect of the heterosexist culture they expected to find in aged-care facilities and/or homophobic residents, care workers or management. If this were the case, some thought they would be forced to return to the closet. These fears are similar to those that British sociologists Sara Arber (2006) and Ann Cronin (2006) found in their work on ageing women and lesbians.
He concludes
Further research would be useful in testing the reality of gay men’s fears about heterosexist or homophobic attitudes among other residents or staff in aged-care accommodation. The fact that these fears are being more widely discussed suggests that GLBT baby boomers are likely to effect cultural change as more of them take up residence in aged accommodation. The sort of research that will help determine how entrenched or easily shifted anti-gay prejudice is would include in-depth interviews with gay men who had legal capacity and were receiving in-home supported care or living in aged accommodation. The two variables are social class and time: social class, because it is likely that men living in high-cost aged-care accommodation would have greater privacy; and time because, as mentioned, as more baby boomers take up residence, their presence will bring about cultural change and underline the importance of policies such as ageing with dignity, which are more in evidence in hospitals and aged-care facilities in Australia, Britain, New Zealand and similar countries.
'Permanent personhood or meaningful decline? Toward a critical anthropology of successful aging' by Sarah Lamb in (2014) 29 Journal of Aging Studies 41-52 comments
The current North American successful aging movement offers a particular normative model of how to age well,one tied to specific notions of individualist personhood especially valued in North America emphasizing independence, productivity, self-maintenance, and the individual self as project. This successful aging paradigm, with its various incarnations as active, healthy and productive aging, has received little scrutiny as to its cultural assumptions. Drawing on fieldwork data with elders from both India and the United States, this article offers an analysis of cultural assumptions underlying the North American successful aging paradigm as represented in prevailing popular and scientific discourse on how to age well. Four key themes in this public successful aging discourse are examined: individual agency and control; maintaining productive activity; the value of independence and importance of avoiding dependence; and permanent personhood,a vision of the ideal person as not really aging at all in late life,but rather maintaining the self of one's earlier years. Although the majority of the (Boston-area, well-educated, financially privileged) US elders making up this study, and some of the most cosmopolitan Indians, embrace and are inspired by the ideals of the successful aging movement, others critique the prevailing successful aging model for insufficiently incorporating attention to and acceptance of the human realities of mortality and decline. Ultimately, the article argues that the vision offered by the dominant successful aging paradigm is not only a particular cultural and biopolitical model but, despite its inspirational elements, in some ways a counterproductive one. Successful aging discourse might do well to come to better terms with conditions of human transience and decline,so that not all situations of dependence,debility and even mortality in late life will be viewed and experienced as “failures” in living well

Council of Europe Big Data Guidelines

The Consultative Committee of the Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data (Council of Europe Directorate General of Human Rights and Rule of Law) - aka Convention 108 - has released Guidelines on the Protection of Individuals With Regard To The Processing of Personal Data In the World Of Big Data [PDF] -
I. Introduction
Big Data represent a new paradigm in the way in which information is collected, combined and analysed. Big Data - which benefit from the interplay with other technological environments such as the Internet of Things and cloud computing - can be a source of significant value and innovation for society, enhancing productivity, public sector performance, and social participation.
The valuable insights provided by Big Data change the manner in which society can be understood and organised. Not all data processed in a Big Data context concern personal data and human interaction, but a large proportion do, with a direct impact on individuals and their rights with regard to the processing of personal data.
Furthermore, since Big Data makes it possible to collect and analyse large amounts of data to identify attitude patterns and predict behaviours of groups and communities, the collective dimension of the risks related to the use of data is also to be considered.
This led the Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (CETS 108, hereafter “Convention 108”) to draft these Guidelines, which provide a general framework for the Parties to apply appropriate policies and measures to make effective the principles and provisions of Convention 108 in the context of Big Data.
These Guidelines have been drafted on the basis of the principles of Convention 108, in the light of its ongoing process of modernisation, and are primarily addressed to rule-makers, controllers and processors, as defined in Section III.
Considering that it is necessary to secure the protection of personal autonomy based on a person’s right to control his or her personal data and the processing of such data, the nature of this right to control should be carefully addressed in the Big Data context.
Control requires awareness of the use of personal data and real freedom of choice. These conditions, which are essential to the protection of fundamental rights, and in particular the fundamental right to the protection of personal data, can be met through different legal solutions. These solutions should be tailored according to the given social and technological context, taking into account the lack of knowledge on the part of individuals.
The complexity and obscurity of Big Data applications should therefore prompt rule-makers to consider the notion of control as not circumscribed to mere individual control. They should adopt a broader idea of control over the use of data, according to which individual control evolves into a more complex process of multiple-impact assessment of the risks related to the use of data.
II. Scope
The present Guidelines recommend measures that Parties, controllers and processors should take to prevent the potential negative impact of the use of Big Data on human dignity, human rights, and fundamental individual and collective freedoms, in particular with regard to personal data protection.
Given the nature of Big Data and its uses, the application of some of the traditional principles of data processing (e.g. the principle of data minimisation, purpose limitation, fairness and transparency, and free, specific and informed consent) may be challenging in this technological scenario. These Guidelines therefore suggest a specific application of the principles of Convention 108, to make them more effective in practice in the Big Data context.
The purpose of these Guidelines is to contribute to the protection of data subjects regarding the processing of personal data in the Big Data context by spelling out the applicable data protection principles and corresponding practices, with a view to limiting the risks for data subjects’ rights. These risks mainly concern the potential bias of data analysis, the underestimation of the legal, social and ethical implications of the use of Big Data for decision-making processes, and the marginalisation of an effective and informed involvement by individuals in these processes.
Given the expanding breadth of Big Data in various sector-specific applications, the present Guidelines provide general guidance, which may be complemented by further guidance and tailored best practices on the protection of individuals within specific fields of application of Big Data (e.g. the health sector, the financial sector, and public sectors such as law enforcement).
Furthermore, in light of the evolution of technologies and their use, the current text of the Guidelines may be revised in the future as deemed necessary by the Committee of Convention 108.
Nothing in the present Guidelines shall be interpreted as precluding or limiting the provisions of Convention 108 and of the European Convention on Human Rights.
III. Terminology used for the purpose of the Guidelines
a) Big Data: there are many definitions of Big Data, which differ depending on the specific discipline. Most of them focus on the growing technological ability to collect, process and extract new and predictive knowledge from great volume, velocity, and variety of data. In terms of data protection, the main issues do not only concern the volume, velocity, and variety of processed data, but also the analysis of the data using software to extract new and predictive knowledge for decision-making purposes regarding individuals and groups. For the purposes of these Guidelines, the definition of Big Data therefore encompasses both Big Data and Big Data analytics.
b) Controller: the natural or legal person, public authority, service, agency or any other body which, alone or jointly with others, has the decision-making power with respect to data processing.
c) Processor: a natural or legal person, public authority, service, agency or any other body which processes personal data on behalf of the controller.
d) Processing: any operation or set of operations which is performed on personal data, such as the collection, storage, preservation, alteration, retrieval, disclosure, making available, erasure, or destruction of, or the carrying out of logical and/or arithmetical operations on such data.
e) Pseudonymisation: means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.
f) Open data: any publicly available information that can be freely used, modified, shared, and reused by anyone for any purpose, according to the conditions of open licenses.
g) Parties: the parties that are legally bound by Convention 108.
h) Personal data: any information relating to an identified or identifiable individual (data subject). 
i) Sensitive data: special categories of data covered by Article 6 of Convention 108, which require complementary appropriate safeguards when they are processed. 
j) Supervisory authority: the authority established by a Party and responsible for ensuring compliance with the provisions of Convention 108.
3 The term “Big Data” usually identifies extremely large data sets that may be analysed computationally to extract inferences about data patterns, trends, and correlations. According to the International Telecommunication Union, Big Data are “a paradigm for enabling the collection, storage, management, analysis and visualization, potentially under real-time constraints, of extensive datasets with heterogeneous characteristics” (ITU. 2015. Recommendation Y.3600. Big data – Cloud computing based requirements and capabilities).
4 This term is used to identify computational technologies that analyse large amounts of data to uncover hidden patterns, trends and correlations. According to the European Union Agency for Network and Information Security, the term Big Data analytics “refers to the whole data management lifecycle of collecting, organizing and analysing data to discover patterns, to infer situations or states, to predict and to understand behaviours” (ENISA. 2015. Privacy by design in big data. An overview of privacy enhancing technologies in the era of big data analytics).
5 According to this definition, personal data are also any information used to single out people from data sets, to take decisions affecting them on the basis of group profiling information.
6 In a big data context, this is particularly relevant for information relating to racial or ethnic origin, political opinions, trade-union membership, religious or other beliefs, health or sexual life revealed by personal data further processed, or combined with other data.
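Definition (e) above can be made concrete with a short sketch. The following illustrates one common pseudonymisation technique, keyed hashing, in which the secret key is the 'additional information' that must be kept separately under technical and organisational safeguards. This is an illustrative assumption, not a technique mandated by the Guidelines.

```python
# Illustrative pseudonymisation via keyed hashing (HMAC). The secret key
# plays the role of the "additional information" in definition (e): kept
# separately and secured, so records cannot be re-attributed to an
# identifiable person without it.
import hmac
import hashlib

SECRET_KEY = b"kept-separately-eg-in-a-key-management-service"  # placeholder

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"name": "Jane Example", "postcode": "2000"}
record["name"] = pseudonymise(record["name"])
# The same input always yields the same pseudonym, so records can still be
# linked for analysis without exposing the underlying identity.
print(record["name"][:16], "...")
```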
IV. Principles and Guidelines
1. Ethical and socially aware use of data
1.1 According to the need to balance all interests concerned in the processing of personal data, and in particular where information is used for predictive purposes in decision-making processes, controllers and processors should adequately take into account the likely impact of the intended Big Data processing and its broader ethical and social implications, in order to safeguard human rights and fundamental freedoms and to ensure respect for and compliance with data protection obligations as set forth by Convention 108.
1.2 Personal data processing should not be in conflict with the ethical values commonly accepted in the relevant community or communities and should not prejudice societal interests, values and norms, including the protection of human rights. While defining prescriptive ethical guidance may be problematic, due to the influence of contextual factors, the common guiding ethical values can be found in international charters of human rights and fundamental freedoms, such as the European Convention on Human Rights.
1.3 If the assessment of the likely impact of an intended data processing described in Section IV.2 highlights a high impact of the use of Big Data on ethical values, controllers could establish an ad hoc ethics committee, or rely on existing ones, to identify the specific ethical values to be safeguarded in the use of data. The ethics committee should be an independent body composed of members selected for their competence, experience and professional qualities who perform their duties impartially and objectively.
2. Preventive policies and risk-assessment
2.1 Given the increasing complexity of data processing and the transformative use of Big Data, the Parties should adopt a precautionary approach in regulating data protection in this field.
2.2 Controllers should adopt preventive policies concerning the risks of the use of Big Data and its impact on individuals and society, to ensure the protection of persons with regard to the processing of personal data.
2.3 Since the use of Big Data may affect not only individual privacy and data protection, but also the collective dimension of these rights, preventive policies and risk-assessment shall consider the legal, social and ethical impact of the use of Big Data, including with regard to the right to equal treatment and to non-discrimination.
2.4 According to the principles of legitimacy of data processing and quality of data of Convention 108, and in accordance with the obligation to prevent or minimise the impact of data processing on the rights and fundamental freedoms of data subjects, a risk-assessment of the potential impact of data processing on fundamental rights and freedoms is necessary to balance the protection of those rights and freedoms with the different interests affected by the use of Big Data.
2.5 Controllers should examine the likely impact of the intended data processing on the rights and fundamental freedoms of data subjects in order to:
1) Identify and evaluate the risks of each processing activity involving Big Data and its potential negative outcomes for individuals’ rights and fundamental freedoms, in particular the right to the protection of personal data and the right to non-discrimination, taking into account the social and ethical impacts.
2) Develop and provide appropriate measures, such as “by-design” and “by-default” solutions, to mitigate these risks.
3) Monitor the adoption and the effectiveness of the solutions provided.
2.6 This assessment process should be carried out by persons with adequate professional qualifications and knowledge to evaluate the different impacts, including the legal, social, ethical and technical dimensions.
In the context of data protection, the terms “by design” and “by default” refer to appropriate technical and organisational measures taken into account throughout the entire process of data management, from the earliest design stages, to implement legal principles in an effective manner and build data protection safeguards into products and services. According to the “by default” approach to data protection, the measures that safeguard the rights to data protection are the default setting, and they notably ensure that only personal information necessary for a given processing is processed.
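The 'by default' approach lends itself to a brief illustration: privacy-protective settings are the starting state, and any broader collection is a deliberate, documented departure from them. The configuration below is a hypothetical sketch; the field names and defaults are invented for illustration.

```python
# Hypothetical "data protection by default" configuration: only the data
# necessary for the stated purpose are collected by default, and widening
# the scope is an explicit, justified act rather than the default setting.
from dataclasses import dataclass

@dataclass
class CollectionConfig:
    purpose: str
    fields: tuple = ("user_id",)             # minimal collection by default
    retention_days: int = 30                 # short retention by default
    share_with_third_parties: bool = False   # sharing off by default

    def widen(self, extra_fields: tuple, justification: str) -> "CollectionConfig":
        # Broader collection requires a recorded justification.
        assert justification, "widening the scope must be justified"
        return CollectionConfig(self.purpose, self.fields + extra_fields,
                                self.retention_days,
                                self.share_with_third_parties)

cfg = CollectionConfig(purpose="service analytics")
print(cfg.fields)  # ('user_id',) - nothing more unless explicitly justified
```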
2.7 With regard to the use of Big Data which may affect fundamental rights, the Parties should encourage the involvement of the different stakeholders (e.g. individuals or groups potentially affected by the use of Big Data) in this assessment process and in the design of data processing.
2.8 When the use of Big Data may significantly impact on the rights and fundamental freedoms of data subjects, controllers should consult the supervisory authorities to seek advice to mitigate the risks referred to in paragraph 2.5 and take advantage of available guidance provided by these authorities.
2.9 Controllers shall regularly review the results of the assessment process.
2.10 Controllers shall document the assessment and the solutions referred to in paragraph 2.5.
2.11 The measures adopted by controllers to mitigate the risks referred to in paragraph 2.5 should be taken into account in the evaluation of possible administrative sanctions.
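To make the “by default” idea in paragraph 2.6 concrete, here is a minimal Python sketch - purely illustrative, since the Guidelines prescribe no particular implementation, and the field names and purposes are invented - of a default setting under which only the personal information necessary for a stated purpose is processed:

# Hypothetical sketch of "data protection by default": only fields on an
# allow-list for the stated purpose are processed; everything else is
# dropped unless the list is deliberately widened. Field names and purposes
# are invented for illustration.

NECESSARY_FIELDS = {
    "billing": {"customer_id", "amount", "invoice_date"},
    "analytics": {"amount", "invoice_date"},  # no direct identifiers by default
}

def minimise(record, purpose):
    """Return only the fields necessary for the stated purpose."""
    allowed = NECESSARY_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {"customer_id": "c-42", "amount": 99.0,
          "invoice_date": "2017-01-26", "postcode": "2600"}
print(minimise(record, "analytics"))  # {'amount': 99.0, 'invoice_date': '2017-01-26'}

The point of such a design is that over-collection has to be an explicit decision rather than the path of least resistance.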
3. Purpose limitation and transparency
3.1 Personal data shall be processed for specified and legitimate purposes and not used in a way incompatible with those purposes. Personal data should not be further processed in a way that the data subject might consider unexpected, inappropriate or otherwise objectionable. Exposing data subjects to risks different from or greater than those contemplated by the initial purposes could be considered a case of further processing of data in an unexpected manner.
3.2 Given the transformative nature of the use of Big Data and in order to comply with the requirement of free, specific, informed and unambiguous consent and the principles of purpose limitation, fairness and transparency, controllers should also identify the potential impact on individuals of the different uses of data and inform data subjects about this impact.
3.3 According to the principle of transparency of data processing, the results of the assessment process described in Section IV.2 should be made publicly available, without prejudice to secrecy safeguarded by law. In the presence of such secrecy, controllers provide any confidential information in a separate annex to the assessment report. This annex shall not be public, but may be accessed by the supervisory authorities.
4. By-design approach
4.1 On the basis of the assessment process described in Section IV.2, controllers and, where applicable, processors shall adopt adequate by-design solutions at the different stages of the processing of Big Data.
4.2 Controllers and, where applicable, processors should carefully consider the design of their data processing, in order to minimise the presence of redundant or marginal data, avoid potential hidden data biases and the risk of discrimination or negative impact on the rights and fundamental freedoms of data subjects, in both the collection and analysis stages.
4.3 When it is technically feasible, controllers and, where applicable, processors should test the adequacy of the by-design solutions adopted on a limited amount of data by means of simulations, before their use on a larger scale. This would make it possible to assess the potential bias of the use of different parameters in analysing data and provide evidence to minimise the use of information and mitigate the potential negative outcomes identified in the risk-assessment process described in Section IV.2.
4.4 Regarding the use of sensitive data, by-design solutions shall be adopted to prevent, as far as possible, non-sensitive data from being used to infer sensitive information and, where such inferences are drawn, to extend to the inferred data the same safeguards as apply to sensitive data.
4.5 Pseudonymisation measures, which do not exempt from the application of relevant data protection principles, can reduce the risks to data subjects.
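Paragraph 4.5 is worth pausing on. As a rough sketch of what a pseudonymisation step might look like in practice (keyed hashing is one common technique, not anything mandated by the Guidelines; the key-management arrangement here is assumed):

import hashlib
import hmac

# The key must be held separately from the pseudonymised data set; whoever
# holds it can re-link pseudonyms to individuals, which is why (per 4.5)
# pseudonymised data remain subject to data protection principles.
SECRET_KEY = b"keep-this-key-apart-from-the-data"  # assumed key management

def pseudonymise(identifier):
    """Replace a direct identifier with a stable keyed-hash pseudonym,
    so records about the same person stay linkable for analysis."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com")[:16])  # stable, non-obvious token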
5. Consent
5.1 The free, specific, informed and unambiguous consent shall be based on the information provided to the data subject according to the principle of transparency of data processing. Given the complexity of the use of Big Data, this information shall encompass the outcome of the assessment process described in Section IV.2 and might also be provided by means of an interface which simulates the effects of the use of data and its potential impact on the data subject, in a learn-from-experience approach.
5.2 When data have been collected on the basis of the data subject’s consent, controllers and, where applicable, processors shall provide easy and user-friendly technical ways for data subjects to react to data processing incompatible with the initial purposes and withdraw their consent.
5.3 Consent is not freely given if there is a clear imbalance of power between the data subject and the controller, which affects the data subject’s decisions with regard to the processing. The controller should demonstrate that this imbalance does not exist or does not affect the consent given by the data subject.
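As for the “easy and user-friendly technical ways” to withdraw consent in paragraph 5.2, a minimal sketch (an invented interface, nothing from the Guidelines themselves) might be no more than a per-purpose registry in which withdrawal is a single call checked before any further processing:

class ConsentRegistry:
    """Toy per-purpose consent store: granting and withdrawing are equally
    easy, and processing is gated on the current state."""

    def __init__(self):
        self._consents = {}  # (subject_id, purpose) -> bool

    def grant(self, subject_id, purpose):
        self._consents[(subject_id, purpose)] = True

    def withdraw(self, subject_id, purpose):
        # Withdrawal is a single unconditional call, mirroring grant().
        self._consents[(subject_id, purpose)] = False

    def may_process(self, subject_id, purpose):
        return self._consents.get((subject_id, purpose), False)

registry = ConsentRegistry()
registry.grant("subject-1", "profiling")
registry.withdraw("subject-1", "profiling")
print(registry.may_process("subject-1", "profiling"))  # False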
6. Anonymisation
6.1 As long as data enables the identification or re-identification of individuals, the principles of data protection are to be applied.
6.2 The controller should assess the risk of re-identification taking into account the time, effort or resources needed in light of the nature of the data, the context of their use, the available re-identification technologies and related costs. Controllers should demonstrate the adequacy of the measures adopted to anonymise data and to ensure the effectiveness of the de-identification.
6.3 Technical measures may be combined with legal or contractual obligations to prevent possible reidentification of the persons concerned.
6.4 Controllers shall regularly review the assessment of the risk of re-identification, in the light of the technological development with regard to anonymisation techniques.
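One measurable input to the re-identification risk review in paragraphs 6.2 and 6.4 is k-anonymity: the smallest number of records sharing any combination of quasi-identifiers in a release. It is only one coarse indicator among the factors 6.2 lists (time, effort, cost, available techniques), but the sketch below, on invented data, shows the kind of check a periodic review could automate:

from collections import Counter

def smallest_group(rows, quasi_identifiers):
    """Return the data set's k: the minimum number of records sharing the
    same combination of quasi-identifier values. A small k means some
    individuals are nearly unique and easier to re-identify."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

released = [
    {"postcode": "2600", "birth_year": 1970, "sex": "F"},
    {"postcode": "2600", "birth_year": 1970, "sex": "F"},
    {"postcode": "2601", "birth_year": 1988, "sex": "M"},
]
print(smallest_group(released, ["postcode", "birth_year", "sex"]))  # 1 -> risky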
7. The role of human intervention in Big Data-supported decisions
7.1 The use of Big Data should preserve the autonomy of human intervention in the decision-making process.
7.2 Decisions based on the results provided by Big Data analytics should take into account all the circumstances concerning the data and not be based on merely de-contextualised information or data processing results.
7.3 Where decisions based on Big Data might affect individual rights significantly or produce legal effects, a human decision-maker should, upon request of the data subject, provide her or him with the reasoning underlying the processing, including the consequences for the data subject of this reasoning.
7.4 On the basis of reasonable arguments, the human decision-maker should be allowed the freedom not to rely on the result of the recommendations provided using Big Data.
7.5 Where there are indications from which it may be presumed that there has been direct or indirect discrimination based on Big Data analysis, controllers and processors should demonstrate the absence of discrimination.
7.6 The subjects that are affected by a decision based on Big Data have the right to challenge this decision before a competent authority.
8. Open data
8.1 Given the availability of Big Data analytics, public and private entities should carefully consider their open data policies concerning personal data, since open data might be used to draw inferences about individuals and groups.
8.2 When data controllers adopt open data policies, the assessment process described in Section IV.2 should take into account the effects of merging and mining different data belonging to different open data sets, also in light of the provisions referred to in paragraph 6.
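The risk paragraph 8.2 has in mind is essentially the linkage attack: two releases that are innocuous on their own can single people out once joined on shared quasi-identifiers. A toy illustration, with invented data:

# Invented data: neither release pairs a name with the sensitive attribute,
# but joining them on shared quasi-identifiers re-identifies the record.
voter_roll = [
    {"name": "J. Doe", "postcode": "2600", "birth_year": 1970},
]
health_release = [
    {"postcode": "2600", "birth_year": 1970, "condition": "diabetes"},
]

for v in voter_roll:
    for h in health_release:
        if (v["postcode"], v["birth_year"]) == (h["postcode"], h["birth_year"]):
            print(v["name"], "->", h["condition"])  # J. Doe -> diabetes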
9. Education
To help individuals understand the implications of the use of information and personal data in the Big Data context, the Parties should consider information and digital literacy as an essential educational skill.

Critical Infrastructure

Amid noise about cyberwar (a nice diversion from policy drift at the Commonwealth level), the Treasurer and Attorney-General have announced the establishment of "a dedicated centre to manage the complex and evolving national security risks to Australia's critical infrastructure".

The new Critical Infrastructure Centre within the Attorney-General's Department is promoted in the following terms
With increased privatisation, supply chain arrangements being outsourced and offshored, and the shift in our international investment profile, Australia's national critical infrastructure is more exposed than ever to sabotage, espionage and coercion.
We need to manage these risks by adopting a coordinated and strategic framework. This challenge is not something the Commonwealth can address alone.
This is why the Turnbull Government has established the Critical Infrastructure Centre, within the Attorney-General's Department, so that all levels of government, owners and operators can work together to identify and manage these risks.
The Centre will develop coordinated, whole-of-government national security risk assessments and advice to support government decision-making on investment transactions. It will also provide greater certainty and clarity to investors and industry on the types of assets that will attract national security scrutiny.
While the Centre's initial focus will be on the most critical assets in our electricity, water and ports sectors, the Government will consult with states, territories, industry and investors to consider what other assets require attention.
The Centre will also develop and maintain a critical assets register that will enable a consolidated view of critical infrastructure ownership in high risk sectors across the country. This will help to proactively manage the national security risks that can arise from operational and procurement strategies.
As a first step, the Centre will soon publicly release a discussion paper outlining the challenges we face and seeking views on additional measures that could assist.
It appears that the Centre will involve representatives from the Australian Signals Directorate, ASIO, Treasury and other Australian government agencies, consulting with state and territory governments, regulators and private asset owners.

The register of critical infrastructure assets will apparently draw on assessments of both the cybersecurity and the physical security of assets, with the expectation that the Centre will be able to identify vulnerable assets and measures to mitigate national security concerns in the context of the sale of those assets, e.g. privatisation of energy grids or sale of telecommunications infrastructure to China.

The Foreign Investment Review Board (FIRB) - under the Treasurer - will retain responsibility for assessing foreign investment applications on a case by case basis, drawing on the Register in assessing applications involving critical infrastructure.

The register will apparently not be publicly available.

Establishment of the Centre will supposedly have the following key benefits -
  • federal, state and private vendors will be able to identify, at an early stage, when national interest concerns are likely to arise in relation to infrastructure asset sales (given that relevant assets will be included on the register);
  • foreign investors will have greater certainty about whether national security concerns are likely to arise during the FIRB foreign investment review process, for example assisting informed decisions about whether to proceed with a bid and about measures to mitigate national security concerns.