12 September 2019

Liability and the Reasonable Computer

Should the reasonable computer join the reasonable man (aka the Man on the Clapham Omnibus, the Man on the Bondi Tram or the Woman in the Bondi Taxi)?

'The Reasonable Computer: Disrupting the Paradigm of Tort Liability' by Ryan Abbott in (2018) 86 George Washington Law Review comments
Artificial intelligence is part of our daily lives. Whether working as chauffeurs, accountants, or police, computers are taking over a growing number of tasks once performed by people. As this occurs, computers will also cause the injuries inevitably associated with these activities. Accidents happen, and now computer-generated accidents happen. The recent fatality involving Tesla's autonomous driving software is just one example in a long series of "computer-generated torts."
Yet hysteria over such injuries is misplaced. In fact, machines are, or at least have the potential to be, substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than human drivers. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. 
Under current legal frameworks, suppliers of computer tortfeasors are likely strictly responsible for their harms. This Article argues that where a supplier can show that an autonomous computer, robot, or machine is safer than a reasonable person, the supplier should be liable in negligence rather than strict liability. The negligence test would focus on the computer's act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product. Negligence-based liability would incentivize automation when doing so would reduce accidents, and it would continue to reward suppliers for improving safety.
More importantly, principles of harm avoidance suggest that once computers become safer than people, human tortfeasors should no longer be measured against the standard of the hypothetical reasonable person that has been employed for hundreds of years. Rather, individuals should be judged against computers. To appropriate the immortal words of Justice Holmes, we are all "hasty and awkward" compared to the reasonable computer.
Abbott argues
An automation revolution is coming, and it is going to be hugely disruptive. Ever cheaper, faster, and more sophisticated computers are able to do the work of people in a wide variety of fields and on an unprecedented scale. They may do this at a fraction of the cost of existing workers, and in some instances, they already outperform their human competition. Today's automation is not limited to manual labor; modern machines are already diagnosing disease, conducting legal due diligence, and providing translation services. For better or worse, automation is the way of the future - the economics are simply too compelling for any other outcome. But what of the injuries these automatons will inevitably cause? What happens when a machine fails to diagnose a cancer, ignores an incriminating email, or inadvertently starts a war? How should the law respond to computer-generated torts? 
Tort law has answers to these questions based on a system of common law that has evolved over centuries to deal with unintended harms. The goals of this body of law are many: to reduce accidents, promote fairness, provide a peaceful means of dispute resolution, reallocate and spread losses, promote positive social values, and so forth. Whether tort law is the best means for achieving all of these goals is debatable, but jurists are united in considering accident reduction as one of the central, if not the primary, aims of tort law. By creating a framework for loss shifting from injured victims to tortfeasors, tort law deters unsafe conduct. A purely financially motivated rational actor will reduce potentially harmful activity to the extent that the cost of accidents exceeds the benefits of the activity. This liability framework has far-reaching and sometimes complex impacts on behavior. It can either accelerate or impede the introduction of new technologies. 
Most injuries people cause are evaluated under a negligence standard where unreasonable conduct establishes liability. When computers cause the same injuries, however, a strict liability standard applies. This distinction has financial consequences and a corresponding impact on the rate of technology adoption. It discourages automation, because machines incur greater liability than people. It also means that in cases where automation will improve safety, the current framework to prevent accidents now has the opposite effect. 
This Article argues that the acts of autonomous computer tortfeasors should be evaluated under a negligence standard, rather than a strict liability standard, in cases where an autonomous computer is occupying the position of a reasonable person in the traditional negligence paradigm and where automation is likely to improve safety. For the purposes of ultimate financial liability, the computer's supplier (e.g., manufacturers and retailers) should still be responsible for satisfying judgments under standard principles of product liability law. 
This Article employs a functional approach to distinguish an autonomous computer, robot, or machine from an ordinary product. Society's relationship with technology has changed. Computers are no longer just inert tools directed by individuals. Rather, in at least some instances, computers are given tasks to complete and determine for themselves how to complete those tasks. For instance, a person could instruct a self-driving car to take them from point A to point B, but would not control how the machine does so. By contrast, a person driving a conventional vehicle from point A to point B controls how the machine travels. This distinction is analogous to the distinction between employees and independent contractors, which centers on the degree of control and independence. As this Article uses such terms, autonomous machines or computer tortfeasors control the means of completing tasks, regardless of their programming. 
The most important implication of this line of reasoning is that just as computer tortfeasors should be compared to human tortfeasors, so too should humans be compared to computers. Once computers become safer than people and practical to substitute, computers should set the baseline for the new standard of care. This means that human defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but what a computer would have done. In time, as computers come to increasingly outperform people, this rule would mean that someone's best efforts would no longer be sufficient to avoid liability. It would not mandate automation in the interests of freedom and autonomy, but people would engage in certain activities at their own peril. Such a rule is entirely consistent with the rationale for the objective standard of the reasonable person, and it would benefit the general welfare. Eventually, the continually improving "reasonable computer" standard should even apply to computer tortfeasors, such that computers will be held to the standard of other computers. By this time, computers will cause so little harm that the primary effect of the standard would be to make human tortfeasors essentially strictly liable for their harms. 
This Article uses self-driving cars as a case study to demonstrate the need for a new torts paradigm. There is public concern over the safety of self-driving cars, but a staggering ninety-four percent of crashes involve human error. These contribute to over 37,000 fatalities a year in the United States at a cost of about $242 billion. Automated vehicles may already be safer than human drivers, but if not, they will be soon. Shifting to negligence would accelerate the adoption of driverless technologies, which, according to a report by the consulting firm McKinsey and Company, may otherwise not be widespread until the middle of the century. 
Automated vehicles may be the most prominent and disruptive upcoming example of robots changing society, but this analysis applies to any context with computer tortfeasors. For instance, IBM's flagship artificial intelligence system, Watson, is working with clinicians at Memorial Sloan Kettering to analyze patient medical records and provide evidence-based cancer treatment options. It even provides supporting literature to human physicians to support its recommendations. Like self-driving cars, Watson does not need to be perfect to improve safety - it just needs to be better than people. In that respect, the bar is unfortunately low. Medical error is one of the leading causes of death. A 2016 study in the British Medical Journal reported that it is the third leading cause of death in the United States, ranking just behind cardiovascular disease and cancer. Some companies already claim their artificial intelligence systems outperform doctors, and that claim is not hard to swallow. Why should a computer not be able to outperform doctors when the computer can access the entire wealth of medical literature with perfect recall, benefit from the experience of directly having treated millions of patients, and be immune to fatigue? 
This Article is divided into three Parts. Part I provides background on the historical development of injuries caused by machines and how the law has evolved to address these harms. It discusses the role of tort law in injury prevention and the development of negligence and strict product liability. Part II argues that while some forms of automation should prevent accidents, tort law may act as a deterrent to adopting safer technologies. To encourage automation and improve safety, this Part proposes a new categorization of "computer-generated torts" for a subset of machine injuries. This would apply to cases in which an autonomous computer, robot, or machine is occupying the position of a reasonable person in the traditional negligence paradigm and where automation is likely to improve safety. This Part contends that the acts of computer tortfeasors should be evaluated under a negligence standard rather than under principles of product liability, and it goes on to propose rules for implementing the system. 
Finally, Part III argues that once computer operators become safer than people and automation is practical, the "reasonable computer" should become the new standard of care. It explains how this standard would work, argues that the reasonable computer standard works better than a reasonable person using an autonomous machine, and considers when the standard should apply to computer tortfeasors. At some point, computers will be so safe that the standard's most significant effect would be to internalize the cost of accidents on human tortfeasors. 
This Article is focused on the effects of automation on accidents, but automation implicates a host of social concerns. It is important that policymakers act to ensure that automation benefits everyone. Automation may increase productivity and wealth, but it may also contribute to unemployment, financial disparities, and decreased social mobility. These and other concerns are certainly important to consider in the automation discussion, but tort liability may not be the best mechanism to address every issue related to automation.

Obscurity

'The ‘Right to be Forgotten’ Online within G20 Statutory Data Protection Frameworks' by David Erdos and Krzysztof Garstka comments
Although it is the EU’s General Data Protection Regulation and the Google Spain judgment which have brought the concept of the ‘right to be forgotten’ online to the fore, this paper argues that its basic underpinnings are present in the great majority of G20 statutory frameworks. Whilst China, India, Saudi Arabia and the United States remain exceptional cases, fifteen out of nineteen (almost 80%) of G20 countries now have fully-fledged statutory data protection laws. By default, almost all of these laws empower individuals to challenge the continued dissemination of personal data not only when such data may be inaccurate but also on wider legitimacy grounds. Moreover, eleven of these countries have adopted statutory ‘intermediary’ shields which could help justify why certain online platforms may be required to respond to well-founded ex post challenges even if they lack most ex ante duties here. Nevertheless, the precise scope of many data protection laws online remains opaque and the relationship between such laws and freedom of expression is often unsatisfactory. Despite this, it is argued that G20 countries and G20 Data Protection Authorities should strive to achieve proportionate and effective reconciliation between online freedom of expression and ex post data protection claims, both through careful application of existing law and ultimately through and under new legislative initiatives.

Relators and crimes against humanity

In Daniel Taylor v Attorney-General of the Commonwealth [2019] HCA 30 the High Court has responded to questions stated in a special case of interest to people concerned with crimes against humanity.

A majority of the Court held that it was unnecessary to answer the substantive questions stated in the case in order to determine the plaintiff's entitlement to relief. The basis was that s 268.121(2) of the Criminal Code (Cth) precludes private prosecution of an offence against Div 268 of the Criminal Code.

Under s 13(a) of the Crimes Act 1914 (Cth) any person may institute proceedings for the commitment for trial of a person in respect of an indictable offence against the law of the Commonwealth, unless the contrary intention appears in the Act creating the offence. Section 268.121(1) of the Criminal Code provides that proceedings for an offence against Div 268 of the Criminal Code must not be commenced without the written consent of the Attorney-General of the Commonwealth. Section 268.121(2) of the Criminal Code provides that an offence against Div 268 "may only be prosecuted in the name of the Attorney-General".

 On 16 March 2018 Taylor lodged a charge-sheet and draft summons at the Melbourne Magistrates' Court alleging that Aung San Suu Kyi (Minister for the Office of the President and Foreign Minister of the Republic of the Union of Myanmar) had committed a crime against humanity in contravention of s 268.11 of the Criminal Code, a Commonwealth indictable offence. The offence appears in Div 268 of the Criminal Code and is unable to be heard and determined summarily.

The lodgement purportedly relied on s 13(a) of the Crimes Act.

On the same day Taylor requested the Commonwealth Attorney-General's consent under s 268.121(1) of the Criminal Code to the commencement of the prosecution. The Attorney-General did not consent. On 23 March 2018 Taylor accordingly commenced a proceeding against the Attorney-General in the original jurisdiction of the High Court, seeking to quash the decision not to consent to the commencement of the prosecution and to compel the Attorney-General to reconsider the request for consent.

In June this year a majority of the Court held that, by providing that an offence against Div 268 of the Criminal Code "may only be prosecuted in the name of the Attorney‑General", s 268.121(2) of the Criminal Code provides a contrary intention for the purpose of s 13(a) of the Crimes Act so as to preclude the private prosecution of an offence against that Division.

A majority of the Court held that the Attorney‑General's decision to refuse consent to the plaintiff's proposed prosecution of Suu Kyi was the only decision legally open. As a result the relief sought by the plaintiff could only be refused.

Edelman J states
I would have allowed this special case to progress to further hearing past the preliminary issue. A relator prosecution brought in the name of the Attorney-General, and controlled by the Attorney-General, is a prosecution "in the name of the Attorney-General". The particular international context in which Div 268 was enacted is consistent with this conclusion.
The specific questions and answers were
1.  Is the defendant's decision to refuse to consent under s 268.121 of the Criminal Code (Cth) to the prosecution of Ms Suu Kyi insusceptible of judicial review on the grounds raised in the amended application? 
Answer: Unnecessary to answer. 
2. If "no" to question 1, did the defendant make a jurisdictional error in refusing consent under s 268.121 of the Criminal Code to the prosecution of Ms Suu Kyi on the ground that Australia was obliged under customary international law to afford an incumbent foreign minister absolute immunity from Australia's domestic criminal jurisdiction (the asserted immunity) for one or more of the following reasons:
a. Under customary international law as at the date of the defendant's decision, the asserted immunity did not apply in a domestic criminal prosecution in respect of crimes defined in the Rome Statute? 
b. By reason of: the declaration made by Australia upon ratifying the Rome Statute; Australia's treaty obligations under the Rome Statute; and/or the enactment of the International Criminal Court Act 2002 (Cth) and the International Criminal Court (Consequential Amendments) Act 2002 (Cth), the obligations assumed by Australia under international law were such that the defendant was not entitled to refuse, on the basis of the asserted immunity, to consent to the domestic prosecution of Ms Suu Kyi in respect of crimes defined in the Rome Statute? 
c. By reason of: the declaration made by Australia upon ratifying the Rome Statute; Australia's treaty obligations under the Rome Statute; the enactment of the International Criminal Court Act and the International Criminal Court (Consequential Amendments) Act; and/or the Diplomatic Privileges and Immunities Act 1967 (Cth), the Consular Privileges and Immunities Act 1972 (Cth) and the Foreign States Immunities Act 1985 (Cth), the defendant was not entitled under Australian domestic law to refuse, on the basis of the asserted immunity, to consent to the domestic prosecution of Ms Suu Kyi in respect of crimes defined in the Rome Statute? 
Answer: Does not arise. 
3. If "no" to question 1, did the defendant make a jurisdictional error in refusing consent to the prosecution of Ms Suu Kyi on the ground that he failed to afford the plaintiff procedural fairness? 
Answer: Does not arise. 
4. What relief, if any, should be granted? 
Answer: None. The amended application should be dismissed with costs. 
5. Who should bear the costs of the special case? 
Answer: The plaintiff.

Algorithmics

'Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms' by Jake Goldenfein in Office of the Victorian Information Commissioner (ed), Closer to the Machine: Technical, Social, and Legal aspects of AI (Office of the Victorian Information Commissioner, 2019) comments
There has been a great deal of research on how to achieve algorithmic accountability and transparency in automated decision-making systems - especially for those used in public governance. However, good accountability in the implementation and use of automated decision-making systems is far from simple. It involves multiple overlapping institutional, technical, and political considerations, and becomes all the more complex in the context of machine learning based, rather than rule based, decision systems. This chapter argues that relying on human oversight of automated systems, so called ‘human-in-the-loop’ approaches, is entirely deficient, and suggests addressing transparency and accountability during the procurement phase of machine learning systems - during their specification and parameterisation - is absolutely critical. In a machine learning based automated decision system, the accountability typically associated with a public official making a decision has already been displaced into the actions and decisions of those creating the system - the bureaucrats and engineers involved in building the relevant models, curating the datasets, and implementing a system institutionally. But what should those system designers be thinking about and asking for when specifying those systems? 
There are a lot of accountability mechanisms available for system designers to consider, including new computational transparency mechanisms, ‘fairness’ and non-discrimination, and ‘explainability’ of decisions. If an official specifies for a system to be transparent, fair, or explainable, however, it is important that they understand the limitations of such a specification in the context of machine learning. Each of these approaches is fraught with risks, limitations, and the challenging political economy of technology platforms in government. Without understanding the complexities and limitations of those accountability and transparency ideas, these approaches risk disempowering public officials in the face of private industry technology vendors, who use trade secrets and market power in deeply problematic ways, as well as producing deficient accountability outcomes. This chapter therefore outlines the risks associated with corporate cooption of those transparency and accountability mechanisms, and suggests that significant resources must be invested in developing the necessary skills in the public sector for deciding whether a machine learning system is useful and desirable, and how it might be made as accountable and transparent as possible.

08 September 2019

Borders

'The many lives of border automation: Turbulence, coordination and care' by Debbie Lisle and Mike Bourne in (2019) Social Studies of Science comments
Automated borders promise instantaneous, objective and accurate decisions that efficiently filter the growing mass of mobile people and goods into safe and dangerous categories. We critically interrogate that promise by looking closely at how UK and European border agents reconfigure automated borders through their sense-making activities and everyday working practices. We are not interested in rehearsing a pro- vs. anti-automation debate, but instead illustrate how both positions reproduce a powerful anthropocentrism that effaces the entanglements and coordinations between humans and nonhumans in border spaces. Drawing from fieldwork with customs officers, immigration officers and airport managers at a UK and a European airport, we illustrate how border agents navigate a turbulent ‘cycle’ of automation that continually overturns assumed hierarchies between humans and technology. The coordinated practices engendered by institutional culture, material infrastructures, drug loos and sniffer dogs cannot be captured by a reductive account of automated borders as simply confirming or denying a predetermined, data-driven in/out decision.
The authors argue
 Since the first e-gates were deployed at Faro and Manchester airports in 2008 (Foreign & Commonwealth Office, Home Office and Border Force, 2017; Frontex, 2014), air, land and sea borders in Europe and the UK have been shaped by an intense drive towards automation. As part of the European Union (EU)’s 2013 Smart Borders package, millions of Euro have been invested in technology projects such as ‘ABC4EU’ and ‘FASTPASS’, which use e-gates to bring together e-passports, ‘live’ biometrics (e.g. photographs) and pre-existing databases (e.g. the Registered Travellers Programme) (ABC4EU, 2019; FASTPASS, 2019; see also iBorderCtrl, 2019). Similar technologies have been rolled out in the UK: By the end of 2017 there were 239 e-gates in operation in all major UK airports (Foreign & Commonwealth Office, Home Office and Border Force, 2017). Globally, the market for Automated Border Control e-gates and kiosks alone is expected to grow to $1.58 billion by 2023 (MarketsandMarkets.com, 2017). This drive towards automation is constituted by two interrelated modes of filtering: (i) the databases and sophisticated algorithms capable of gathering, analysing and comparing massive amounts of data on the mobility of people and goods, and (ii) the technologies used in border spaces that translate pre-emptively generated data to make an instantaneous in/out border decision. The widespread embrace of border automation in the UK and EU is underscored by a powerful fantasy that integrates these two modes of filtering: A perfect in/out decision is produced when the algorithms pre-emptively construct a data double that is ‘safe’ or ‘dangerous’, and the automated technology at the border (e.g. the e-gate or the handheld scanner) either confirms or denies that identity. Amidst growing volumes of passengers and freight, the allure of automation emerges as the perfect resolution of tensions between mobility and security. This fantasy of border automation rests on three major claims. First, automated border decisions are instantaneous: Unlike human border guards who struggle to decide within an average of 12 seconds (Frontex and Ferguson, 2014), automated borders draw on the pre-emptive data collection and analytics to produce in/out decisions in a fraction of a second, thereby increasing the convenience of border crossing for predesignated travellers, baggage and freight. Second, border automation enhances the accuracy of decisions because they attach specific pregiven information harvested from large databases to specific bodies in specific sites. In other words, automated borders are the final confirmation that the bodies, bags and boxes in front of them align with the information in the databases. The accuracy provided by automated borders is guaranteed by the certainty and reproducibility of the data driving the decision. Data is stored and can therefore be accessed, rechecked and consulted to identify novel patterns that can aid prediction and ‘future proof’ the border. And finally, automated border decisions are objective and neutral: Because they draw on the algorithmic processing of huge amounts of data, they avoid the biases, prejudices and irritations of human border guards. In this sense, automated borders respect the rights of ‘safe’ persons (because they are not falsely identified), ‘safe’ goods (because they can proceed uninterrupted to their destination), and even suspect persons (because immigration forces have time to plan a humane arrest) (European Commission, 2016a). 
Drawing from a multisited and multinational ethnographic study that ran from 2014 to 2016, this article explores the extent to which this powerful fantasy of automation shapes (or indeed, doesn’t shape) the everyday practices and sense-making of our informants: customs officers, immigration officers and airport managers. We critically reflect on how these border agents at a major UK airport and a medium-sized European airport make sense of and interact with the automated technologies put in place to supposedly make their jobs of ‘bordering’ more efficient, accurate and objective. We know from critical border studies and critical security studies that the prevailing fantasy of automation reproduces a problematic anthropocentric landscape in which human operators are separated from the inert technologies they use for bordering (Glouftsios, 2017; Leese, 2015; Schouten, 2014; Sohn, 2016). We are interested in how that anthropocentrism is articulated and troubled in the sense-making and working practices of those using automated borders. In this article, we develop two related questions. First, we explore the extent to which the anthropocentrism underscoring this powerful fantasy of automation operates as a regulative ideal, how it governs the behaviours, practices, relations and imaginaries of those managing automated borders. Here, we build on Allen and Vollmer’s (2018) study of how UK border managers carefully traffic between believing in the promises of border technologies and being deeply suspicious of the machine’s ability to ‘read’ humans. We are particularly interested in the extent to which border agents feel trapped inside a ‘pro-automation’ vs. ‘anti-automation’ debate that forces them to staunchly defend either technology or humans. Certainly, we pay attention to how border agents often unthinkingly reproduce these polarised positions, though we are more interested in how they carefully recognise and acknowledge the limitations of such pregiven positions as they make sense of automation. Indeed, our interviews and observations revealed a great deal of anxiety over who or what is actually making the in/out border decision, and who or what is the best agent to do so. These moments of doubt and uncertainty, often expressed through frustration, loss and lament, lead to our second question, which engages the new working practices emerging as border agents work with, around and in proximity to automated borders. We are particularly interested in the coordinated actions, unexpected improvisations and creative work-arounds that are developing between humans, machines, and other nonhumans. To get a meaningful picture of these new practices, we telescope out from the specific automated technologies of the border to focus on the wider entanglements that are shaping supposedly ‘clean’ in/out border decisions. Through our interviews and observations, we uncovered a complex and expansive understanding of automation, which exceeds the simple and unidirectional flow from pre-emptive data-based filtering to the automated border technology that simply confirms or denies a pregiven decision. Here, we draw from critical work detailing the deterritorialised nature of borders, such as de Goede’s (2018) analysis of the ‘chains of translation’ that constitute the governing of suspicious financial transactions, and Jeandesboz’s (2016) account of the ‘chains of association’ that constitute border policy-making (see also Parker and Adler-Nissen, 2012; Popescu, 2015b). 
Thinking about automated borders through this radically deterritorialised landscape is important because it creates more space to consider questions of agency. Not only are airport managers and customs and immigration officers repositioned as active agents using technologies in creative, surprising and inventive ways, but the supposedly ‘inert’ technologies of bordering are understood as entities acting, exerting force and directly shaping in/out border decisions in ways that exceed a simple confirmation or denial of a pregiven decision. As our interviews and observations reveal, the multiple relations and attachments between these agents are producing new coordinated practices around automated borders that often confound the deep anthropocentrism underscoring the fantasy of automation
They conclude
The prevailing pro/anti debate over border automation would have us believe that in/out border decisions are the result of either superior technologies capable of translating pregiven data with more speed, accuracy and objectivity, or superior human capabilities such as intuition, experience and tradecraft offering more relevant translations of pregiven data in specific situations. But these two narratives share a crucial assumption: that proper, robust and reliable in/out border decisions come primarily from single actors – either automated technologies or sophisticated human agents. This article contests that deeply reductive ontology and looks instead at what kind of sense-making and working practices emerge when we approach border automation through a lens of entanglement. Our observations and interviews at two airports revealed a complex set of coordinated practices between some expected humans and machines (e.g. immigration officers and e-gates), as well as some unexpected other nonhuman actors (e.g. parking spaces, packing tape, sniffer dogs, cement walls, shit). We came to understand automated borders not as a single moment of decision where an e-gate or e-manifest confirms or denies entry based on pregiven data, but rather as an elongated set of coordinated practices that are irreducible to either human or technology. To be sure, there is much more research to be done on how these practices emerge and transform. For example, what kind of automated border appears in the dedicated training sessions for specific technologies, or the professional mentoring structures that sustain its use? What kind of coordinated practices emerge around the care, maintenance, fixing and cleaning of automated border technologies? And if our turbulent cycle of automation operates across airport space, what are the different intensities operative in each sector? Our purpose in reframing automated borders through their constitutive entanglements and emerging practices of coordination, is to reveal the profound contingency of in/out border decisions, no matter how automated those decisions purport to be. The insights we gleaned from our interviews and observations helped us to contest the isolation, instrumentality and purity of automated borders, and foreground the congregation of agents and multiplicity of ‘situated actions’ that are enrolled in these seemingly simple in/out decisions.

Disability

Disabled People's Organisations Australia - a coalition body encompassing People with Disability Australia (PWDA), Women With Disabilities Australia (WWDA), National Ethnic Disability Alliance (NEDA) and First Peoples Disability Network (Australia) (FPDN) - has released Joint Position Statement: A call for a rights-based framework for sexuality in the NDIS.

 The Statement notes that
The UN Convention on the Rights of Persons with Disabilities (CRPD), ratified by Australia in 2008, states that governments have an obligation to ensure that people with disability can enjoy rich and fulfilling lives equal to others in society.
The National Disability Insurance Scheme (NDIS) is underpinned by the rights enshrined in the CRPD, using a person-centred approach. It is designed to provide access to supports that are deemed “reasonable and necessary” to ensure people with disability are fully supported to live “ordinary” lives, equal to the rest of the community.
When launched, it was widely declared that “no person would be worse off.” The Australian Government promised that state and territory funded supports for people with disability would be maintained as people transitioned into the NDIS.
It goes on to note
Historically people with disability have been subjected to societal beliefs that we are either asexual or hypersexual, while constantly being denied full autonomy over our own bodies.
The NDIS has further perpetuated this stigma by failing to develop or produce a clear and comprehensive sexuality policy for NDIS participants that encompasses and supports individual sexual needs and goals at all life and development stages.
A sexuality policy should be positively framed and place sex, sexuality and relationships within the context of disability supports. The policy should include a broad range of goals an NDIS participant may seek to include in their NDIS plan. These goals might include: appropriate disability-inclusive sexuality and relationships education; information and resources to support individual learning needs; support for dating and social sexual engagements; access to adaptive sex toys; access to sex therapy or utilising sexual services from sex workers.
Our concern
We are concerned about the absence of a comprehensive policy framework on sexuality. A comprehensive NDIS policy recognises, encompasses and supports the types and range of professional support some people with disability may need to use to express their sexuality, and to have the opportunity for fulfilling sexual experiences in life.
The benefits of sexual expression for people with disability
People with disability choose to date, have casual sexual partners, enjoy loving partnerships, choose celibacy, marry, and decide to be parents or not, just like others in the community. There are also myriad ways people with disability can enjoy sexual expression.
The benefits of fulfilling sexual needs and goals can positively contribute to the overall quality of life and self-esteem for individuals, as well as meeting a range of other emotional, psychological, physical and social needs.
Additionally, some people with disability are in need of specific support to learn about their sexuality and sexual capacity after a significant injury, illness or sexual assault, increase their experience, knowledge and acceptance about changes in their own bodies and abilities, and to gain confidence and social skills to enjoy a positive sexuality.
Why support is needed
The professional services of a wide range of educators, including allied health professionals and sex therapists, can play an integral role in supporting an individual’s capacity to develop life skills necessary to engage in healthy and consenting sexual and romantic relationships.
Professional[1] and ethical codes of conduct[2] clearly state that sex therapists[3] are not allowed to touch their patients and clients in an intimate or sexual manner.[4]
This is in contrast to sex workers who can and do provide mutually consenting physical contact. While accessing services of sex workers may not be for everyone, this option should not be denied nor dismissed on the basis of disability, or the moral beliefs of third parties.
Sex workers, especially within Australia, have already been recognised as being able to provide professional sexual services for a wide range of people with disability. Their skill-set can complement sex education and sex therapy and allow an individual to practice, experience and enjoy a range of activities in a safe and supportive environment.[5]
Giving people with disability the right to exercise choice and control over the supports they need to achieve the goals they’ve identified is the primary objective the community expects the NDIS to deliver on.
Our call for change
The Commonwealth Government and the National Disability Insurance Agency (NDIA) are out of touch with current practices, and are unaware of the high levels of community support for people with disability to exercise choice and control over the supports we need to achieve the goals we identify.
Previously, state-based disability financial support systems allowed for people with disability to access sexual services according to their individual needs and goals.[6] Our sexual autonomy was supported through clear policy and procedures.
We were not meant to be worse off under the NDIS but this is one area where we are.
We call on the NDIA to develop a comprehensive sexuality policy to continue reasonable and necessary support for sexual expression through NDIS funding.