15 February 2019

Ends

'Artificial Intelligence Based Suicide Prediction' by Mason Marks in Yale Journal of Health Policy, Law, and Ethics (Forthcoming) comments 
Suicidal thoughts and behaviors are an international public health concern contributing to 800,000 annual deaths and up to 25 million nonfatal suicide attempts. In the United States, suicide rates have increased steadily for two decades, reaching 47,000 per year and surpassing annual motor vehicle deaths. This trend has prompted government agencies, healthcare systems, and multinational corporations to invest in tools that use artificial intelligence to predict and prevent suicide. This article is the first to describe the full landscape of these tools, the laws that apply to their operation, and the underexplored risks they pose to patients and consumers. 
AI-based suicide prediction is developing along two separate tracks: in "medical suicide prediction," AI analyzes data from patient medical records; in "social suicide prediction," AI analyzes consumer behavior derived from social media, smartphone apps, and the Internet of Things. Because medical suicide prediction occurs within the healthcare system, it is governed by laws such as the Health Insurance Portability and Accountability Act (HIPAA), which protects patient privacy; regulations such as the federal Common Rule, which protects the safety of human research subjects; and general principles of medical ethics such as autonomy, beneficence, and justice. Moreover, medical suicide prediction methods are published in peer-reviewed academic journals. In contrast, social suicide prediction typically occurs outside the healthcare system, where it is almost completely unregulated, and corporations often maintain their prediction methods as proprietary trade secrets. Due to this lack of transparency, little is known about their safety or effectiveness. Nevertheless, unlike medical suicide prediction, which is primarily experimental, social suicide prediction is deployed globally to affect people’s lives every day. 
Though AI-based suicide prediction may improve our understanding of suicide while potentially saving lives, it raises many risks that have been underexplored. The risks include stigmatization of people with mental illness, the transfer of sensitive health data to third parties such as advertisers and data brokers, unnecessary involuntary confinement, violent confrontations with police, exacerbation of mental health conditions, and paradoxical increases in suicide risk. After describing these risks, the article presents a policy framework for promoting safe, effective, and fair AI-based suicide predictions. The framework could be adopted voluntarily by companies that make suicide predictions or serve as a foundation for regulation in the US and abroad.
'Abolishing the Suicide Rule' by Alex B Long in (2019) 115(4) Northwestern University Law Review comments
Suicide is increasingly recognized as a public health issue. There are over 40,000 suicides a year in the US, making suicide the tenth-leading cause of death in the country. But societal attitudes on the subject remain decidedly mixed. Suicide is often closely linked to mental illness, a condition that continues to carry stigma and often triggers irrational fears and misunderstanding. For many, suicide remains an immoral act that flies in the face of strongly held religious principles. In some ways, tort law’s treatment of suicide mirrors these conflicting societal views. Tort law has long been reluctant to permit recovery in a wrongful death action from a defendant who is alleged to have caused the suicide of the decedent. In many instances, courts apply a strict rule of causation in suicide cases that has been dubbed ‘the suicide rule’ in one jurisdiction. While reluctance to assign liability to defendants whose actions are alleged to have resulted in suicide remains the norm in negligence cases, there has been a slight trend among court decisions away from singling out suicide cases for special treatment and toward an analytical framework that more closely follows traditional tort law principles. This Article argues that this trend is to be encouraged and that it is time for courts to largely abandon the special rules that have developed in suicide cases, which treat suicide as a superseding cause of a decedent’s death.