13 June 2018

AI Crime and Ethics

'Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions' by Thomas King, Nikita Aggarwal, Mariarosaria Taddeo and Luciano Floridi comments
 Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, which we term AI-Crime (AIC). We already know that AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement and policy-makers with a synthesis of the current problems, and a possible solution space.
In the UK, the Department for Digital, Culture, Media and Sport has released a short consultation paper regarding a new national Centre for Data Ethics and Innovation.

It states:
● The use of data and artificial intelligence (AI) is set to enhance our lives in powerful and positive ways. We want the UK to be at the forefront of global efforts to harness data and artificial intelligence as a force for good.
● For this, our businesses, citizens and public sector need clear rules and structures that enable safe and ethical innovation in data and AI. The UK already benefits from well-established and robustly enforced personal data laws, as well as wider regulations that guide how data-driven activities and sectors can operate.
● However, advances in the ways we use data are giving rise to new and sometimes unfamiliar economic and ethical issues. We need to make sure we have the governance in place to address these rapidly evolving issues; otherwise we risk losing confidence amongst the public and holding businesses back from valuable innovation.
● This is why we are establishing a new Centre for Data Ethics and Innovation. It will identify the measures needed to strengthen and improve the way data and AI are used and regulated. This will include articulating best practice and advising on how we address potential gaps in regulation. The Centre will not, itself, regulate the use of data and AI; its role will be to help ensure that those who govern and regulate the use of data across sectors do so effectively.
● The Centre will operate by drawing on evidence and insights from across regulators, academia, the public and business. It will translate these into recommendations and actions that deliver direct, real-world impact on the way that data and AI are used. The Centre will have a unique role in the landscape, acting as the authoritative source of advice to government on the governance of data and AI.
● Across its work, the Centre will seek to deliver the best possible outcomes for society from the use of data and AI. This includes supporting innovative and ethical uses of data and AI. These objectives will be mutually reinforcing: by ensuring data and AI are used ethically, the Centre will promote trust in these technologies, which will in turn help to drive the growth of responsible innovation and strengthen the UK’s position as one of the most trusted places in the world for data-driven businesses to invest in.
● We propose that the Centre acts through:
a. analysing and anticipating gaps in the governance landscape
b. agreeing and articulating best practice to guide ethical and innovative uses of data
c. advising Government on the need for specific policy or regulatory action 
● Understanding the public’s views, and acting on them, will be at the heart of the Centre’s work, as well as responding to and seeking to shape the international debate.
● We recognise that the issues in relation to data use and AI are complex, fast-moving and far-reaching, and the Centre itself, as well as the advice it delivers, will need to be highly dynamic and responsive to shifting developments and associated governance implications.
● To enshrine and strengthen the independent advisory status of the Centre, we will seek to place it on a statutory footing as soon as possible. This will be critical in building the Centre’s long-term capacity, independence and authority.
● This consultation seeks views on the way the Centre will operate and its priority areas of work. We want to ensure the Centre adds real value and builds confidence and clarity for businesses and citizens. We will therefore engage extensively with all those who have an interest and stake in the way data use and AI are governed and regulated. This includes citizens, businesses, regulators, local and devolved authorities, academia and civil society.
'The Other Side of Autonomous Weapons: Using Artificial Intelligence to Enhance Precautions in Attack' by Peter Margulies in Eric Talbot Jensen (ed.), The Impact of Emerging Technologies on the Law of Armed Conflict (Oxford University Press, 2018) comments
The role of autonomy and artificial intelligence (AI) in armed conflict has sparked heated debate. The resulting controversy has obscured the benefits of autonomy and AI for compliance with international humanitarian law (IHL). Compliance with IHL often hinges on situational awareness: information about a possible target’s behavior, nearby protected persons and objects, and conditions that might compromise the planner’s own perception or judgment. This paper argues that AI can assist in developing situational awareness technology (SAT) that will make target selection and collateral damage estimation more accurate, thereby reducing harm to civilians. 
SAT complements familiar precautionary measures such as taking additional time and consulting with more senior officers. These familiar precautions are subject to three limiting factors: contingency, imperfect information, and confirmation bias. Contingency entails an unpredictable turn of events, such as the last-minute entrance of civilians into a targeting frame. Imperfect information involves relevant data that is inaccessible to the planner of an attack. For example, an attack in an urban area may damage civilian objects that are necessary for health and safety, such as sewer lines. Finally, confirmation bias entails the hardening of preconceived theories and narratives. 
SAT’s ability to rapidly assess shifting variables and discern patterns in complex data can address perennial problems with targeting such as the contingent appearance of civilians at a target site or the risk of undue damage to civilian infrastructure. Moreover, SAT can help diagnose flaws in human targeting processes caused by confirmation bias. This Article breaks down SAT into three roles. Gatekeeper SAT ensures that operators have the information they need. Cancellation SAT can respond to contingent events, such as the unexpected presence of civilians. The most advanced system, behavioral SAT, can identify flaws in the targeting process and remedy confirmation bias. In each of these contexts, SAT can help fulfill IHL’s mandate of “constant care” in the avoidance of harm to civilian persons and objects.