25 September 2020

Skimming and Survivor Fraud

In Islam v R [2020] NSWCCA 236 Tariqul Islam has been unsuccessful in an appeal after conviction for 'skimming' offences. 

Wilson J states that Islam had entered a plea of guilty in the Local Court and was committed for sentence with respect to the following offences: 

 An offence contrary to s 93T(1A) of the Crimes Act 1900 (NSW) that the applicant knowingly participated in a criminal group by directing the activities of the group, knowing that his participation contributed to criminal activity. This offence carries a maximum sentence of 10 years imprisonment; and

An offence contrary to s 192J of the same Act that the applicant dealt with identification information, being credit card information for over 550 persons, intending to commit fraud. This offence similarly carries a maximum sentence of 10 years imprisonment. 

The Crown's statement of agreed facts indicated that 

 The applicant was the “ringleader” of a group of men, all taxi drivers, who were involved in a scheme to “skim” the data from credit cards of unknowing taxi passengers, thereafter producing a clone of the victim’s credit card which could be used to withdraw cash or purchase goods. Between late August 2017 and late January 2018, the applicant directed the activities of the four other men who were involved, and many hundreds of thousands of dollars were fraudulently obtained by them.  

The applicant managed the scheme, recruiting taxi drivers to participate, instructing them in the use of a “skimming device”, cloning fraudulent credit cards, and subsequently directing the activities of his co-offenders. 

The applicant had possession and control of five devices known as “Ghost Terminals”. Ostensibly portable EFTPOS terminals, participating taxi drivers used the devices to collect a fare from a passenger who had used the taxi driven by the particular member of the group. In fact, the terminal did not process a charge on the passenger’s credit card in payment for the journey; instead, it “skimmed” or recorded the data contained on the magnetic strip on the card, which included the personal identification number, or “PIN”, necessary to access cash machines. 

The applicant’s practice was to distribute the Ghost Terminals to his four co-offenders, who used them to record the credit card details of passengers of the taxi service. The men would then meet at a pre-arranged location and return the terminals to the applicant. The applicant paid the drivers a fee for each card “skimmed”. The applicant used the data recorded by the drivers on the terminals to clone credit cards using the skimmed or stolen data. For all practical purposes, the cloned card functioned in the same way as the original credit card it copied, and the cards were then able to be used to withdraw cash or purchase goods. 

Electronic surveillance by police officers of the applicant recorded him instructing other group members in the use of cloned cards – which he provided to them – in automatic teller machines (“ATM”) to withdraw sums of money from the accounts of the victims of the group. The applicant was careful to use different ATMs for each group of transactions, and to instruct his co-offenders to make modest withdrawals of $500 or less to avoid exceeding any daily withdrawal limit. He also provided information as to the conduct of his co-offenders, to assist them to avoid “looking suspicious”. 

The applicant was followed by police officers on a number of occasions as he and a co-offender drove from place to place, using multiple cards in ATMs located around Sydney to withdraw sums of money. At the end of any particular day, the applicant counted the money stolen in the various transactions and paid the relevant co-offender a share of the fraudulent takings. He was observed to direct these outings on multiple occasions. This scheme was reflected by the s 93T(1A) offence. 

The drivers were progressively arrested by police until, on 26 January 2018, the applicant was arrested at his home in Marrickville. His home was searched. Police officers found a sum of cash on the applicant’s person, and a laptop computer which contained card cloning software that could be used to clone credit cards. The laptop also held the credit card information of 557 specific individuals, of which 98 sets of information had already been used to clone cards and make fraudulent cash withdrawals. The information was held by the applicant to facilitate the commission of further fraudulent activity (the s 192J offence). 

Electronic surveillance during the course of the investigation into the applicant’s conduct established that he had been stealing and cloning credit card data for the previous three years, and had fraudulently obtained between $250,000 and $300,000. 

Throughout the period that the applicant was directing these fraudulent activities, he was at liberty subject to conditional bail granted to him by the Supreme Court, he having been earlier charged with 19 counts of obtaining a financial advantage by deception, contrary to s 192E(2)(b) of the Crimes Act. He was in fact awaiting the completion of sentence proceedings with an intensive correction order (“ICO”) in contemplation when he was charged afresh. 

The matters were not able to be joined and dealt with together because the applicant did not enter pleas of guilty to the second set of charges until well after the finalisation of the first set. 

The applicant’s criminal history as it was before the sentencing court reveals that the applicant was convicted before the District Court of 11 of the outstanding sentence matters, with the balance of 8 offences taken into account pursuant to s 33 of the Crimes (Sentencing Procedure) Act 1999 (NSW) (the “CSP Act”). On 20 April 2018, an aggregate sentence of 2 years imprisonment was imposed upon him, to date from 19 December 2017 and expiring on 18 December 2019. A non-parole period (“NPP”) of 15 months imprisonment was specified, which concluded on 18 March 2019. The sentencing judge, his Honour Acting Judge Armitage, made a finding of special circumstances pursuant to s 44(2) of the CSP Act in the applicant’s favour, reducing the NPP by 3 months on the ordinary statutory ratio that would otherwise have seen the applicant serve a NPP of 18 months. Parole was made subject to the supervision of the Community Corrections Service, with a direction that the applicant accept drug rehabilitation and psychological services. 

The only other matter in the applicant’s criminal history was a conviction for common assault in December 2016, which was dealt with by way of a fine. 

Information relating to the 2018 proceedings was provided to the sentencing court in April 2019. The Crown tendered the indictment containing the 11 counts contrary to s 192E(1)(b) to which the applicant had pleaded guilty, the Form One document with details of the 8 further offences that the applicant acknowledged having committed and asked to have taken into account on sentence for those matters on indictment, an agreed Statement of Facts, a Pre-Sentence report (“PSR”) and an ICO Assessment Report. 

The facts of the earlier offences were broadly similar to those before his Honour Judge Williams SC. They came to light when police officers in Picton observed the applicant and another man parked in suspicious circumstances in the township late at night on 29 May 2016. The applicant’s companion was found to be the subject of an arrest warrant, and to be in Australia unlawfully. A search of the car in which the men had been seated located the sum of $11,900 in cash, secured by a band; numerous blank and marked white credit cards; documentary records of various bank accounts including cardholder names and PINS for each; ATM receipts evidencing withdrawals and attempted withdrawals made in Helensburgh and Picton from multiple accounts held at varying financial institutions; and some smaller amounts of money, with the largest single amount being a sum of $1590 in cash. Stolen financial data was found on the white credit cards when they were later forensically examined. It was also determined that 20 of the cards seized had been used in Picton in a 63 minute period on the night of 29 May 2016 to steal $9,180 in cash, with another $570 stolen in fraudulent ATM withdrawals at Helensburgh earlier that evening. 

The applicant’s fingerprints and DNA linked to him were later found on a number of the items from the car. The applicant, when interviewed by police, denied any knowledge of the counterfeit credit cards or the cash found, and denied attending any ATM. 

The PSR that was before the District Court when the applicant was sentenced in 2018 for these 2016 offences reported that the applicant was a Bangladeshi national who came to Australia in 2008, with his family intending to pay for his tertiary education in this country. When his family in Bangladesh experienced financial strain and could no longer support his education, the applicant ceased his studies and took on various unskilled jobs. The applicant told the author of the PSR that he had been “propositioned” by his co-offender (an Indian national who was deported prior to sentence) and became involved without being aware of his offending until it had begun. His only reason for participating was the financial gain his offences gave him. He expressed regret at the loss to the victims of the offences, but “sought to minimise his responsibility” for his crimes. 

Rather optimistically, as it turned out, the author assessed the applicant as posing a low risk of re-offending. 

An ICO Assessment Report similarly observed that the applicant sought to minimise his responsibility for his crimes, although considered him to have made some positive changes by securing employment, and expressing an intention to give up what had been his acknowledged daily use of cannabis. He was regarded as suitable for an ICO.

In Canada, Shehroze Chaudhry, who has portrayed himself as a former ISIS member living freely in Canada (and who, under the alias Abu Huzayfah, appeared in the award-winning New York Times podcast Caliphate, where he described conducting public executions), has been charged with faking his involvement in ISIS. 

He has been charged under section 83.231(1) of the Criminal Code, which deals with terrorism hoaxes, in what is apparently the first such charge in Canada's history. Chaudhry's Facebook profile has described him as Abu Huzayfa, a mujahid and jihadist. He has reportedly been posting on social media and telling reporters since 2016 that he was a former member of the ISIS religious police in Syria. 

Investigation by the RCMP’s Toronto Integrated National Security Enforcement Team resulted in the terrorism hoax charge, with a spokesperson stating

Hoaxes can generate fear within our communities and create the illusion there is a potential threat to Canadians, while we have determined otherwise. As a result, the RCMP takes these allegations very seriously, particularly when individuals, by their actions, cause the police to enter into investigations in which human and financial resources are invested and diverted from other ongoing priorities.

Chaudhry's activity - whether for personal gratification or profit - is analogous to the survivor fraud noted elsewhere in this blog.


24 September 2020

Identity Systems

Privacy International's A Guide to Litigating Identity Systems comments

Some of the largest, most data-intensive government programs in the world are national identity systems—centralised government identity schemes that link an individual’s identity to a card or number, often using biometric data, and require identity authentication within the system for the provision of public benefits and participation in public life. The discussion surrounding these systems has largely centered on their perceived benefits for fraud protection, security, and the delivery of services. 

While a number of national identity systems have been challenged in national courts, court analyses of the implications of identity systems have largely mirrored this broader public discourse centered on arguments in favour of identity systems. The three most prominent national court judgments analyzing identity systems—the Aadhaar judgment in India, the Madhewoo judgment in Mauritius and the Huduma Namba judgment in Kenya—upheld the systems, lauding perceived benefits while under-developing critiques. Human rights advocates may find this largely one-sided discussion discouraging, as it limits the extent to which groups and individuals concerned about the human rights impact of identity systems can organize around strong arguments challenging those systems, in whole or part. 

It is in response to this context that Privacy International partnered with the International Human Rights Clinic at Harvard Law School to guide the reader through a simple presentation of the legal arguments explored by national courts around the world that have been tasked with ruling on identity systems, and to present their judgments. This argumentation guide seeks to fill that gap by providing a clear, centralised source of the arguments advanced in and discussed by national courts that address the negative implications of identity systems, particularly on human rights. It gives advocates a tool for developing arguments in any given national context challenging an identity system, informing debate from a human rights perspective, and further building the repertoire of arguments that can be advanced in the future.

This guide proceeds in five parts: 

First, the guide lays out the wide range of arguments challenging identity systems because of their impact on the right to privacy, providing advocates tools for ensuring privacy right infringement is given adequate weight in courts’ proportionality analyses. 

Second, it outlines arguments surrounding biometric information (which includes iris and fingerprint information), an important component of most identity systems, challenging assumptions of biometric authentication’s effectiveness and necessity. 

Third, the guide presents arguments on data protection concerns, highlighting the importance of safeguards to protect rights, and pointing to issues around the role of consent, function creep, and data sharing. 

Fourth, the guide sets out arguments on rights other than privacy, namely liberty, dignity, and equality. The fourth section provides detail on the social and economic exclusion and discrimination that can result from the design or implementation of identity systems. 

Finally, the fifth section of this guide discusses identity systems’ implications for the rule of law, the role of international human rights law, and considerations of gender identity. Rather than providing a list of arguments, as is the case in the other sections of this guide, the fifth section provides a general overview describing the absence of consideration of these themes in existing jurisprudence and the reasons why these themes warrant future consideration. By developing these arguments in conjunction with the variety of existing arguments illustrated in this guide, advocates can address and challenge the multitude of facets of human rights threatened by identity systems.

Understanding

'Disinformation and Science: Report of an investigation into gullibility of false science news in central European countries' by the Panel for the Future of Science and Technology (European Parliamentary Research Service) comments 

The main aim of this report is to present and discuss the results of a survey concerning perspectives on fake news among undergraduate university students in central and eastern Europe. The survey was carried out in spring 2020, during the coronavirus pandemic. An online questionnaire was used. The report is therefore the product of what could be achieved under highly unusual circumstances and should serve as a pointer for further study. 

Misinformation is always troubling, especially in science. Scientists feel distressed when public understanding diverges from the truth. Intentional disinformation (fake news), however, is not always the cause of misinformation. The report discusses the causes related to social trust and types of media consumption. 

The sample of the study consisted of several hundred bachelors or masters students from each participating country. Half of the students were recruited from social sciences areas and the other half of the sample were recruited from natural sciences areas. The method of approaching the students was online questioning. One university was chosen from each participating country, and the link to the questionnaire was sent by that university's administration to the students. The response to the questionnaire was naturally anonymous and voluntary. 

The questionnaire consisted of four parts. The first part presented several typical fake news announcements from the field of the natural and social sciences (e.g. 'there has never been a landing on the Moon'; 'homosexuality can be cured by genetic engineering'). In the second part, true news announcements were applied as a control. In the third part, we tested the effect of the fake news by measuring the level of agreement or rejection. In the fourth part, social psychological, social and demographic data, including social trust, social media usage and general news consumption, were gathered. Responses were stored in an online data file and analysed by multivariable statistical means, such as principal component, factor and cluster, and multiple regression analysis. 

Respondents in more or less all countries have shown resistance against falsehood in scientific communication, casting doubt over false news headlines. The students accepting headline news as evidence-based true statements and simultaneously rejecting fake headline news in each country outnumbered the scientific communication non-believers. However, the content of the individual headline news mattered. False or true, headline news referring to specific spheres of human existence, such as gender and sexuality, incited more interest than news concerning more neutral problems of society and nature. The central issue was social trust, which can provide a solution to help people emerge from the mess created by the new information ecosystem that creates information bubbles and crushes reliable and responsible sources of information.

23 September 2020

Rights

One of my bleaker moments today was reminding participants in an international meeting that disregard of human rights may indeed be lawful, in the absence of a constitutionally enshrined justiciable Bill of Rights, and that Ministers on occasion act as if they are above the law. That hubris has been highlighted elsewhere in this blog. 

This afternoon brings Minister for Immigration, Citizenship, Migrant Services and Multicultural Affairs v PDWL [2020] FCA 1354. 

Flick J states 

[68] A party to a proceeding in this Court, be it a Minister of the Crown or otherwise, cannot fail to comply with findings and orders made by the Tribunal or this Court simply because he “does not like” them. Decisions and orders or directions of the Tribunal or a court, made in accordance with law, are to be complied with. The Minister cannot unilaterally place himself above the law. ... 

[74]    Even if both Grounds of review were made out, however, relief should be refused in the exercise of the Court’s discretion. The Minister cannot place himself above the law and, at the same time, necessarily expect that this Court will grant discretionary relief. The Minister has acted unlawfully. His actions have unlawfully deprived a person of his liberty. His conduct exposes him to both civil and potentially criminal sanctions, not limited to a proceeding for contempt. In the absence of explanation, the Minister has engaged in conduct which can only be described as criminal. He has intentionally and without lawful authority been responsible for depriving a person of his liberty. Whether or not further proceedings are to be instituted is not a matter of present concern. The duty Judge in the present proceeding was quite correct to describe the Minister’s conduct as “disgraceful”. Such conduct by this particular Minister is, regrettably, not unprecedented: AFX17 v Minister for Home Affairs (No 4) [2020] FCA 926 at [8] to [9] per Flick J. Any deference to decisions made by Ministers by reason of their accountability to Parliament and ultimately the electorate assumes but little relevance in the present case. Ministerial “responsibility”, with respect, cannot embrace unlawful conduct intentionally engaged in by a Minister who seeks to place himself above the law. Although unlawful conduct on the part of a litigant does not necessarily dictate the refusal of relief, on the facts of the present case the Minister’s conduct warrants the refusal of relief.

MacKinnon, Misogyny and Speech

'Catharine MacKinnon and the Common Law' (Virginia Public Law and Legal Theory Research Paper No 2020-69) by Charles L Barzun comments

Few scholars have influenced an area of law more profoundly than Catharine MacKinnon. In Sexual Harassment of Working Women (1979), MacKinnon virtually invented the law of sexual harassment by arguing that it constitutes a form of discrimination under Title VII of the Civil Rights Act of 1964. Her argument was in some ways quite radical. She argued, in effect, that sexual harassment was not what it appeared to be. Behavior that judges at the time had thought was explained by the particular desires (and lack thereof) of individuals was better understood as a form of social domination of women by men. Judges, she argued, had failed to see that such conduct was a form of oppression because the social and legal categories through which they interpreted it were themselves the product of male power. 

This argument is not your typical legal argument. It may not even seem like a legal argument at all. But this article explains why on one, but only one, model of legal reasoning, MacKinnon’s argument properly qualifies as a form of legal reasoning. Neither the rationalist nor the empiricist tradition of common-law adjudication can explain the rational force of her argument. But a third, holistic tradition of the common law captures its logic well. It does so because, like MacKinnon’s argument (but unlike the other two traditions), it treats judgments of fact and value as interdependent. This structural compatibility between MacKinnon’s argument about gender oppression, on the one hand, and the holistic tradition of the common law, on the other, has theoretical and practical implications. It not only tells us something about the nature of law; it also suggests that critical theorists (like MacKinnon) may have more resources within the common law tradition to make arguments in court than has been assumed.

In the UK the Law Commission is making proposals to reform hate crime laws, removing the disparity in the way the law treats each protected characteristic – race, religion, sexual orientation, disability and transgender identity. It also proposes that sex or gender be added to the protected characteristics for the first time. 

The Commission comments that 

hate crime refers to existing criminal offences (such as assault, harassment or criminal damage) where the victim is targeted on the basis of hostility towards one or more protected characteristic. There are also specific hate speech offences: the offences of “stirring up hatred”, and the racist chanting at football matches. However, a number of issues have been raised over how hate crime laws work in practice. The laws are complex, spread across different statutes and use multiple overlapping legal mechanisms. Not all five characteristics are protected equally by the law, and campaigners have also argued for additional characteristics such as sex/gender to be included. 

Its consequent proposals to improve hate crime laws include: 

  •  Adding sex or gender to the protected characteristics. 
  • Establishing criteria for deciding whether any additional characteristics should be recognised in hate crime laws, and consulting further on a range of other characteristics, notably “age”. 
  • Extending the protections of aggravated offences and stirring up hatred offences to cover all current protected characteristics, but also any characteristics added in the future (including sex or gender). This would ensure all characteristics are protected equally. 
  • Reformulating the offences of stirring up hatred to focus on deliberate incitement of hatred, providing greater protection for freedom of speech where no intent to incite hatred can be proven. 
  • Expanding the offence of racist chanting at football matches to cover homophobic chanting, and other forms of behaviour, such as gestures and throwing missiles at players. 

It notes that

 Hate crime laws in England and Wales include multiple, overlapping legal mechanisms. These include aggravated offences, where a more serious form of an offence such as assault, harassment or criminal damage is prosecuted, and enhanced sentences, which require a sentence to be increased because of the hate crime element. There are also separate offences for stirring up racial hatred, and for stirring up hatred on the basis of religion or sexual orientation. For racial hatred, the behaviour must be “threatening, abusive or insulting.” On the basis of religion or sexual orientation, the words or conduct must be threatening (not merely abusive or insulting). 

However, the law does not work as well as it should. For example: 

  • The complexity and lack of clarity in the current laws can make them hard to understand. 
  • The laws do not operate consistently in the way that the existing five characteristics are protected in law – for example, LGBT and disabled people receive less protection. In practice, we have also heard that disability hate crime is particularly difficult to prosecute, as it often takes more subtle forms and can be hard to prove. 
  • There have also been calls for hate crime laws to be expanded to include new protected characteristics to tackle hatred such as misogyny and ageism, and hostility towards other groups such as homeless people, sex workers, people who hold non-religious philosophical beliefs (for example, humanists) and alternative subcultures (for example goths or punks). 
  • Some legal definitions, including the definition of “transgender” in the current laws, have also been criticised for using outdated language.

The Commission is also consulting on 'whether other characteristics and groups such as age, sex workers, homelessness, alternative subcultures (such as being a goth) and philosophical beliefs (such as humanism) should be protected'. 

22 September 2020

Regulatory Disasters

'Learning from Regulatory Disasters' by Julia Black in (2014) 10(3) Policy Quarterly 3-11 comments

Regulatory disasters are catastrophic events or series of events which have significantly harmful impacts on the life, health or financial wellbeing of individuals or the environment. They are caused, at least in part, by failures in, or unforeseen consequences of, the design and/or operation of the regulatory system put in place to prevent those harmful effects from occurring. Regulatory disasters are horrendous for those affected by them. Because of that we have an obligation to learn as much from them as we can, notwithstanding all the well-known challenges related to policy and organisational learning. The article focuses on five distinct and unrelated regulatory disasters which, although they occurred in apparently unrelated domains or countries, contain insights for all regulators as the regulatory regimes share a common set of elements which through their differential configuration and interaction create the unique dynamics of that regime. In the regulatory disasters analysed here, these manifest themselves as six contributory causes, operating alone or together: the incentives on individuals or groups; the organisational dynamics of regulators, regulated operators and the complexity of the regulatory system in which they are situated; weaknesses, ambiguities and contradictions in the regulatory strategies adopted; misunderstandings of the problem and the potential solutions; problems with communication about the conduct expected, or conflicting messages; and trust and accountability structures.

Black argues 

 In 2010 an explosion in the Pike River mine in New Zealand killed twenty nine people and, on the other side of the world, a blowout at the Macondo oil well killed eleven people and caused major environmental damage as four million barrels of oil spilled into the Gulf of Mexico. In 2005 a cloud of petrol vapour from the Buncefield tank storage depot in the south of England exploded over two major motorways early on a Sunday morning, which if it had happened at any other time could have caused significant loss of life. In 2008 the Royal Bank of Scotland (RBS), one of the UK’s largest banks, was rescued from collapse by a government bail-out of £46bn, a contributor to and casualty of the global financial crisis. In the mid-late 1990s to early 2000s poor building practices led to significant losses for homeowners in New Zealand caused by leaky buildings. Estimates of the losses range to as high as NZ$11.3bn. 

These disastrous events from opposite sides of the globe seem to be disparate. Some are systemic failures across an industry, others are single events; some are low probability, high impact events, others high probability and low impact if measured as the impact per individual affected at a single point in time, but high impact if assessed on an aggregate basis across a number of individuals and a period of time. What they have in common is that they are all regulatory disasters: a catastrophic event or series of events which have significantly harmful impacts on the life, health or financial wellbeing of individuals or the environment, caused, at least in part, by a failure in the design and/or operation of the regulatory regime put in place to prevent their occurrence. 

Regulatory disasters can be a particular form of policy disaster. Policy disasters have been defined as the disastrous unintended consequences which occur as the direct consequence of poor intentional choices by top political decision-makers. Regulatory disasters may also be seen as a particularly acute form of a policy blunder. King and Crewe, for example, define a ‘policy blunder’ as: an episode in which a government adopts a particular course of action in order to achieve one or more objectives, and as a result largely or wholly of its own mistakes, either fails completely to achieve those objectives or does achieve them but at a totally disproportionate cost, or else does achieve them but contrives at the same time to cause a significant amount of ‘collateral damage’ in the form of unintended or undesired consequences. 

However, the scale of their consequences means that ‘regulatory disasters’ are more than just ‘policy blunders’. They include disasters caused by ‘judgement calls’ as well as poor design and implementation and, as used here, ‘regulatory disasters’ deliberately excludes ‘political disasters’ – those which are disasters for the reputation or continued existence in power for the politicians or regulators involved. Many of the regulatory disasters highlighted here are also political disasters, but a policy which is purely or mainly a disaster in political terms is not included. Regulatory disasters are also distinct from policy disasters in that they occur in a particular sub-field of public policy, and indeed need not be confined to the state at all: they result from the unintended and unforeseen consequences of the design and/or operation of a regulatory system and its interactions with other systems. As such they can arise from poor decisions by politicians in the design of the regulatory regime and/or political influences on its operation, and/or poor decisions and practices by regulatory officials themselves within a system that may be either well or poorly designed. Regulation, or regulatory governance, is the organised attempt to manage risks or behaviour in order to achieve a publicly stated objective or set of objectives; a regulatory system consists of the (sometimes shifting) set of interrelated actors who are engaged in such attempts and their interactions with one another and the dynamic institutional and organisational environment in which they sit. Thus regulatory disasters also differ from public service delivery disasters, as they do not involve the delivery of services to the public directly organised by a government department, agency or authority, or that are provided on behalf of, financed and regulated by government, unless those disasters arise at least in part from failures in the design and/or operation of the regulatory system to which that public service, such as a hospital, is subject. 

Regulatory disasters are horrendous for those affected by them. Because of that, we have an obligation to learn as much from them as we can, notwithstanding all the well-known challenges related to policy and organisational learning. For regulators, probing the reasons for the disaster, even if it occurred in another country, or in a different regulatory domain, can provide insights for the evaluation of their own systems. They can also provide useful leverage for persuading political overseers of the need for change. Regulatory systems can have a significant number of ‘latent’ failures which only become apparent on the occurrence of a particular major event, such as an explosion or financial collapse, or through the recognition of an accumulation of a number of smaller events, such as individual deaths, smaller scale pollution events or individual financial losses. These are the disasters ‘which are waiting to happen’. Other disasters were not foreseen, but neither may they have been reasonably foreseeable, or involve ‘black swan’ events – what had been seen as low probability albeit high impact events. Nonetheless, the inquiries that often follow a disaster, even if it is a ‘black swan’ event, often reveal systemic problems within the regime which have thus far gone unnoticed by regulators, or unheeded by key policy actors. 

Analysing the causes and nature of regulatory disasters also enables us to understand more about the nature of regulation itself. Although regulatory disasters often occur in apparently unrelated domains or countries, they can in fact contain lessons for all regulators, as regulatory regimes share a common set of elements which through their differential configuration and interaction create the unique dynamics of that regime. In the regulatory disasters analysed here, these manifest themselves as six contributory causes, operating alone or together:

• The incentives on individuals or groups; 

• The organisational dynamics of regulators, regulated operators and the complexity of the regulatory system; 

• Weaknesses, ambiguities and contradictions in the regulatory strategies adopted; 

• Misunderstandings of the problem and the potential solutions; 

• Problems with communication about the conduct expected or conflicting messages; 

• Trust and accountability structures.

The article focuses on five distinct and unrelated regulatory disasters: the construction of ‘leaky buildings’ in New Zealand in the late 1990s-2000s, the explosion at the Buncefield chemical plant in the UK in 2005, the events leading up to the bail out of the Royal Bank of Scotland in the UK in 2008, the Macondo oil well blow out at the Deepwater Horizon oil rig in the Gulf of Mexico in 2010, and Pike River mining tragedy in New Zealand, also in 2010. These are chosen because they are uncontroversial examples of regulatory disasters – significantly adverse impacts on human health, financial position or the environment which arose from the design and operation of a regulatory regime intended to manage the very risks which materialised. They also have the advantage that each was subject to extensive investigation by an independent body established specifically to inquire into the causes of the disaster, thus providing a wealth of factual information. Whilst there are always inherent biases in any investigation, those which followed each of these disasters have not been significantly criticised as biased or ‘captured’ by any particular interest.

Algorithmic Policing

To Surveil and Predict - A Human Rights Analysis of Algorithmic Policing in Canada by Kate Robertson, Cynthia Khoo, and Yolanda Song examines algorithmic technologies designed for use in criminal law enforcement systems. 

The authors comment

Algorithmic policing is an area of technological development that, in theory, is designed to enable law enforcement agencies to either automate surveillance or to draw inferences through the use of mass data processing in the hopes of predicting potential criminal activity. The latter type of technology and the policing methods built upon it are often referred to as predictive policing. Algorithmic policing methods often rely on the aggregation and analysis of massive volumes of data, such as personal information, communications data, biometric data, geolocation data, images, social media content, and policing data (such as statistics based on police arrests or criminal records). 

In order to guide public dialogue and the development of law and policy in Canada, the report focuses on the human rights and constitutional law implications of the use of algorithmic policing technologies by law enforcement authorities. This report first outlines the methodology and scope of analysis in Part 1. In Part 2, the report provides critical social and historical contexts regarding the criminal justice system in Canada, including issues regarding systemic discrimination in the criminal justice system and bias in policing data sets. This social and historical context is important to understand how algorithmic policing technologies present heightened risks of harm to civil liberties and related concerns under human rights and constitutional law for certain individuals and communities. The use of police-generated data sets that are affected by systemic bias may create negative feedback loops where individuals from historically disadvantaged communities are labelled by an algorithm as a heightened risk because of historic bias towards those communities. Part 3 of the report then provides a few conceptual building blocks to situate the discussion surrounding algorithmic policing technology, and it outlines how algorithmic policing technology differs from traditional policing methods. 
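
The feedback-loop concern described above can be made concrete with a toy simulation. The sketch below is not drawn from the report; it is a minimal illustration in Python, using invented figures, of how a simple "patrol where the most incidents have been recorded" rule can amplify an initial disparity in historical police data even when the underlying incident rates in two neighbourhoods are identical.

    import random

    # Two neighbourhoods with identical underlying incident rates, but "A" starts
    # with more recorded incidents because of historically heavier patrolling.
    # All figures here are invented for illustration only.
    true_rate = {"A": 0.10, "B": 0.10}
    recorded = {"A": 20, "B": 10}

    random.seed(1)
    for _ in range(200):
        # "Predictive" step: patrol the area with the most recorded incidents.
        patrolled = max(recorded, key=recorded.get)
        # Incidents occur at the same rate everywhere, but only incidents in the
        # patrolled area are observed and added to the police data set.
        if random.random() < true_rate[patrolled]:
            recorded[patrolled] += 1

    print(recorded)  # the initial disparity grows and is never corrected

Because new records are generated only where patrols are sent, the data ends up confirming the algorithm's own predictions; that self-reinforcement is the negative feedback loop the report warns about.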

In Part 4, the report sets out and summarizes findings on how law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods. The report compiles original research with existing research to provide a comprehensive overview of what is known about the algorithmic policing landscape in Canada to date. In the overview of the use of algorithmic policing technology in Canada, the report classifies algorithmic policing technologies into the following three categories:

  • Location-focused algorithmic policing technologies are a subset of what has generally been known as ‘predictive policing’ technologies. This category of algorithmic policing technologies purports to identify where and when potential criminal activity might occur. The algorithms driving these systems examine correlations in historical police data in order to attempt to make predictions about a given set of geographical areas. 

  • Person-focused algorithmic policing technologies are also a subset of predictive policing technologies. Person-focused algorithmic policing technologies rely on data analysis in order to attempt to identify people who are more likely to be involved in potential criminal activity or to assess an identified person for their purported risk of engaging in criminal activity in the future. 

  • Algorithmic surveillance policing technologies, as termed in this report, do not inherently include any predictive element and are thus distinguished from the two categories above (location-focused and person-focused algorithmic policing). Rather, algorithmic surveillance technologies provide police services with sophisticated, but general, surveillance and monitoring functions. These technologies automate the systematic collection and processing of data (such as data collected online or images taken from physical outdoor spaces). Some of these technologies (such as facial recognition technology that processes photos from mug-shot databases) may process data that is already stored in law enforcement police files in a new way. For ease of reference, this set of technologies will be referred to as simply algorithmic surveillance technologies throughout the rest of this report. The term should be understood to be confined to the context of policing (thus excluding other forms of algorithmic surveillance technologies that are more closely tied to other contexts, such as tax compliance or surveilling recipients of social welfare).

The primary research findings of this report show that technologies have been procured, developed, or used in Canada in all three categories. For example, at least two agencies, the Vancouver Police Department and the Saskatoon Police Service, have confirmed that they are using or are developing ‘predictive’ algorithmic technologies for the purposes of guiding police action and intervention. Other police services, such as in Calgary and Toronto, have acquired technologies that include algorithmic policing capabilities or that jurisdictions outside of Canada have leveraged to build predictive policing systems. The Calgary Police Service engages in algorithmic social network analysis, which is a form of technology that may also be deployed by law enforcement to engage in person-focused algorithmic policing. Numerous law enforcement agencies across the country also now rely on a range of other algorithmic surveillance technologies (e.g., automated licence plate readers, facial recognition, and social media surveillance algorithms), or they are developing or considering adopting such technologies. This report also uncovers information suggesting that the Ontario Provincial Police and Waterloo Regional Police Service may be unlawfully intercepting private communications in online private chat rooms through reliance on an algorithmic social media surveillance technology known as the ICAC Child On-line Protection System (ICACCOPS). Other police services throughout Canada may also be using or developing additional predictive policing or algorithmic surveillance technologies outside of public awareness. Many of the freedom of information (FOI) requests submitted for this report were met with responses from law enforcement authorities that claimed privilege as justification for non-disclosure; in other cases, law enforcement agencies did not provide any records in response to the submitted FOI request, or requested exorbitant fees in order to process the request. 

Building on the findings about the current state of algorithmic policing in Canada, Part 5 of the report presents a human rights and constitutional law analysis of the potential use of algorithmic policing technologies. The legal analysis applies established legal principles to these technologies and demonstrates that their use by law enforcement agencies has the potential to violate fundamental human rights and freedoms that are protected under the Canadian Charter of Rights and Freedoms (“the Charter”) and international human rights law. Specifically, the authors analyze the potential impacts of algorithmic policing technologies on the following rights: the right to privacy; the right to freedoms of expression, peaceful assembly, and association; the right to equality and freedom from discrimination; the right to liberty and to be free from arbitrary detention; the right to due process; and the right to a remedy.

The major findings are - 

 Implications for the Right to Privacy and the Right to Freedom of Expression, Peaceful Assembly, and Association: 

The increasing use of algorithmic surveillance technologies in Canada threatens privacy and the fundamental freedoms of expression, peaceful assembly, and association that are protected under the Charter and international human rights law. The advanced capabilities and heightened data requirements of algorithmic policing technologies introduce new threats to privacy and these fundamental freedoms, such as in the repurposing of historic police data, constitutionally questionable data sharing arrangements, or in algorithmically surveilling public gatherings or online expression, raising significant risks of violations. The Canadian legal system currently lacks sufficiently clear and robust safeguards to ensure that use of algorithmic surveillance methods—if any—occurs within constitutional boundaries and is subject to necessary regulatory, judicial, and legislative oversight mechanisms. Given the potential damage that the unrestricted use of algorithmic surveillance by law enforcement may cause to fundamental freedoms and a free society, the use of such technology in the absence of oversight and compliance with limits defined by necessity and proportionality is unjustified. 

Implications for the Right to Equality and Freedom from Discrimination: 

Systemic racism in the Canadian criminal justice system must inform any analysis of algorithmic policing, particularly its impacts on marginalized communities. The seemingly ‘neutral’ application of algorithmic policing tools masks the reality that they can disproportionately impact marginalized communities in a protected category under equality law (i.e., communities based on characteristics such as race, ethnicity, sexual orientation, or disability). The social and historical context of systemic discrimination influences the reliability of data sets that are already held by law enforcement authorities (such as data about arrests and criminal records). Numerous inaccuracies, biases, and other sources of unreliability are present in most of the common sources of police data in Canada. As a result, drawing unbiased and reliable inferences based on historic police data is, in all likelihood, impossible. Extreme caution must be exercised before law enforcement authorities are permitted, if at all, to use algorithmic policing technologies that process mass police data sets. Otherwise, these technologies may exacerbate the already unconstitutional and devastating impact of systemic targeting of marginalized communities. 

Implications for the Right to Liberty and to Freedom from Arbitrary Detention: 

It is incompatible with constitutional and human rights law to rely on algorithmic forecasting to justify interfering with an individual’s liberty. By definition, algorithmic policing methods tend to produce generalized inferences. Under human rights law and the Charter, loss of liberty (such as detention, arrest, denial of bail, and punishment through sentencing) cannot be justified based on generalized or stereotypical assumptions, such as suspicion based on beliefs about an ethnic group or on the location where an individual was found. Reliance on algorithmic policing technologies to justify interference with liberty may violate Charter rights where the purported grounds for interfering with liberty are based on algorithmic predictions drawn from statistical trends, as opposed to being particularized to a specific individual. Violations may include instances where an individual would not have been detained or arrested but for the presence of an algorithmic prediction based on statistical trends, all other circumstances remaining the same. In addition to these major findings, the report documents problems that are likely to arise with respect to meaningful access to justice and the rights to due process and remedy, given that impactful accountability mechanisms for algorithmic policing technology are often lacking, and in light of the systemic challenges faced by individuals and communities seeking meaningful redress for rights violations that do not result in charges in Canadian courts. The absence of much needed transparency in the Canadian algorithmic policing landscape animates many of the core recommendations in this report. The authors hope that this report provides insight into the critical need for transparency and accountability regarding what types of technologies are currently in use or under development and how these technologies are being used in practice. With clarified information regarding what is currently in use and under development, policy- and lawmakers can enable the public and the government to chart an informed path going forward. 

In response to conclusions drawn from the legal analysis, the report ends with a range of recommendations for governments and law enforcement authorities with a view to developing law and oversight that would establish necessary limitations on the use of algorithmic policing technologies. Part 6 provides a list of these recommendations, each of which is accompanied by contextual information to explain the purpose of the recommendation and offer potential guidance for implementation. The recommendations are divided into priority recommendations, which must be implemented now, with urgency, and ancillary recommendations, which may be inapplicable where certain algorithmic policing technologies are banned but must be implemented if any such technologies are to be developed or adopted.

The authors offer several recommendations, summarised as follows  

A. Priority recommendations for governments and law enforcement authorities that must be acted upon urgently in order to mitigate the likelihood of human rights and Charter violations associated with the use of algorithmic policing technology in Canada:

  • Governments must place moratoriums on law enforcement agencies’ use of technology that relies on algorithmic processing of historic mass police data sets, pending completion of a comprehensive review through a judicial inquiry, and on use of algorithmic policing technology that does not meet prerequisite conditions of reliability, necessity, and proportionality. 

  • The federal government should convene a judicial inquiry to conduct a comprehensive review regarding law enforcement agencies’ potential repurposing of historic police data sets for use in algorithmic policing technologies. 

  • Governments must make reliability, necessity, and proportionality prerequisite conditions for the use of algorithmic policing technologies, and moratoriums should be placed on every algorithmic policing technology that does not meet these established prerequisites. 

  • Law enforcement agencies must be fully transparent with the public and with privacy commissioners, immediately disclosing whether and what algorithmic policing technologies are currently being used, developed, or procured, to enable democratic dialogue and meaningful accountability and oversight. 

  • Provincial governments should enact directives regarding the use and procurement of algorithmic policing technologies, including requirements that law enforcement authorities must conduct algorithmic impact assessments prior to the development or use of any algorithmic policing technology; publish annual public reports that disclose details about how algorithmic policing technologies are being used, including information about any associated data, such as sources of training data, potential data biases, and input and output data where applicable; and facilitate and publish independent peer reviews and scientific validation of any such technology prior to use. 

  • Law enforcement authorities must not have unchecked use of algorithmic policing technologies in public spaces: police services should prohibit reliance on algorithmic predictions to justify interference with individual liberty, and must obtain prior judicial authorization before deploying algorithmic surveillance tools at public gatherings and in online environments. 

  • Governments and law enforcement authorities must engage external expertise, including from historically marginalized communities that are disproportionately impacted by the criminal justice system, before and when considering, developing, or adopting algorithmic policing technologies, when developing related regulation and oversight mechanisms, as part of completing algorithmic impact assessments, and in monitoring the effects of algorithmic policing technologies that have been put into use if any. 

B. Ancillary recommendations for law enforcement authorities:

  • Law enforcement authorities should enhance police database integrity and management practices, including strengthening the ability of individuals to verify and correct the accuracy of personal information stored in police databases. 
  • Law enforcement authorities must exercise extreme caution to prevent unconstitutional data-sharing practices with the private sector and other non-police government actors. 
  • Law enforcement authorities should undertake the following best practices whenever an algorithmic policing technology has been or will be adopted or put into use, with respect to that technology: 
    • Implement ongoing tracking mechanisms to monitor the potential for bias in the use of any algorithmic policing technology; 

    • Engage external expertise ongoingly, including consulting with communities and individuals who are systemically marginalized by the criminal justice system, about the potential or demonstrated impacts of the algorithmic policing technology on them; 

    • Formally document written policies surrounding the use of algorithmic policing technology; and 

    • Adopt audit mechanisms within police services to reinforce and identify best practices and areas for improvement over time. 

C. Ancillary recommendations for law reform and related measures by federal, provincial, territorial, and municipal governments:

  • The federal government should reform the judicial warrant provisions of the Criminal Code to specifically address the use of algorithmic policing technology by law enforcement authorities. 

  • Federal and provincial legislatures should review and modernize privacy legislation with particular attention to reevaluating current safeguards to account for the advanced capabilities of algorithmic policing technologies and to the retention and destruction of biometric data by law enforcement authorities. 

  • The federal government should expand the Parliamentary reporting provisions under the Criminal Code that currently only apply to certain electronic surveillance methods to specifically address police powers in relation to the use of algorithmic policing technology by law enforcement authorities. 

  • Governments must ensure that privacy and human rights commissioners are empowered and have sufficient resources to initiate and conduct investigations into law enforcements’ use of algorithmic policing technology. 

D. Ancillary recommendations for government to enable access to justice in relation to the human rights impacts of algorithmic policing technology:

  • Governments should make funding available for research to develop the availability of independent expertise. 

  • Governments must ensure adequate assistance is available for low-income and unrepresented defendants in order that they might retain expert witnesses in criminal proceedings. 

  • Governments and law enforcement agencies must make the source code of algorithmic policing technologies publicly available or, where appropriate, confidentially available to public bodies and independent experts for the purposes of algorithmic impact assessments, pre-procurement review, security testing, auditing, investigations, and judicial proceedings.

21 September 2020

Mining

IT News reports that Jonathan Khoo has been sentenced in Sydney Local Court to a 15-month intensive correction order after pleading guilty to charges regarding his unauthorised modification, as an IT contractor, of CSIRO's data systems to engage in cryptocurrency mining. 

Khoo was charged by the Australian Federal Police in May last year after installing and running mining scripts on CSIRO’s two high performance computers that are also used by other entities such as the Royal Australian Navy and Victor Chang Cardiac Research Institute. The charges included unauthorised modification of data to cause impairment and unauthorised modification of restricted data. The maximum penalty for the charge of unauthorised modification of data to cause impairment is 10 years imprisonment. 

The AFP's 2019 media release stated

The man is scheduled to appear in Sydney Local Court (Downing Centre) today (21 May 2019) in response to the following charges:

  • Unauthorised modification of data to cause impairment, contrary to section 477.2 of the Criminal Code Act 1995 (Cth)

  • Unauthorised modification of restricted data, contrary to section 478.1 of the Criminal Code Act 1995 (Cth).

Acting Commander Chris Goldsmid, Manager Cybercrime Operations, said alleged abuse of public office is a very serious matter. 

“Australian taxpayers put their trust in public officials to perform vital roles for our community with the utmost integrity. Any alleged criminal conduct which betrays this trust for personal gain will be investigated and prosecuted,” he said.

Khoo generated $9,422 worth of cryptocurrency in the form of Monero and Ethereum in early 2018. 

Although there was no "permanent impairment to CSIRO operations", the mining caused a "loss of productivity", with CSIRO estimating the cost of the reduced capacity to run legitimate jobs on the devices at some $76,668. Unsurprisingly, Khoo was "incredibly remorseful".

AI patenting and innovation hubs

'Patenting patterns in Artificial Intelligence: Identifying national and international breeding grounds' by Matheus Eduardo Leusin, Jutta Günther, Björn Jindra and Martin G Moehrle in (2020) 62 World Patent Information comments 

This paper identifies countries at the forefront of Artificial Intelligence (AI) development and proposes two novel patent-based indicators to differentiate structural differences in the patterns of intellectual property (IP) protection observed for AI across countries. In particular, we consider (i) the extent to which countries specialise in AI and are relevant markets for corresponding IP protection (‘National Breeding Ground’); and (ii) the extent to which countries attract AI from abroad for IP protection and extend the protection of their AI-related IP to foreign markets (‘International Breeding Ground’). Our investigation confirms prior findings regarding substantial changes in the technological leadership in AI, besides drastic changes in the relevance of AI techniques over time. Particularly, we find that National and International Breeding Grounds overlap only partially. China and the US can be characterised as dominant National Breeding Grounds. Australia and selected European countries, but primarily the US, are major International Breeding Grounds. We conclude that China promotes AI development with a major focus on IP protection in its domestic market, whereas the US sustains its AI progress in the international context as well. This might indicate a considerable bifurcation in the structural patterns of IP protection in global AI development.
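The paper's National and International Breeding Ground indicators are defined in the article itself and are not reproduced here, but the general flavour of a patent-based specialisation measure can be sketched. The following minimal Python example uses hypothetical country names and filing counts and a simple revealed-specialisation ratio, not the authors' own indicators:

# Illustrative sketch only: a simple revealed-specialisation ratio of the kind
# used in patent analytics. It is not the paper's National/International
# Breeding Ground indicators; all figures below are hypothetical.
ai_patents = {"CountryA": 1200, "CountryB": 300, "CountryC": 50}         # AI-related filings
total_patents = {"CountryA": 40000, "CountryB": 5000, "CountryC": 4000}  # all filings

world_ai = sum(ai_patents.values())
world_total = sum(total_patents.values())

def specialisation(country):
    """Country's AI share of its own patenting, relative to the world AI share.
    Values above 1 suggest relative specialisation in AI."""
    return (ai_patents[country] / total_patents[country]) / (world_ai / world_total)

for c in ai_patents:
    print(c, round(specialisation(c), 2))

A measure of this kind captures only domestic specialisation; the paper's second indicator additionally considers cross-border IP protection, which a sketch like this does not attempt.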

'Who collects intellectual rents from knowledge and innovation hubs? Questioning the sustainability of the Singapore model' by Cecilia Rikap and David Flacher in (2020) 55 Structural Change and Economic Dynamics 59-73 comments 

While knowledge and innovation are produced in networks involving diverse actors, associated rents are greatly appropriated by global leaders, mostly coming from core countries, that become intellectual monopolies. This raises the question of emerging or peripheral countries’ companies’ capacity to catch up, innovate and compete for intellectual rents. The article considers the case of Singapore, which has pursued a knowledge hub strategy aimed at: 1) creating world class universities inserted in global knowledge networks of defined fields; and 2) capturing intellectual rents through those institutions’ research and contributing to local firms’ catching up. We show that research universities caught up. However, we find that foreign companies, particularly multinationals, capture most of Singapore's intellectual rents at the expense of local companies and research institutions. Overall, our findings point to the limitations of Singapore's knowledge hub as a catching-up strategy. The article ends by discussing the relevance of these findings for emerging countries in general.

Cyberstrategies

'Strategic leadership in cyber security, case Finland' by Martti Lehto and Jarno Limnéll in (2020) Information Security Journal: A Global Perspective comments 

 Cyber security has become one of the biggest priorities for businesses and governments. Streamlining and strengthening strategic leadership are key aspects in making sure the cyber security vision is achieved. The strategic leadership of cyber security implies identifying and setting goals based on the protection of the digital operating environment. Furthermore, it implies coordinating actions and preparedness as well as managing extensive disruptions. The aim of this article is to define what is strategic leadership of cyber security and how it is implemented as part of the comprehensive security model in Finland. In terms of effective strategic leadership of cyber security, it is vital to identify structures that can respond to the operative requirements set by the environment. As a basis for national security development and preparedness, it is necessary to have a clear strategy level leadership model for crises management in disturbances in normal and in emergency conditions. In order to ensure cyber security and achieve the set strategic goals, society must be able to engage different parties and reconcile resources and courses of action as efficiently as possible. Cyber capability must be developed in the entire society, which calls for strategic coordination, management and executive capability.

The authors argue 

Cyber security is an elemental part of society’s comprehensive security, and the cyber security operating model is in keeping with the principles and practices specified in Finland’s Security Strategy for Society (2017b). Cybersecurity has become a focal point for conflicting domestic and international interests, and increasingly for the projection of state power (Limnéll, 2016). The challenges of cyber security management are particularly prominent at the level of strategic leadership. 

Cybersecurity is a foundational element underpinning the achievement of socio-economic objectives of modern economies. Digitalization and information societies are ever evolving, and new cyber threats continue to be devised. In this progress, cyber security must form an integral and indivisible part of the nation’s security process. Countries need to be aware of their current capability level in cyber security and at the same time identify areas where cybersecurity needs to be enhanced. It can be said that cyber security is a constant “arms race” between countries, but also between the security community and the hostile hackers. Cybersecurity is a complex challenge that encompasses multiple different governance, policy, operational, technical and legal aspects (ITU, 2018; Lehto & Limnéll, 2016). 

Cyber-attacks, malware, denial of service attacks and different forms of influencing through information are becoming ever more prolific. The reliable operation of telecommunications, information systems and communications are an essential precondition for modern society’s undisrupted functioning, security and citizens’ livelihoods. This is also about maintaining citizens’ trust in a well-functioning society. The development of business continuity management accounts for a large proportion of the security of supply work carried out in the information society sector. Due to this development, improved preparedness for maintaining the functioning of society’s vital information technology systems and structures in the face of cyber threats and incidents is also needed in normal conditions. In particular, it should be noted that Finnish society’s and companies’ dependence on the cyber environment will grow further in the years to come (Lehto et al., 2018). 

The transformational power of ICTs and the Internet as catalysts for economic growth and social development are at a critical point where citizens’ and national trust and confidence in the use of ICTs are being eroded by cyber-insecurity. To fully realize the potential of technology, states must align their national economic visions with their national security priorities. Setting out the vision, objectives and priorities enables governments to look at cybersecurity holistically across their national digital ecosystem, instead of at a particular sector, objective, or in response to a specific risk – it allows them to be strategic (ITU, 2018). 

The national strategic leadership of cyber security consists of two entities: managing cyber security preparedness and managing serious and extensive incidents in normal and emergency conditions. The Security Strategy for Society 2017 discusses a general functional model for leadership and incident management, which describes the relationships between the government’s top management on the one hand, and local and regional level management on the other (Figure 1). Today the Prime Minister’s Office has an important role in coordinating the authorities’ activities and supporting the Government’s decision-making (Security committee, 2017a). 

This article is based on research we carried out for the Prime Minister’s Office in 2017–2018 (Lehto et al., 2018). In terms of Cyber Security Strategy implementation and the commitment of different branches of administration, the situation in Finland was different from what it was when the first Cyber Security Strategy was prepared in 2013. The branches of administration had widely recognized the significance of cyber security in their everyday work. While their views of cyber security differed around the time the 2013 strategy was drafted, the world has changed rapidly since its publication. 

This research project prepared proposals for measures related to the management of society’s and public administration’s cyber security, measuring the state of cyber security and preparedness, and managing extensive disruptions in the cyber environment. 

Key research questions examined were the following:

  • What is strategic leadership of cyber security and how is it implemented in the responsibility model for comprehensive security? 

  • How can a general incident management model be implemented during extensive cyber security disruptions? 

  • How should the strategic leadership of cyber security be organized? 

  • How is the management of cyber security in central government structured?

draft UNESCO AI principles

The draft UNESCO AI ethics principles noted in the preceding post are ...

Recognizing the profound and dynamic impact of artificial intelligence (AI) on societies, ecosystems, and human lives, including the human mind, in part because of the new ways in which it influences human thinking, interaction and decision-making, and affects education, human, social and natural sciences, culture, and communication and information, 

Recalling that, by the terms of its Constitution, UNESCO seeks to contribute to peace and security by promoting collaboration among nations through education, the sciences, culture, and communication and information, in order to further universal respect for justice, for the rule of law and for the human rights and fundamental freedoms which are affirmed for the peoples of the world,  

Convinced that the standard-setting instrument presented here, based on international law and on a global normative approach, focusing on human dignity and human rights, as well as gender equality, social and economic justice, physical and mental well-being, diversity, interconnectedness, inclusiveness, and environmental and ecosystem protection can guide AI technologies in a responsible direction, 

Considering that AI technologies can be of great service to humanity but also raise fundamental ethical concerns, for instance regarding the biases they can embed and exacerbate, potentially resulting in inequality, exclusion and a threat to cultural, social and ecological diversity and social or economic divides; the need for transparency and understandability of the workings of algorithms and the data with which they have been trained; and their potential impact on human dignity, human rights, gender equality, privacy, freedom of expression, access to information, social, economic, political and cultural processes, scientific and engineering practices, animal welfare, and the environment and ecosystems, 

Recognizing that AI technologies can deepen existing divides and inequalities in the world, within and between countries, and that justice, trust and fairness must be upheld so that no one should be left behind, either in enjoying the benefits of AI technologies or in the protection against their negative implications, while recognizing the different circumstances of different countries and the desire of some people not to take part in all technological developments, 

Conscious of the fact that all countries are facing an acceleration of the use of information and communication technologies and AI technologies, as well as an increasing need for media and information literacy, and that the digital economy presents important societal, economic and environmental challenges and opportunities of benefits sharing, especially for low- and middle-income countries (LMICs), including but not limited to least developed countries (LDCs), landlocked developing countries (LLDCs) and small island developing States (SIDS), requiring the recognition, protection and promotion of endogenous cultures, values and knowledge in order to develop sustainable digital economies, 

Recognizing that AI technologies have the potential to be beneficial to the environment and ecosystems but in order for those benefits to be realized, fair access to the technologies is required without ignoring but instead addressing potential harms to and impact on the environment and ecosystems, 

Noting that addressing risks and ethical concerns should not hamper innovation but rather provide new opportunities and stimulate new and responsible practices of research and innovation that anchor AI technologies in human rights, values and principles, and moral and ethical reflection, 

Recalling that in November 2019, the General Conference of UNESCO, at its 40th session, adopted 40 C/Resolution 37, by which it mandated the Director-General “to prepare an international standard-setting instrument on the ethics of artificial intelligence (AI) in the form of a recommendation”, which is to be submitted to the General Conference at its 41st session in 2021, 

Recognizing that the development of AI technologies results in an increase of information which necessitates a commensurate increase in media and information literacy as well as access to critical sources of information, 

Observing that a normative framework for AI technologies and its social implications finds its basis in ethics, as well as human rights, fundamental freedoms, access to data, information and knowledge, international and national legal frameworks, the freedom of research and innovation, human and environmental and ecosystem well-being, and connects ethical values and principles to the challenges and opportunities linked to AI technologies, based on common understanding and shared aims, 

Recognizing that ethical values and principles can powerfully shape the development and implementation of rights-based policy measures and legal norms, by providing guidance where the ambit of norms is unclear or where such norms are not yet in place due to the fast pace of technological development combined with the relatively slower pace of policy responses, 

Convinced that globally accepted ethical standards for AI technologies and international law, in particular human rights law, principles and standards can play a key role in harmonizing AI-related legal norms across the globe, 

Recognizing the Universal Declaration of Human Rights (1948), including Article 27 emphasizing the right to share in scientific advancement and its benefits; the instruments of the international human rights framework, including the International Convention on the Elimination of All Forms of Racial Discrimination (1965), the International Covenant on Civil and Political Rights (1966), the International Covenant on Economic, Social and Cultural Rights (1966), the United Nations Convention on the Elimination of All Forms of Discrimination against Women (1979), the United Nations Convention on the Rights of the Child (1989), and the United Nations Convention on the Rights of Persons with Disabilities (2006); the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions (2005), 

Noting the UNESCO Declaration on the Responsibilities of the Present Generations Towards Future Generations (1997); the United Nations Declaration on the Rights of Indigenous Peoples (2007); the Report of the United Nations Secretary-General on the Follow-up to the Second World Assembly on Ageing (A/66/173) of 2011, focusing on the situation of the human rights of older persons; the Report of the Special Representative of the United Nations Secretary-General on the issue of human rights and transnational corporations and other business enterprises (A/HRC/17/31) of 2011, outlining the “Guiding Principles on Business and Human Rights: Implementing United Nations ‘Protect, Respect and Remedy’ Framework”; the United Nations General Assembly resolution on the review of the World Summit on the Information Society (A/68/302); the Human Rights Council’s resolution on “The right to privacy in the digital age” (A/HRC/RES/42/15) adopted on 26 September 2019; the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (A/73/348); the UNESCO Recommendation on Science and Scientific Researchers (2017); the UNESCO Internet Universality Indicators (endorsed by UNESCO’s International Programme for the Development of Communication in 2019), including the R.O.A.M. principles (endorsed by UNESCO’s General Conference in 2015); the UNESCO Recommendation Concerning the Preservation of, and Access to, Documentary Heritage Including in Digital Form (2015); the Report of the United Nations Secretary-General’s High-level Panel on Digital Cooperation on “The Age of Digital Interdependence” (2019), and the United Nations Secretary-General’s Roadmap for Digital Cooperation (2020); the Universal Declaration on Bioethics and Human Rights (2005); the UNESCO Declaration on Ethical Principles in relation to Climate Change (2017); the United Nations Global Pulse initiative; and the outcomes and reports of the ITU’s AI for Good Global Summits, 

Noting also existing frameworks related to the ethics of AI of other intergovernmental organizations, such as the relevant human rights and other legal instruments adopted by the Council of Europe, and the work of its Ad Hoc Committee on AI (CAHAI); the work of the European Union related to AI, and of the European Commission’s High-Level Expert Group on AI, including the Ethical Guidelines for Trustworthy AI; the work of OECD’s first Group of Experts (AIGO) and its successor the OECD Network of Experts on AI (ONE AI), the OECD’s Recommendation of the Council on AI and the OECD AI Policy Observatory (OECD.AI); the G20 AI Principles, drawn therefrom, and outlined in the G20 Ministerial Statement on Trade and Digital Economy; the G7’s Charlevoix Common Vision for the Future of AI; the work of the African Union’s Working Group on AI; and the work of the Arab League’s Working Group on AI, 

Emphasizing that specific attention must be paid to LMICs, including but not limited to LDCs, LLDCs and SIDS, as they have their own capacity but have been underrepresented in the AI ethics debate, which raises concerns about neglecting local knowledge, cultural and ethical pluralism, value systems and the demands of global fairness to deal with the positive and negative impacts of AI technologies, 

Conscious of the many existing national policies and other frameworks related to the ethics and regulation of AI technologies, 

Conscious as well of the many initiatives and frameworks related to the ethics of AI developed by the private sector, professional organizations, and non-governmental organizations, such as the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems and its work on Ethically Aligned Design; the World Economic Forum’s “Global Technology Governance: A Multistakeholder Approach”; the UNI Global Union’s “Top 10 Principles for Ethical Artificial Intelligence”; the Montreal Declaration for a Responsible Development of AI; the Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems; the Harmonious Artificial Intelligence Principles (HAIP); and the Tenets of the Partnership on AI, 

Convinced that AI technologies can bring important benefits, but that achieving them can also amplify tension around innovation debt, asymmetric access to knowledge, barriers of rights to information and gaps in capacity of creativity in developing cycles, human and institutional capacities, barriers to access to technological innovation, and a lack of adequate physical and digital infrastructure and regulatory frameworks regarding data, 

Underlining that global cooperation and solidarity are needed to address the challenges that AI technologies bring in diversity and interconnectivity of cultures and ethical systems, to mitigate potential misuse, and to ensure that AI strategies and regulatory frameworks are not guided only by national and commercial interests and economic competition, 

Taking fully into account that the rapid development of AI technologies challenges their ethical implementation and governance, because of the diversity of ethical orientations and cultures around the world, the lack of agility of the law in relation to technology and knowledge societies, and the risk that local and regional ethical standards and values be disrupted by AI technologies,

1. Adopts the present Recommendation on the Ethics of Artificial Intelligence; 

2. Recommends that Member States apply the provisions of this Recommendation by taking appropriate steps, including whatever legislative or other measures may be required, in conformity with the constitutional practice and governing structures of each State, to give effect within their jurisdictions to the principles and norms of the Recommendation in conformity with international law, as well as constitutional practice; 

3. Also recommends that Member States ensure assumption of responsibilities by all stakeholders, including private sector companies in AI technologies, and bring the Recommendation to the attention of the authorities, bodies, research and academic organizations, institutions and organizations in public, private and civil society sectors involved in AI technologies, in order to guarantee that the development and use of AI technologies are guided by both sound scientific research as well as ethical analysis and evaluation.

I. SCOPE OF APPLICATION 

1. This Recommendation addresses ethical issues related to AI. It approaches AI ethics as a systematic normative reflection, based on a holistic and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies, and the environment and ecosystems, and offers them a basis to accept or reject AI technologies. Rather than equating ethics to law, human rights, or a normative add-on to technologies, it considers ethics as a dynamic basis for the normative evaluation and guidance of AI technologies, referring to human dignity, well-being and the prevention of harm as a compass and rooted in the ethics of science and technology. 

2. This Recommendation does not have the ambition to provide one single definition of AI, since such a definition would need to change over time, in accordance with technological developments. Rather, its ambition is to address those features of AI systems that are of central ethical relevance and on which there is large international consensus. Therefore, this Recommendation approaches AI systems as technological systems which have the capacity to process information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control. Three elements have a central place in this approach:

(a) AI systems are information-processing technologies that embody models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in real and virtual environments. AI systems are designed to operate with some aspects of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations. AI systems may include several methods, such as but not limited to: (i) machine learning, including deep learning and reinforcement learning, (ii) machine reasoning, including planning, scheduling, knowledge representation and reasoning, search, and optimization, and (iii) cyber-physical systems, including the Internet-of-Things, robotic systems, social robotics, and human-computer interfaces which involve control, perception, the processing of data collected by sensors, and the operation of actuators in the environment in which AI systems work. 

(b) Ethical questions regarding AI systems pertain to all stages of the AI system life cycle, understood here to range from research, design, and development to deployment and use, including maintenance, operation, trade, financing, monitoring and evaluation, validation, end-of-use, disassembly, and termination. In addition, AI actors can be defined as any actor involved in at least one stage of the AI life cycle, and can refer both to natural and legal persons, such as researchers, programmers, engineers, data scientists, end-users, large technology companies, small and medium enterprises, start-ups, universities, public entities, among others. 

(c) AI systems raise new types of ethical issues that include, but are not limited to, their impact on decision-making, employment and labour, social interaction, health care, education, media, freedom of expression, access to information, privacy, democracy, discrimination, and weaponization. Furthermore, new ethical challenges are created by the potential of AI algorithms to reproduce biases, for instance regarding gender, ethnicity, and age, and thus to exacerbate already existing forms of discrimination, identity prejudice and stereotyping. Some of these issues are related to the capacity of AI systems to perform tasks which previously only living beings could do, and which were in some cases even limited to human beings only. These characteristics give AI systems a profound, new role in human practices and society, as well as in their relationship with the environment and ecosystems, creating a new context for children and young people to grow up in, develop an understanding of the world and themselves, critically understand media and information, and learn to make decisions. In the long term, AI systems could challenge human’s special sense of experience and agency, raising additional concerns about human self-understanding, social, cultural and environmental interaction, autonomy, agency, worth and dignity. 

3. This Recommendation pays specific attention to the broader ethical implications of AI systems in relation to the central domains of UNESCO: education, science, culture, and communication and information, as explored in the 2019 Preliminary Study on the Ethics of Artificial Intelligence by the UNESCO World Commission on Ethics of Scientific Knowledge and Technology (COMEST): (a) Education, because living in digitalizing societies requires new educational practices, the need for ethical reflection, critical thinking, responsible design practices, and new skills, given the implications for the labour market and employability. (b) Science, in the broadest sense and including all academic fields from the natural sciences and medical sciences to the social sciences and humanities, as AI technologies bring new research capacities, have implications for our concepts of scientific understanding and explanation, and create a new basis for decision-making. (c) Cultural identity and diversity, as AI technologies can enrich cultural and creative industries, but can also lead to an increased concentration of supply of cultural content, data, markets, and income in the hands of only a few actors, with potential negative implications for the diversity and pluralism of languages, media, cultural expressions, participation and equality. (d) Communication and information, as AI technologies play an increasingly important role in the processing, structuring and provision of information, and the issues of automated journalism and the algorithmic provision of news and moderation and curation of content on social media and search engines are just a few examples raising issues related to access to information, disinformation, misinformation, misunderstanding, the emergence of new forms of societal narratives, discrimination, freedom of expression, privacy, and media and information literacy, among others. 

4. This Recommendation is addressed to States, both as AI actors and as responsible for developing legal and regulatory frameworks throughout the entire AI system life cycle, and for promoting business responsibility. It also provides ethical guidance to all AI actors, including the private sector, by providing a basis for an Ethical Impact Assessment of AI systems throughout their life cycle. 

II. AIMS AND OBJECTIVES 

5. This Recommendation aims to provide a basis to make AI systems work for the good of humanity, individuals, societies, and the environment and ecosystems; and to prevent harm. 

6. In addition to the ethical frameworks regarding AI that have already been developed by various organizations all over the world, this Recommendation aims to bring a globally accepted normative instrument that does not only focus on the articulation of values and principles, but also on their practical realization, via concrete policy recommendations, with a strong emphasis on issues of gender equality and protection of the environment and ecosystems. 

7. Because the complexity of the ethical issues surrounding AI necessitates the cooperation of multiple stakeholders across the various levels and sectors of international, regional and national communities, this Recommendation aims to enable stakeholders to take shared responsibility based on a global and intercultural dialogue. 

8. The objectives of this Recommendation are: (a) to provide a universal framework of values, principles and actions to guide States in the formulation of their legislation, policies or other instruments regarding AI; (b) to guide the actions of individuals, groups, communities, institutions and private sector companies to ensure the embedding of ethics in all stages of the AI system life cycle; (c) to promote respect for human dignity and gender equality, to safeguard the interests of present and future generations, and to protect human rights, fundamental freedoms, and the environment and ecosystems in all stages of the AI system life cycle; (d) to foster multi-stakeholder, multidisciplinary and pluralistic dialogue about ethical issues relating to AI systems; and (e) to promote equitable access to developments and knowledge in the field of AI and the sharing of benefits, with particular attention to the needs and contributions of LMICs, including LDCs, LLDCs and SIDS. 

III. VALUES AND PRINCIPLES 

9. The values and principles included below should be respected by all actors in the AI system life cycle, in the first place, and be promoted through amendments to existing and elaboration of new legislation, regulations and business guidelines. This must comply with international law as well as with international human rights law, principles and standards, and should be in line with social, political, environmental, educational, scientific and economic sustainability objectives. 

10. Values play a powerful role as motivating ideals in shaping policy measures and legal norms. While the set of values outlined below thus inspires desirable behaviour and represents the foundations of principles, the principles unpack the values underlying them more concretely so that the values can be more easily operationalized in policy statements and actions. 

11. While all the values and principles outlined below are desirable per se, in any practical context there are inevitable trade-offs among them, requiring complex choices to be made about contextual prioritization, without compromising other principles or values in the process. Trade-offs should take account of concerns related to proportionality and legitimate purpose. To navigate such scenarios judiciously will typically require engagement with a broad range of appropriate stakeholders guided by international human rights law, standards and principles, making use of social dialogue, as well as ethical deliberation, due diligence, and impact assessment. 

12. The trustworthiness and integrity of the life cycle of AI systems, if achieved, work for the good of humanity, individuals, societies, and the environment and ecosystems, and embody the values and principles set out in this Recommendation. People should have good reason to trust that AI systems bring shared benefits, while adequate measures are taken to mitigate risks. An essential requirement for trustworthiness is that, throughout their life cycle, AI systems are subject to monitoring by governments, private sector companies, independent civil society and other stakeholders. As trustworthiness is an outcome of the operationalization of the principles in this document, the policy actions proposed in this Recommendation are all directed at promoting trustworthiness in all stages of the AI life cycle. 

III.1 VALUES 

Respect, protection and promotion of human dignity, human rights and fundamental freedoms 

13. The dignity of every human person constitutes a foundation for the indivisible system of human rights and fundamental freedoms and is essential throughout the life cycle of AI systems. Human dignity relates to the recognition of the intrinsic worth of each individual human being and thus dignity is not tied to sex, gender, language, religion, political or other opinion, national, ethnic, indigenous or social origin, sexual orientation and gender identity, property, birth, disability, age or other status. 

14. No human being should be harmed physically, economically, socially, politically, or mentally during any phase of the life cycle of AI systems. Throughout the life cycle of AI systems the quality of life of every human being should be enhanced, while the definition of “quality of life” should be left open to individuals or groups, as long as there is no violation or abuse of human rights, or the dignity of humans in terms of this definition. 

15. Persons may interact with AI systems throughout their life cycle and receive assistance from them such as care for vulnerable people, including but not limited to children, older persons, persons with disabilities or the ill. Within such interactions, persons should never be objectified, nor should their dignity be undermined, or human rights violated or abused. 

16. Human rights and fundamental freedoms must be respected, protected, and promoted throughout the life cycle of AI systems. Governments, private sector, civil society, international organizations, technical communities, and academia must respect human rights instruments and frameworks in their interventions in the processes surrounding the life cycle of AI systems. New technologies need to provide new means to advocate, defend and exercise human rights and not to infringe them. 

Environment and ecosystem flourishing 

17. Environmental and ecosystem flourishing should be recognized and promoted through the life cycle of AI systems. Furthermore, environment and ecosystems are the existential necessity for humanity and other living beings to be able to enjoy the benefits of advances in AI. 

18. All actors involved in the life cycle of AI systems must follow relevant international law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems, including but not limited to, its carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems. 

Ensuring diversity and inclusiveness 

19. Respect, protection and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems, at a minimum consistent with international human rights law, standards and principles, as well as demographic, cultural, gender and social diversity and inclusiveness. This may be done by promoting active participation of all individuals or groups based on sex, gender, language, religion, political or other opinion, national, ethnic, indigenous or social origin, sexual orientation and gender identity, property, birth, disability, age or other status, in the life cycle of AI systems. Any homogenizing tendency should be monitored and addressed. 

20. The scope of lifestyle choices, beliefs, opinions, expressions or personal experiences, including the optional use of AI systems and the co-design of these architectures should not be restricted in any way during any phase of the life cycle of AI systems. 

21. Furthermore, efforts should be made to overcome, and never exploit, the lack of necessary technological infrastructure, education and skills, as well as legal frameworks, in some communities, and particularly in LMICs, LDCs, LLDCs and SIDS. 

Living in harmony and peace 

22. AI actors should play an enabling role for harmonious and peaceful life, which is to ensure an interconnected future ensuring the benefit of all. The value of living in harmony and peace points to the potential of AI systems to contribute throughout their life cycle to the interconnectedness of all living creatures with each other and with the natural environment. 

23. The notion of humans being interconnected is based on the knowledge that every human belongs to a greater whole, which is diminished when others are diminished in any way. Living in harmony and peace requires an organic, immediate, uncalculated bond of solidarity, characterized by a permanent search for non-conflictual, peaceful relations, tending towards consensus with others and harmony with the natural environment in the broadest sense of the term. 

24. This value demands that peace should be promoted throughout the life cycle of AI systems, in so far as the processes of the life cycle of AI systems should not segregate, objectify, or undermine the safety of human beings, divide and turn individuals and groups against each other, or threaten the harmonious coexistence between humans, non-humans, and the natural environment, as this would negatively impact on humankind as a collective. 

III.2 PRINCIPLES 

Proportionality and do no harm 

25. It should be recognized that AI technologies do not necessarily, per se, ensure human and environmental and ecosystem flourishing. Furthermore, none of the processes related to the AI system life cycle shall exceed what is necessary to achieve legitimate aims or objectives and should be appropriate to the context. In the event of possible occurrence of any harm to human beings or the environment and ecosystems, the implementation of procedures for risk assessment and the adoption of measures in order to preclude the occurrence of such harm should be ensured. 

26. The choice of an AI method should be justified in the following ways: (a) The AI method chosen should be desirable and proportional to achieve a given legitimate aim; (b) The AI method chosen should not have a negative infringement on the foundational values captured in this document; (c) The AI method should be appropriate to the context and should be based on rigorous scientific foundations. In scenarios that involve life and death decisions, final human determination should apply. 

Safety and security 

27. Unwanted harms (safety risks) and vulnerabilities to attacks (security risks) should be avoided throughout the life cycle of AI systems to ensure human and environmental and ecosystem safety and security. Safe and secure AI will be enabled by the development of sustainable, privacy-protective data access frameworks that foster better training of AI models utilizing quality data. 

Fairness and non-discrimination 

28. AI actors should promote social justice, by respecting fairness. Fairness implies sharing benefits of AI technologies at local, national and international levels, while taking into consideration the specific needs of different age groups, cultural systems, different language groups, persons with disabilities, girls and women, and disadvantaged, marginalized and vulnerable populations. At the local level, it is a matter of working to give communities access to AI systems in the languages of their choice and respecting different cultures. At the national level, governments are obliged to demonstrate equity between rural and urban areas, and among all persons without distinction as to sex, gender, language, religion, political or other opinion, national, ethnic, indigenous or social origin, sexual orientation and gender identity, property, birth, disability, age or other status, in terms of access to and participation in the AI system life cycle. At the international level, the most technologically advanced countries have an obligation of solidarity with the least advanced to ensure that the benefits of AI technologies are shared such that access to and participation in the AI system life cycle for the latter contributes to a fairer world order with regard to information, communication, culture, education, research, and socio-economic and political stability. 

29. AI actors should make all efforts to minimize and avoid reinforcing or perpetuating inappropriate socio-technical biases based on identity prejudice, throughout the life cycle of the AI system to ensure fairness of such systems. There should be a possibility to have a remedy against unfair algorithmic determination and discrimination. 

30. Furthermore, discrimination, digital and knowledge divides, and global inequalities need to be addressed throughout an AI system life cycle, including in terms of access to technology, data, connectivity, knowledge and skills, and participation of the affected communities as part of the design phase, such that every person is treated equitably. 

Sustainability 

31. The development of sustainable societies relies on the achievement of a complex set of objectives on a continuum of social, cultural, economic and environmental dimensions. The advent of AI technologies can either benefit sustainability objectives or hinder their realization, depending on how they are applied across countries with varying levels of development. The continuous assessment of the social, cultural, economic and environmental impact of AI technologies should therefore be carried out with full cognizance of the implications of AI technologies for sustainability as a set of constantly evolving goals across a range of dimensions, such as currently identified in the United Nations Sustainable Development Goals (SDGs). 

Privacy 

32. Privacy, a right essential to the protection of human dignity, human autonomy and human agency, must be respected, protected and promoted throughout the life cycle of AI systems both at the personal and collective level. It is crucial that data for AI is being collected, used, shared, archived and deleted in ways that are consistent with the values and principles set forth in this Recommendation. 

33. Adequate data protection frameworks and governance mechanisms should be established by regulatory agencies, at national or supranational level, protected by judicial systems, and ensured throughout the life cycle of AI systems. This protection framework and mechanisms concern the collection, control over, and use of data and exercise of their rights by data subjects and of the right for individuals to have personal data erased, ensuring a legitimate aim and a valid legal basis for the processing of personal data as well as for the personalization, and de- and re-personalization of data, transparency, appropriate safeguards for sensitive data, and effective independent oversight. 

34. Algorithmic systems require thorough privacy impact assessments which also include societal and ethical considerations of their use and an innovative use of the privacy by design approach. 

Human oversight and determination 

35. It must always be possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems to physical persons or to existing legal entities. Human oversight refers thus not only to individual human oversight, but to public oversight, as appropriate. 

36. It may be the case that sometimes humans would have to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability. 

Transparency and explainability 

37. The transparency of AI systems is often a crucial precondition to ensure that fundamental human rights and ethical principles are respected, protected and promoted. Transparency is necessary for relevant national and international liability legislation to work effectively. 

38. While efforts need to be made to increase transparency and explainability of AI systems throughout their life cycle to support democratic governance, the level of transparency and explainability should always be appropriate to the context, as some trade-offs exist between transparency and explainability and other principles such as safety and security. People have the right to be aware when a decision is being made on the basis of AI algorithms, and in those circumstances to require or request explanatory information from private sector companies or public sector institutions. 

39. From a socio-technical lens, greater transparency contributes to more peaceful, just and inclusive societies. It allows for public scrutiny that can decrease corruption and discrimination, and can also help detect and prevent negative impacts on human rights. Transparency may contribute to trust from humans for AI systems. Specific to the AI system, transparency can enable people to understand how each stage of an AI system is put in place, appropriate to the context and sensitivity of the AI system. It may also include insight into factors that impact a specific prediction or decision, and whether or not appropriate assurances (such as safety or fairness measures) are in place. In cases where serious adverse human rights impacts are foreseen, transparency may also require the sharing of specific code or datasets. 

40. Explainability refers to making intelligible and providing insight into the outcome of AI systems. The explainability of AI systems also refers to the understandability of the input, output and behaviour of each algorithmic building block and how it contributes to the outcome of the systems. Thus, explainability is closely related to transparency, as outcomes and sub-processes leading to outcomes should be understandable and traceable, appropriate to the use context. 

41. Transparency and explainability relate closely to adequate responsibility and accountability measures, as well as to the trustworthiness of AI systems. 

Responsibility and accountability 

42. AI actors should respect, protect and promote human rights and promote the protection of the environment and ecosystems, assuming ethical and legal responsibility in accordance with extant national and international law, in particular international human rights law, principles and standards, and ethical guidance throughout the life cycle of AI systems. The ethical responsibility and liability for the decisions and actions based in any way on an AI system should always ultimately be attributable to AI actors. 

43. Appropriate oversight, impact assessment, and due diligence mechanisms should be developed to ensure accountability for AI systems and their impact throughout their life cycle. Both technical and institutional designs should ensure auditability and traceability of (the working of) AI systems in particular to address any conflicts with human rights and threats to environmental and ecosystem well-being. 

Awareness and literacy 

44. Public awareness and understanding of AI technologies and the value of data should be promoted through open and accessible education, civic engagement, digital skills and AI ethics training, media and information literacy and training led jointly by governments, intergovernmental organizations, civil society, academia, the media, community leaders and the private sector, and considering the existing linguistic, social and cultural diversity, to ensure effective public participation so that all members of society can take informed decisions about their use of AI systems and be protected from undue influence. 

45. Learning about the impact of AI systems should include learning about, through and for human rights, meaning that the approach and understanding of AI systems should be grounded by their impact on human rights and access to rights. 

Multi-stakeholder and adaptive governance and collaboration 

46. International law and sovereignty should be respected in the use of data. Data sovereignty means that States, complying with international law, regulate the data generated within or passing through their territories, and take measures towards effective regulation of data based on respect for the right to privacy and other human rights. 

47. Participation of different stakeholders throughout the AI system life cycle is necessary for inclusive AI governance, sharing of benefits of AI, and fair technological advancement and its contribution to development goals. Stakeholders include but are not limited to governments, intergovernmental organizations, the technical community, civil society, researchers and academia, media, education, policy-makers, private sector companies, human rights institutions and equality bodies, anti-discrimination monitoring bodies, and groups for youth and children. The adoption of open standards and interoperability to facilitate collaboration must be in place. Measures must be adopted to take into account shifts in technologies, the emergence of new groups of stakeholders, and to allow for meaningful intervention by marginalized groups, communities and individuals. 

IV. AREAS OF POLICY ACTION 

48. The policy actions described in the following policy areas operationalize the values and principles set out in this Recommendation. The main action is for Member States to put in place policy frameworks or mechanisms and to ensure that other stakeholders, such as private sector companies, academic and research institutions, and civil society, adhere to them by, among other actions, assisting all stakeholders to develop ethical impact assessment and due diligence tools. The process for developing such policies or mechanisms should be inclusive of all stakeholders and should take into account the circumstances and priorities of each Member State. UNESCO can be a partner and support Member States in the development as well as monitoring and evaluation of policy mechanisms. 

49. UNESCO recognizes that Member States will be at different stages of readiness to implement this Recommendation, in terms of scientific, technological, economic, educational, legal, regulatory, infrastructural, societal, cultural and other dimensions. It is noted that “readiness” here is a dynamic status. In order to enable the effective implementation of this Recommendation, UNESCO will therefore: (1) develop a readiness assessment methodology to assist Member States in identifying their status at specific moments of their readiness trajectory along a continuum of dimensions; and (2) ensure support for Member States in terms of developing a globally accepted methodology for Ethical Impact Assessment (EIA) of AI technologies, sharing of best practices, assessment guidelines and other mechanisms and analytical work. 

POLICY AREA 1: ETHICAL IMPACT ASSESSMENT 

50. Member States should introduce impact assessments to identify and assess benefits, concerns and risks of AI systems, as well as risk prevention, mitigation and monitoring measures. The ethical impact assessment should identify impacts on human rights, in particular but not limited to the rights of vulnerable groups, labour rights, the environment and ecosystems, and ethical and social implications in line with the principles set forth in this Recommendation. 

51. Member States and private sector companies should develop due diligence and oversight mechanisms to identify, prevent, mitigate and account for how they address the impact of AI systems on human rights, rule of law and inclusive societies. Member States should also be able to assess the socio-economic impact of AI systems on poverty and ensure that the gap between people living in wealth and poverty, as well as the digital divide among and within countries are not increased with the massive adoption of AI technologies at present and in the future. In order to do this, enforceable transparency protocols should be implemented, corresponding to the right of access to information, including information of public interest held by private entities. 

52. Member States and private sector companies should implement proper measures to monitor all phases of an AI system life cycle, including the behaviour of algorithms used for decision-making, the data, as well as AI actors involved in the process, especially in public services and where direct end-user interaction is needed. 

53. Governments should adopt a regulatory framework that sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems to predict consequences, mitigate risks, avoid harmful consequences, facilitate citizen participation and address societal challenges. The assessment should also establish appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as external review of AI systems. Ethical impact assessments carried out by public authorities should be transparent and open to the public. Such assessments should also be multidisciplinary, multi-stakeholder, multicultural, pluralistic and inclusive. Member States are encouraged to put in place mechanisms and tools, for example regulatory sandboxes or testing centres, which would enable impact monitoring and assessment in a multidisciplinary and multi-stakeholder fashion. The public authorities should be required to monitor the AI systems implemented and/or deployed by those authorities by introducing appropriate mechanisms and tools. 

54. Member States should establish monitoring and evaluation mechanisms for initiatives and policies related to AI ethics. Possible mechanisms include: a repository covering human rights-compliant and ethical development of AI systems; a lessons sharing mechanism for Member States to seek feedback from other Member States on their policies and initiatives; a guide for all AI actors to assess their adherence to policy recommendations mentioned in this document; and follow-up tools. International human rights law, standards and principles should form part of the ethical aspects of AI system assessments. 

POLICY AREA 2: ETHICAL GOVERNANCE AND STEWARDSHIP 

55. Member States should ensure that any AI governance mechanism is inclusive, transparent, multidisciplinary, multilateral (this includes the possibility of mitigation and redress of harm across borders), and multi-stakeholder. Governance should include aspects of anticipation, protection, monitoring of impact, enforcement and redressal. 

56. Member States should ensure that harms caused to users through AI systems are investigated and redressed, by enacting strong enforcement mechanisms and remedial actions, to make certain that human rights and the rule of law are respected in the digital world as they are in the physical world. Such mechanisms and actions should include remediation mechanisms provided by private sector companies. The auditability and traceability of AI systems should be promoted to this end. In addition, Member States should strengthen their institutional capacities to deliver on this duty of care and should collaborate with researchers and other stakeholders to investigate, prevent and mitigate any potentially malicious uses of AI systems. 

57. Member States are encouraged to consider forms of soft governance such as a certification mechanism for AI systems and the mutual recognition of their certification, according to the sensitivity of the application domain and expected impact on human lives, the environment and ecosystems, and other ethical considerations set forth in this Recommendation. Such a mechanism might include different levels of audit of systems, data, and adherence to ethical guidelines, and should be validated by authorized parties in each country. At the same time, such a mechanism must not hinder innovation or disadvantage small and medium enterprises or start-ups by requiring large amounts of paperwork. These mechanisms would also include a regular monitoring component to ensure system robustness and continued integrity and adherence to ethical guidelines over the entire life cycle of the AI system, requiring re-certification if necessary. 

58. Governments and public authorities should be required to carry out self-assessments of existing and proposed AI systems, which, in particular, should include an assessment of whether the adoption of AI is appropriate and, if so, further assessment to determine the appropriate method, as well as an assessment of whether such adoption transgresses any human rights law, standards or principles. 

59. Member States should encourage public entities, private sector companies and civil society organizations to involve different stakeholders in their AI governance and to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee ethical impact assessment, auditing and continuous monitoring efforts and ensure ethical guidance of AI systems. Member States, private sector companies and civil society organizations, with the support of UNESCO, are encouraged to create a network of independent AI Ethics Officers to give support to this process at national, regional and international levels. 

60. Member States should foster the development of, and access to, a digital ecosystem for ethical development of AI systems at the national level, while encouraging international collaboration. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, Member States should consider reviewing their policies and regulatory frameworks, including on access to information and open government to reflect AI-specific requirements and promoting mechanisms, such as open repositories for publicly-funded or publicly-held data and source code and data trusts, to support the safe, fair, legal and ethical sharing of data, among others. 

61. Member States should establish mechanisms, in collaboration with international organizations, transnational corporations, academic institutions and civil society, to ensure the active participation of all Member States, especially LMICs, in particular LDCs, LLDCs and SIDS, in international discussions concerning AI governance. This can be through the provision of funds, ensuring equal regional participation, or any other mechanisms. Furthermore, in order to ensure the inclusiveness of AI fora, Member States should facilitate the travel of AI actors in and out of their territory, especially from LMICs, in particular LDCs, LLDCs and SIDS, for the purpose of participating in these fora. 

62. Amendments to existing or elaboration of new national legislation addressing AI systems must comply with international human rights law and promote human rights and fundamental freedoms throughout the AI system life cycle. Promotion thereof should also take the form of governance initiatives, good exemplars of collaborative practices regarding AI systems, and national and international technical and methodological guidelines as AI technologies advance. Diverse sectors, including the private sector, in their practices regarding AI systems must respect, protect and promote human rights and fundamental freedoms using existing and new instruments in combination with this Recommendation. 

63. Member States should provide mechanisms for the monitoring and oversight of the human rights and the social and economic impacts of AI, and other governance mechanisms such as independent data protection authorities, sectoral oversight, public bodies for the oversight of the acquisition of AI systems for human rights-sensitive use cases (such as criminal justice, law enforcement, welfare, employment and health care, among others), and an independent judiciary. 

64. Member States should ensure that governments and multilateral organizations play a leading role in guaranteeing the safety and security of AI systems. Specifically, Member States, international organizations and other relevant bodies should develop international standards that describe measurable, testable levels of safety and transparency, so that systems can be objectively assessed and levels of compliance determined. Furthermore, Member States should continuously support strategic research on potential safety and security risks of AI technologies and should encourage research into transparency and explainability by putting additional funding into those areas for different domains and at different levels, such as technical and natural language. 

65. Member States should implement policies to ensure that the actions of AI actors are consistent with international human rights law, standards and principles throughout the life cycle of AI systems, while demonstrating awareness and respect for the current cultural and social diversities including local customs and religious traditions. 

66. Member States should put in place mechanisms to require AI actors to disclose and combat any kind of stereotyping in the outcomes of AI systems and data, whether by design or by negligence, and to ensure that training data sets for AI systems do not foster cultural, economic or social inequalities, prejudice, the spreading of non-reliable information or the dissemination of anti-democratic ideas. Particular attention should be given to regions where the data are scarce. 

67. Member States should implement policies to promote and increase diversity in AI development teams and training datasets, and to ensure equal access to AI technologies and their benefits, particularly for marginalized groups, both from rural and urban zones. 

68. Member States should develop, review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their life cycle. Member States should introduce liability frameworks or clarify the interpretation of existing frameworks to ensure the attribution of accountability for the outcomes and behaviour of AI systems. Furthermore, when developing regulatory frameworks, Member States should, in particular, take into account that ultimate responsibility and accountability must always lie with natural or legal persons and that AI systems should not be given legal personality themselves. To ensure this, such regulatory frameworks should be consistent with the principle of human oversight and establish a comprehensive approach focused on the actors and the technological processes involved across the different stages of the AI systems life cycle. 

69. Member States should enhance the capacity of the judiciary to make decisions related to AI systems as per the rule of law and in line with international standards, including in the use of AI systems in their deliberations, while ensuring that the principle of human oversight is upheld. 

70. In order to establish norms where these do not exist, or to adapt existing legal frameworks, Member States should involve all AI actors (including, but not limited to, researchers, representatives of civil society and law enforcement, insurers, investors, manufacturers, engineers, lawyers, and users). The norms can mature into best practices, laws and regulations. Member States are further encouraged to use mechanisms such as policy prototypes and regulatory sandboxes to accelerate the development of laws, regulations and policies in line with the rapid development of new technologies and ensure that laws and regulations can be tested in a safe environment before being officially adopted. Member States should support local governments in the development of local policies, regulations, and laws in line with national and international legal frameworks. 

71. Member States should set clear requirements for AI system transparency and explainability so as to help ensure the trustworthiness of the full AI system life cycle. Such requirements should involve the design and implementation of impact mechanisms that take into consideration the nature of the application domain (Is this a high-risk domain such as law enforcement, security, education, recruitment and health care?), intended use (What are the risks in terms of transgression of safety and human rights?), target audience (Who is requesting the information?) and feasibility (Is the algorithm explainable or not, and what are the trade-offs between accuracy and explainability?) of each particular AI system. 
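
Purely as an illustrative sketch (and not something the draft Recommendation prescribes), the factors listed in paragraph 71 could be operationalized as a simple rule that maps an application's domain, impact on rights, audience and model explainability to a required transparency tier. The tiers, the list of high-risk domains and the function name below are assumptions for illustration only.

# Illustrative sketch only: mapping the factors named in paragraph 71 (domain,
# intended use, target audience, feasibility) to an assumed transparency tier.
# The tiers and the list of high-risk domains are assumptions, not a standard.

HIGH_RISK_DOMAINS = {"law enforcement", "security", "education", "recruitment", "health care"}

def required_transparency(domain: str, affects_rights: bool,
                          audience: str, model_is_explainable: bool) -> str:
    """Return an (assumed) transparency tier for a given AI application."""
    if domain.lower() in HIGH_RISK_DOMAINS or affects_rights:
        # High-risk uses: demand case-by-case explanations; if the chosen model
        # cannot provide them, flag the accuracy/explainability trade-off for review.
        return ("full explanation per decision" if model_is_explainable
                else "independent review required (model not explainable)")
    if audience == "regulator":
        return "technical documentation and audit access"
    return "general disclosure (system is AI-based, its purpose, a contact point)"

# Example use:
# required_transparency("recruitment", affects_rights=True,
#                       audience="end user", model_is_explainable=False)
# -> "independent review required (model not explainable)"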

POLICY AREA 3: DATA POLICY 

72. Member States should work to develop data governance strategies that ensure the continual evaluation of the quality of training data for AI systems including the adequacy of the data collection and selection processes, proper security and data protection measures, as well as feedback mechanisms to learn from mistakes and share best practices among all AI actors. Striking a balance between the collection of metadata and users’ privacy should be an upfront goal for such a strategy. 

73. Member States should put in place appropriate safeguards to recognize and protect individuals’ fundamental right to privacy, including through the adoption or the enforcement of legislative frameworks that provide appropriate protection, compliant with international law. Member States should strongly encourage all AI actors, including private sector companies, to follow existing international standards and in particular to carry out privacy impact assessments, as part of ethical impact assessments, which take into account the wider socio-economic impact of the intended data processing and to apply privacy by design in their systems. Privacy should be respected, protected and promoted throughout the life cycle of AI systems. 

74. Member States should ensure that individuals retain rights over their personal data and are protected by a framework which notably foresees transparency, appropriate safeguards for the processing of sensitive data, the highest level of data security, effective and meaningful accountability schemes and mechanisms, the full enjoyment of data subjects’ rights, in particular the right to access and the right to erasure of their personal data in AI systems, an appropriate level of protection while data are being used for commercial purposes (such as enabling micro-targeted advertising) or transferred cross-border, and effective, independent oversight as part of a data governance mechanism which respects data sovereignty and balances this with the benefits of a free flow of information internationally, including access to data. 

75. Member States should establish their data policies or equivalent frameworks, or reinforce existing ones, to ensure increased security for personal data and sensitive data, which if disclosed, may cause exceptional damage, injury or hardship to a person. Examples include data relating to offences, criminal proceedings and convictions, and related security measures; biometric and genetic data; personal data relating to ethnic or social origin, political opinions, trade union membership, religious and other beliefs, health and sexual life. 

76. Member States should use AI systems to improve access to information and knowledge, including of their data holdings, and address gaps in access to the AI system life cycle. This can include support to researchers and developers to enhance freedom of expression and access to information, and increased proactive disclosure of official data and information. Member States should also promote open data, including through developing open repositories for publicly-funded or publicly-held data and source code. 

77. Member States should ensure the overall quality and robustness of datasets for AI, and exercise vigilance in overseeing their collection and use. This could, where possible and feasible, include investing in the creation of gold-standard datasets, including open and trustworthy datasets, that are diverse, constructed on a valid legal basis and, where required by law, with the consent of data subjects. Standards for annotating datasets should be encouraged, so that it can easily be determined how a dataset was gathered and what properties it has. 
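
As a hedged sketch of what the dataset annotation standards encouraged in paragraph 77 might record in machine-readable form, the example below lists a handful of plausible fields (collection method, legal basis, consent, known gaps). The field names are assumptions loosely inspired by existing "datasheets for datasets" practice, not anything mandated by the draft Recommendation.

# Illustrative sketch only: a minimal, machine-readable "datasheet" describing how
# a dataset was gathered and what properties it has. All field names are assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetSheet:
    name: str
    collection_method: str          # e.g. "survey", "web scrape", "sensor logs"
    legal_basis: str                # e.g. "informed consent", "statutory authority"
    consent_obtained: bool          # whether data subjects consented, where required
    personal_data: bool             # whether the dataset contains personal data
    known_gaps: list = field(default_factory=list)   # under-represented groups/regions
    licence: str = "unspecified"

sheet = DatasetSheet(
    name="example-dataset-v1",
    collection_method="operator records",
    legal_basis="informed consent",
    consent_obtained=True,
    personal_data=True,
    known_gaps=["rural areas under-represented"],
)
print(json.dumps(asdict(sheet), indent=2))   # publish alongside the dataset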

78. Member States, as also suggested in the report of the UNSG’s High-level Panel on Digital Cooperation, with the support of the United Nations and UNESCO, should adopt a Digital Commons approach to data where appropriate, increase interoperability of tools and datasets and interfaces of systems hosting data, and encourage private sector companies to share the data they collect as appropriate for research or public benefits. They should also promote public and private efforts to create collaborative platforms to share quality data in trusted and secured data spaces. 

POLICY AREA 4: DEVELOPMENT AND INTERNATIONAL COOPERATION 

79. Member States and transnational corporations should prioritize AI ethics by including discussions of AI-related ethical issues in relevant international, intergovernmental and multi-stakeholder fora. 

80. Member States should ensure that the use of AI in areas of development such as health care, agriculture/food supply, education, media, culture, environment, water management, infrastructure management, economic planning and growth, and others, adheres to the values and principles set forth in this Recommendation. 

81. Member States should work through international organizations to provide platforms for international cooperation on AI for development, including by contributing expertise, funding, data, domain knowledge, infrastructure, and facilitating collaboration between technical and business experts to tackle challenging development problems, especially for LMICs, in particular LDCs, LLDCs and SIDS. 

82. Member States should work to promote international collaboration on AI research and innovation, including research and innovation centres and networks that promote greater participation and leadership of researchers from LMICs and other regions, including LDCs, LLDCs and SIDS. 

83. Member States should promote AI ethics research by international organizations and research institutions, as well as transnational corporations, that can be a basis for the ethical use of AI systems by public and private entities, including research into the applicability of specific ethical frameworks in specific cultures and contexts, and the possibilities to match these frameworks to technologically feasible solutions. 

84. Member States should encourage international cooperation and collaboration in the field of AI to bridge geo-technological lines. Technological exchanges/consultations should take place between Member States and their populations, between the public and private sectors, and between and among Member States in the Global North and Global South. 

85. Member States should develop and implement an international legal framework to encourage international cooperation between States and other stakeholders paying special attention to the situation of LMICs, in particular LDCs, LLDCs and SIDS. 

POLICY AREA 5: ENVIRONMENT AND ECOSYSTEMS 

86. Member States should assess the direct and indirect environmental impact throughout the AI system life cycle, including but not limited to, its carbon footprint, energy consumption, and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. They should ensure compliance of all AI actors with environmental law, policies, and practices. 

87. Member States should introduce incentives, when needed and appropriate, to ensure the development and adoption of rights-based and ethical AI-powered solutions for disaster risk resilience; the monitoring, protection and regeneration of the environment and ecosystems; and the preservation of the planet. These AI systems should involve the participation of local and indigenous communities throughout their life cycle and should support circular economy type approaches and sustainable consumption and production patterns. Some examples include using AI systems, when needed and appropriate, to: (a) Support the protection, monitoring, and management of natural resources. (b) Support the prevention, control, and management of climate-related problems. (c) Support a more efficient and sustainable food ecosystem. (d) Support the acceleration of access to and mass adoption of sustainable energy. (e) Enable and promote the mainstreaming of sustainable infrastructure, sustainable business models, and sustainable finance for sustainable development. (f) Detect pollutants or predict levels of pollution and thus help relevant stakeholders identify, plan and put in place targeted interventions to prevent and reduce pollution and exposure. 

88. When choosing AI methods, given the data-intensive or resource-intensive character of some of them and the respective impact on the environment, Member States should ensure that AI actors, in line with the principle of proportionality, favour data, energy and resource-efficient AI methods. Requirements should be developed to ensure that appropriate evidence is available showing that an AI application will have the intended effect, or that safeguards accompanying an AI application can support the justification. 

POLICY AREA 6: GENDER 

89. Member States should ensure that digital technologies and artificial intelligence fully contribute to achieving gender equality, and that the rights and fundamental freedoms of girls and women, including their safety and integrity, are not violated at any stage of the AI system life cycle. Moreover, Ethical Impact Assessments should include a transversal gender perspective. 

90. Member States should have dedicated funds from their public budgets linked to financing gender-related schemes, should ensure that national digital policies include a gender action plan, and should develop specific policies, e.g. on labour education, targeted at supporting girls and women so that they are not left out of the digital economy powered by AI. Special investment in targeted programmes and gender-specific language to increase the opportunities for girls and women to participate in science, technology, engineering and mathematics (STEM), including information and communication technologies (ICT) disciplines, and in preparedness, employability, career development and professional growth, should be considered and implemented. 

91. Member States should ensure that the potential of AI systems to improve gender equality is realized. They should guarantee that these technologies do not exacerbate the already wide gender gaps existing in several fields in the analogue world. These include the gender wage gap; the gap in representation in certain professions and activities; the lack of representation in top management positions, boards of directors and research teams in the AI field; the education gap; the gap in digital/AI access, adoption, usage and affordability; and the unequal distribution of unpaid work and of caring responsibilities in our societies. 

92. Member States should ensure that gender stereotyping and discriminatory biases are not translated into AI systems. Efforts are necessary to avoid the compounding negative effect of technological divides on achieving gender equality and on preventing violence against girls and women, and against people of all other gender identities. 

93. Member States should encourage female entrepreneurship, participation and engagement in all stages of the AI system life cycle by offering and promoting economic and regulatory incentives, among other incentives and support schemes, as well as policies that aim at balanced gender participation in AI research in academia and gender representation in the top management positions, boards of directors and research teams of digital/AI companies. Governments should ensure that public funds (for innovation, research and technologies) are channelled to inclusive programmes and companies with clear gender representation, and that private funds are encouraged through affirmative action principles. Moreover, policies on harassment-free environments should be developed and enforced, together with the encouragement of the transfer of best practices on how to promote diversity throughout the AI system life cycle. 

94. UNESCO can help form a repository of best practices for incentivizing the participation of women and under-represented groups at all stages of the AI life cycle. 

POLICY AREA 7: CULTURE 

95. Member States are encouraged to incorporate AI systems where appropriate in the preservation, enrichment, understanding, promotion and accessibility of tangible, documentary and intangible cultural heritage, including endangered languages as well as indigenous languages and knowledge, for example by introducing or updating educational programmes related to the application of AI systems in these areas where appropriate and ensuring a participatory approach, targeted at institutions and the public. 

96. Member States are encouraged to examine and address the cultural impact of AI systems, especially Natural Language Processing applications such as automated translation and voice assistants, on the nuances of human language and expression. Such assessments should provide input for the design and implementation of strategies that maximize the benefits from these systems by bridging cultural gaps and increasing human understanding, and that address negative implications such as reduced use, which could lead to the disappearance of endangered languages, local dialects, and the tonal and cultural variations associated with human language and expression. 

97. Member States should promote AI education and digital training for artists and creative professionals to assess the suitability of AI technologies for use in their profession as AI technologies are being used to create, produce, distribute and broadcast a variety of cultural goods and services, bearing in mind the importance of preserving cultural heritage, diversity and artistic freedom. 

98. Member States should promote awareness and evaluation of AI tools among local cultural industries and small and medium enterprises working in the field of culture, to avoid the risk of concentration in the cultural market. 

99. Member States should engage large technology companies and other stakeholders to promote a diverse supply and plural access to cultural expressions, and in particular to ensure that algorithmic recommendation enhances the visibility and discoverability of local content. 

100. Member States should foster new research at the intersection between AI and intellectual property, for example to determine who are the rights-holders of the works created by means of AI technologies among the different stakeholders throughout the AI life cycle. 

101. Member States should encourage museums, galleries, libraries and archives at the national level to develop and use AI systems to highlight their collections, strengthen their databases and grant access to them for their users. 

POLICY AREA 8: EDUCATION AND RESEARCH 

102. Member States should work with international organizations, private and non-governmental entities to provide adequate AI literacy education to the public in all countries in order to empower people and reduce the digital divide and digital access inequalities resulting from the wide adoption of AI systems. 

103. Member States should promote the acquisition of “prerequisite skills” for AI education, such as basic literacy, numeracy, coding and digital skills, and media and information literacy, as well as critical thinking, teamwork, communication, socio-emotional, and AI ethics skills, especially in countries where there are notable gaps in the education of these skills. 

104. Member States should promote general awareness programmes about AI developments, including on the opportunities and challenges brought about by AI technologies. These programmes should be accessible to non-technical as well as technical groups. 

105. Member States should encourage research initiatives on the responsible use of AI technologies in teaching, teacher training and e-learning among other topics, in a way that enhances opportunities and mitigates the challenges and risks involved in this area. The initiatives should be accompanied by an adequate assessment of the quality of education and of impact on students and teachers of the use of AI technologies. Member States should also ensure that AI technologies empower students and teachers and enhance their experience, bearing in mind that emotional and social aspects and the value of traditional forms of education are vital in the teacher-student and student-student relationships, and should be considered when discussing the adoption of AI technologies in education. 

106. Member States should promote the participation of girls and women, diverse ethnicities and cultures, and persons with disabilities, in AI education programmes at all levels, as well as the monitoring and sharing of best practices in this regard with other Member States. 

107. Member States should develop, in accordance with their national education programmes and traditions, AI ethics curricula for all levels, and promote cross-collaboration between AI technical skills education and humanistic, ethical and social aspects of AI education. Online courses and digital resources of AI ethics education should be developed in local languages, especially in accessible formats for persons with disabilities. 

108. Member States should promote AI ethics research either through investing in such research or by creating incentives for the public and private sectors to invest in this area. 

109. Member States should ensure that AI researchers are trained in research ethics and require them to include ethical considerations in their designs, products and publications, especially in the analyses of the datasets they use, how they are annotated and the quality and the scope of the results. 

110. Member States should encourage private sector companies to facilitate the access of the scientific community to their data for research, especially in LMICs, in particular LDCs, LLDCs and SIDS. This access should not be at the expense of privacy. 

111. Member States should promote gender diversity in AI research in academia and industry by offering incentives to girls and women to enter the field, putting in place mechanisms to fight gender stereotyping and harassment within the AI research community, and encouraging academic and private entities to share best practices on how to enhance gender diversity. 

112. To ensure a critical evaluation of AI research and proper monitoring of potential misuses or adverse effects, Member States should ensure that any future developments with regard to AI technologies are based on rigorous scientific research, and should promote interdisciplinary AI research by including disciplines other than science, technology, engineering, and mathematics (STEM), such as cultural studies, education, ethics, international relations, law, linguistics, philosophy, and political science. 

113. Recognizing that AI technologies present great opportunities to help advance scientific knowledge and practice, especially in traditionally model-driven disciplines, Member States should encourage scientific communities to be aware of the benefits, limits and risks of their use; this includes attempting to ensure that conclusions drawn from data-driven approaches are robust and sound. Furthermore, Member States should welcome and support the role of the scientific community in contributing to policy, and in cultivating awareness of the strengths and weaknesses of AI technologies. 

POLICY AREA 9: ECONOMY AND LABOUR 

114. Member States should assess and address the impact of AI systems on labour markets and its implications for education requirements, in all countries and with special emphasis on countries where the economy is labour-intensive. This can include the introduction of a wider range of “core” and interdisciplinary skills at all education levels to provide current workers and new generations a fair chance of finding jobs in a rapidly changing market and to ensure their awareness of the ethical aspects of AI systems. Skills such as “learning how to learn”, communication, critical thinking, teamwork, empathy, and the ability to transfer one’s knowledge across domains, should be taught alongside specialist, technical skills, as well as low-skilled tasks such as labelling datasets. Being transparent about what skills are in demand and updating curricula around these are key. 

115. Member States should support collaboration agreements among governments, academic institutions, industry, workers’ organizations and civil society to bridge the skills gap by aligning training programmes and strategies with the implications of the future of work and the needs of industry. Project-based teaching and learning approaches for AI should be promoted, allowing for partnerships between private sector companies, universities and research centres. 

116. Member States should work with private sector companies, civil society organizations and other stakeholders, including workers and unions, to ensure a fair transition for at-risk employees. This includes putting in place upskilling and reskilling programmes, finding effective mechanisms for retaining employees during those transition periods, and exploring “safety net” programmes for those who cannot be retrained. Member States should develop and implement programmes to research and address the challenges identified, which could include upskilling and reskilling, enhanced social protection, proactive industry policies and interventions, tax benefits and new forms of taxation, among others. Tax regimes and other relevant regulations should be carefully examined and changed if needed to counteract the consequences of unemployment caused by AI-based automation. 

117. Member States should encourage and support researchers to analyse the impact of AI systems on the local labour environment in order to anticipate future trends and challenges. These studies should investigate the impact of AI systems on economic, social and geographic sectors, as well as on human-robot interactions and human-human relationships, in order to advise on reskilling and redeployment best practices. 

118. Member States should devise mechanisms to prevent the monopolization of AI systems throughout their life cycle and the resulting inequalities, whether these are data, research, technology, market or other monopolies. Member States should assess relevant markets, and regulate and intervene if such monopolies exist, taking into account that, due to a lack of infrastructure, human capacity and regulations, LMICs, in particular LDCs, LLDCs and SIDS are more exposed and vulnerable to exploitation by large technology companies. 

POLICY AREA 10: HEALTH AND SOCIAL WELL-BEING 

119. Member States should endeavour to employ effective AI systems for improving human health and protecting the right to life, while building and maintaining international solidarity to tackle global health risks and uncertainties, and ensure that their deployment of AI systems in health care be consistent with international law and international human rights law, standards and principles. Member States should ensure that actors involved in health care AI systems take into consideration the importance of a patient’s relationships with their family and with health care staff. 

120. Member States should regulate the development and deployment of AI systems related to health in general, and to mental health in particular, to ensure that they are safe, effective, efficient, and scientifically and medically sound. Moreover, in the related area of digital health interventions, Member States are strongly encouraged to actively involve patients and their representatives in all relevant steps of the development of the system. 

121. Member States should pay particular attention to regulating prediction, detection and treatment solutions for health care in AI applications by: (a) ensuring oversight to minimize bias; (b) ensuring that the professional, the patient, the caregiver or the service user is included as a “domain expert” in the team when developing the algorithms; (c) paying due attention to privacy, because of the potential need for constant monitoring; (d) ensuring that those whose data are being analysed are aware of, and provide informed consent to, the tracking and analysis of their data; and (e) ensuring that final decisions on diagnosis and treatment, and the provision of human care, remain with humans, while acknowledging that AI systems can assist in their work. 

122. Member States should establish research on the effects, and the regulation, of potential harms to mental health related to AI systems, such as increased depression, anxiety, social isolation, the development of addiction, trafficking, radicalization and misinformation, among others. 

123. Member States should develop guidelines for human-robot interactions and their impact on human-human relationships, based on research and directed at the future development of robots, with special attention to the mental and physical health of human beings, especially regarding robots in health care and the care for older persons and persons with disabilities, and regarding educational robots, toy robots, chatbots, and companion robots for children and adults. Furthermore, assistance of AI technologies should be applied to increase the safety and ergonomic use of robots, including in a human-robot working environment. 

124. Member States should ensure that human-robot interactions comply with the same values and principles that apply to any other AI systems, including human rights, the promotion of diversity in relationships, and the protection of vulnerable groups. 

125. Member States should protect the right of users to easily identify whether they are interacting with a living being, or with an AI system imitating human or animal characteristics. 

126. Member States should implement policies to raise awareness about the anthropomorphization of AI technologies, including in the language used to mention them, and assess the manifestations, ethical implications and possible limitations of such anthropomorphization in particular in the context of robot-human interaction and especially when children are involved. 

127. Member States should encourage and promote collaborative research into the effects of long-term interaction of people with AI systems, paying particular attention to the psychological and cognitive impact that these systems can have on children and young people. This should be done using multiple norms, principles, protocols, disciplinary approaches, and assessment of the modification of behaviours and habits, as well as careful evaluation of the downstream cultural and societal impacts. 

128. Member States, as well as all stakeholders, should put in place mechanisms to meaningfully engage children and young people in conversations, debates, and decision-making with regards to the impact of AI systems on their lives and futures. 

129. Member States should promote the accountable use of AI systems to counter online hate speech and disinformation, and should also ensure that AI systems are not used to produce and spread such content, particularly in times of elections. 

130. Member States should create enabling environments for media to have the rights and resources to effectively report on the benefits and harms of AI systems, and also to make use of AI systems in their reporting. 

V. MONITORING AND EVALUATION 

131. Member States should, according to their specific conditions, governing structures and constitutional provisions, credibly and transparently monitor and evaluate policies, programmes and mechanisms related to ethics of AI using a combination of quantitative and qualitative approaches. In support of Member States, UNESCO can contribute by: (a) developing a globally accepted methodology for Ethical Impact Assessment (EIA) of AI technologies, including guidance for its implementation in all stages of the AI system life cycle, based on rigorous scientific research; (b) developing a readiness methodology to assist Member States in identifying their status at specific moments of their readiness trajectory along a continuum of dimensions; (c) developing a globally accepted methodology to evaluate ex ante and ex post the effectiveness and efficiency of the policies for AI ethics and incentives against defined objectives; (d) strengthening the research- and evidence-based analysis of and reporting on policies regarding AI ethics, including the publication of a comparative index; and (e) collecting and disseminating progress, innovations, research reports, scientific publications, data and statistics regarding policies for AI ethics, to support sharing best practices and mutual learning, and to advance the implementation of this Recommendation. 

132. Processes for monitoring and evaluation should ensure broad participation of relevant stakeholders, including, but not limited to, people of different age groups, girls and women, persons with disabilities, disadvantaged, marginalized and vulnerable populations, indigenous communities, as well as people from diverse socio-economic backgrounds. Social, cultural, and gender diversity must be ensured, with a view to improving learning processes and strengthening the connections between findings, decision-making, transparency and accountability for results. 

133. In the interests of promoting best policies and practices related to ethics of AI, appropriate tools and indicators should be developed for assessing the effectiveness and efficiency thereof against agreed standards, priorities and targets, including specific targets for persons belonging to disadvantaged, marginalized and vulnerable groups, as well as the impact of AI systems at individual and societal levels. The monitoring and assessment of the impact of AI systems and related AI ethics policies and practices should be carried out continuously in a systematic way. This should be based on internationally agreed frameworks and involve evaluations of private and public institutions, providers and programmes, including self-evaluations, as well as tracer studies and the development of sets of indicators. Data collection and processing should be conducted in accordance with national legislation on data protection and data privacy. 

134. The possible mechanisms for monitoring and evaluation may include an AI ethics observatory, or contributions to existing initiatives by addressing adherence to ethical principles across UNESCO’s areas of competence, an experience-sharing mechanism for Member States to provide feedback on each other’s initiatives, AI regulatory sandboxes, and an assessment guide for all AI actors to evaluate their adherence to policy recommendations mentioned in this document. 

VI. UTILIZATION AND EXPLOITATION OF THE PRESENT RECOMMENDATION 

135. Member States and all other stakeholders as identified in this Recommendation must respect, promote and protect the ethical principles and standards regarding AI that are identified in this Recommendation, and should take all feasible steps to give effect to its policy recommendations. 

136. Member States should strive to extend and complement their own action in respect of this Recommendation, by cooperating with all national and international governmental and non-governmental organizations, as well as transnational corporations and scientific organizations, whose activities fall within the scope and objectives of this Recommendation. The development of a globally accepted Ethical Impact Assessment methodology and the establishment of national commissions for the ethics of technology can be important instruments for this.