06 July 2019


As a point of reference for current controversy about 'Begging gangs' -
Summary Offences Act 1923 (NT) 
s 56 Offences 
(1) Any person who:
(c) wanders abroad, or from house to house, or places himself in any public place, street, highway, court, or passage, to beg or gather alms, or causes or procures or encourages any child so to do; 
(e) has on or about his person, without lawful excuse (proof whereof shall lie upon the person charged), any deleterious drug, or any article of disguise; or 
(i) habitually consorts with reputed criminals, 
shall be guilty of an offence. 
Penalty: 500 dollars or imprisonment for 3 months, or both.
Summary Offences Act 2005 (Qld) 
s 8 Begging in a public place 
(1) A person must not—
(a) beg for money or goods in a public place; or 
(b) cause, procure or encourage a child to beg for money or goods in a public place; or 
(c) solicit donations of money or goods in a public place. 
Maximum penalty—10 penalty units or 6 months imprisonment. 
(2) Subsection (1)(c) does not apply to a person who— 
(a) is an individual authorised by a charity registered under the Collections Act 1966 to solicit donations for the charity; or 
(b) is authorised by a local government to busk in a public place. 
(3) In this section— "procure" includes— (a) enable; and (b) facilitate. 
Police Offences Act 1935 (Tas) 
s 8 Begging, imposition 
(1) A person shall not – (a) in a public place beg or expose wounds or deformities, or place himself or herself or otherwise act so as to induce, or attempt to induce, the giving of money or other financial advantage, or instigate or incite another person to do any of those things; 
(1AA) A person who contravenes a provision of subsection (1) is guilty of an offence and is liable on summary conviction to a penalty not exceeding 5 penalty units or to imprisonment for a term not exceeding 6 months. 
Summary Offences Act 1953 (SA)
s 12 — Begging alms
(1) A person who—
(a) begs or gathers alms in a public place; or 
(b) is in a public place for the purpose of begging or gathering alms; or 
(c) goes from house to house begging or gathering alms; or 
(d) causes or encourages a child to beg or gather alms in a public place, or to be in a public place for the purpose of begging or gathering alms; or 
(e) exposes wounds or deformities with the object of obtaining alms, 
is guilty of an offence. 
Maximum penalty: $250. 
(2) In this section— "house" includes a building or any separately occupied part of a building. 
Summary Offences Act 1966 (Vic)
s 49A Begging or gathering alms 
(1) A person must not beg or gather alms. 
Penalty: 12 months imprisonment. 
(2) A person must not cause, procure or encourage a child to beg or gather alms. 
Penalty: 12 months imprisonment. 
Public Transport Authority Regulations 2003 (WA) 
r 14 Begging and busking prohibited 
Unless authorised in writing by the chief executive officer, a person who begs or busks in or on a conveyance or a facility commits an offence. 
Modified penalty: a fine of $100. 
Penalty: a fine of $500.
On 27 June this year the Tasmanian Government announced that "following the introduction of the Police Offences Amendment (Begging) Bill 2018"  the Department of Police, Fire and Emergency Management has reviewed the existing Act and briefed the Government, which will "now proceed to remove the current offence of begging from the Police Offences Act 1935".
 Tasmania Police advise that they will, however, still need to ensure they have power to move people on if they are intimidating, creating a nuisance or otherwise harassing other people.

Partner Surveillance

Installing Fear: A Canadian Legal and Policy Analysis of Using, Developing, and Selling Smartphone Spyware and Stalkerware Applications by Cynthia Khoo, Kate Robertson and Ron Deibert comments
 This report provides an in-depth legal and policy analysis of technology-facilitated intimate partner surveillance (IPS) under Canadian law. In particular, the analysis focuses on a growing marketplace of spyware products that exists online and in major software application (app) stores. These apps are designed to facilitate remote surveillance of an individual’s mobile device use with the surveillance often being covert or advertised as such. Despite increasing recognition of the prevalence of technology-enabled intimate partner abuse and harassment, the legality of the creation, sale, and use of consumer-level spyware apps has not yet been closely considered by Canadian courts, legislators, or regulators. 
Spyware and other forms of technology that facilitate IPS are sometimes referred to as stalkerware. In some circumstances, stalkerware technology is used in an intimate relationship to conduct powerfully intrusive covert or coerced surveillance of an intimate or former partner’s mobile device without their knowledge. Once installed, stalkerware apps allow an operator to access an array of intimately personal information about the surveillance target. The apps can enable real-time and remote access to text messages, emails, photos, videos, incoming and outgoing phone calls, GPS location, banking or other account passwords, social media accounts, and more. Stalkerware apps are sometimes used covertly while, in other circumstances, the technology is used openly to intimidate, harass, or extort the surveillance target. 
Hundreds of spyware apps relevant to IPS are available at the consumer level. Research conducted in Canada and internationally suggests that a significant proportion of women who experience intimate partner violence, abuse, and harassment also report experiences with a range of technology-facilitated abuse, including surveillance and abuse that is enabled by the powerful mobile device spyware apps that are the focus of this report. Despite this troubling context, few reported cases involving spyware-enabled IPS have appeared in Canadian courts, and spyware companies, which profit from the sale of these apps, appear to operate in the Canadian marketplace without being hindered by criminal or regulatory law enforcement. 
This report conducts an in-depth analysis of the criminal, regulatory, and civil law consequences of using, creating, selling, or facilitating the sale of stalkerware technology in Canada. The analysis concludes that the creation, use, and sale of spyware apps that enable covert surveillance of mobile devices can potentially violate numerous criminal, civil, privacy, and regulatory laws in Canada. With respect to the criminal law, notably, purchasing and selling spyware that is primarily useful for surreptitiously intercepting private communications (as many of the major consumer-level spyware products do), likely constitute a criminal offence in Canada. These offences expose vendors and operators of spyware products to the risk of criminal law consequences, such as jail. 
Operators of stalkerware are also subject to civil liability if they are found to have perpetrated a tort (wrongful act). Targeted individuals may bring a cause of action (lawsuit) against an operator on legal grounds of: invasion of privacy, public disclosure of private facts, breach of confidence, and intentional infliction of mental suffering (IIMS). We also briefly discuss non-intentional torts and assess the emerging novel tort of harassment as a potential additional response to stalkerware. Our legal analysis found that the act of making and selling—as opposed to using—spyware products likely also runs afoul of both criminal and product liability law with respect to dangerous or defective product design. We also review the applicability of non-binding instruments such as the United Nations Guiding Principles on Business and Human Rights and industry efforts at self-regulation, including ethical codes and internal worker resistance in the technology sector. We consider, briefly, the limited applicability of intellectual property laws to impeding the creation and dissemination of stalkerware. Canadian consumer privacy and data protection law, governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA), and substantially similar provincial legislation, includes several provisions regarding informed consent, notice, and appropriate purposes that would apply to stalkerware businesses and likely render their activities unlawful. We find that PIPEDA includes three potential exceptions, or loopholes, that may allow stalkerware vendors to circumvent accountability. We recommend that the Office of the Privacy Commissioner of Canada or federal and provincial legislators take action to close these potential gaps. 
App stores and web platforms that sell apps to consumers also play a role as intermediaries that can facilitate sales of stalkerware through their platforms. Despite active efforts by companies such as Apple and Google to enforce app developer policies and agreements against such apps, research shows evidence of a continued, albeit decreased, presence and availability of stalkerware on popular app stores. We recommend that all app stores clarify their relevant policies and revise developer terms of agreement regarding user privacy, consent, security, and malicious behaviour to expressly state that such protective policies apply to the individual whose data is being collected, processed, or disclosed by the app in every case, instead of referring simply to a generic ‘user’. The generic term ‘user’ can inappropriately or incorrectly be interpreted as referring to the stalkerware operator rather than the targeted individual. 
Despite the available data about the prevalence of IPS and technology-facilitated abuse and harassment in Canada and its impact on victims and gender equality rights more broadly, there appears to be a significant measurable gap between what the law dictates about such conduct and whether legal remedies are readily available to victims in practice. One complicating factor is that many spyware apps market themselves as, or are genuinely intended as, apps for ostensibly legitimate purposes, such as child and employee monitoring. Such apps are then repurposed into stalkerware for abusive purposes. Similar repurposing occurs with non-spyware apps or built-in phone features such as a GPS tracker, which abusive operators may manipulate or repurpose into stalkerware. We discuss this dual-use nature of spyware technologies, and critique the legitimacy of dual-use spyware even where such technology is used to surveil children or employees. 
The report concludes by recommending a range of measures that relate to public legal education, law reform, heightened investigative and regulatory scrutiny of consumer spyware markets, and enhanced training and resources for law enforcement, regulators, and other justice system participants who are tasked with enforcing Canada’s laws. Given stalkerware’s inherent dangers and invasive capabilities and the documented association between stalkerware apps and intimate partner violence and gender-based abuse, justice system participants and the private technology sector bear a responsibility to establish and reinforce a web of meaningful restraints that address and remedy the harms of stalkerware, both in law and in practice. 
Our purpose in this report is to contribute to greater substantive efforts to address technology-facilitated gender-based abuse in Canada, beginning with the harms and violence that stalkerware enables through its covert or exploitative surveillance of targeted individuals. The critical analysis provided in this report is designed to enhance public understanding of legal remedies, policy considerations, and human rights concerns associated with stalkerware. The report is also designed to provide assistance to policymakers, legal professionals, academics, community workers, and advocates who are trying to support victims or navigate the complex implications of this technology.

05 July 2019

Facial Biometric Failures

The UK Human Rights, Big Data and Technology Project Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology by Pete Fussey and Daragh Murray offers a damning view of the facial biometrics program undertaken by the UK's leading police force.

The authors comment
Between 2016 and 2019 the Metropolitan Police Service (MPS) conducted a total of 10 test deployments, trialling live facial recognition (LFR) technology during policing operations. This research was initiated in order to provide an independent academic report on this process. Researchers observed the final six test deployments, beginning in Stratford (Westfield) in June 2018. Researchers joined officers on location in the LFR control rooms, engaged with officers responding on the ground, and attended briefing and de-briefing sessions in addition to planning meetings. A number of legal and other documents prepared by the MPS were also reviewed.
This report is focused on issues arising in relation to the MPS’ LFR test deployments. It does not directly engage with broader issues regarding the legality, or use, of LFR technology by law enforcement agencies. As such, although certain elements of the analysis presented herein may be relevant to future debates, no conclusions are drawn in that regard.
This report centres on the overall governance of the LFR test deployments, the procedures and practices of LFR in operational settings (as observed over the course of the test deployments), and human rights compliance. A draft of this report was submitted to the MPS so that any factual errors could be noted, and to provide a right of reply. After review of the document, the MPS chose not to exercise this right. This report is independent, was externally funded as part of the ESRC Human Rights, Big Data & Technology Project, and the findings and opinions expressed are those of the authors alone.
Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’.
As detailed in the report, it is highly possible that the LFR trial process adopted by the MPS would be held unlawful if challenged before the courts. In particular, this report concludes that the implicit legal authorisation claimed by the MPS for the use of LFR - coupled with the absence of publicly available, clear, online guidance - is likely inadequate when compared with the ‘in accordance with the law’ requirement established under human rights law. This demonstrates a need to reform how the trialling or incorporation of new technology and policing practices is approached by the MPS, and underlines the need for the effective incorporation of human rights considerations into all stages of the MPS’ decision making process, including with respect to if, and how, trials should be undertaken. It also highlights a need for meaningful engagement with, and debate on, these issues at a national level.
The report highlights a number of issues arising from the LFR test deployments, and raises some significant concerns. These concerns are most notable with respect to:
(1) The research process adopted by the MPS to trial LFR technology. The test deployments offered an opportunity to both examine technical accuracy and to understand the implications of LFR on police operations. However, MPS trial methodology focused primarily on the technical aspects of the trials. There is less clarity on how the test deployments were intended to satisfy the non-technical objectives, such as those relating to the utility of LFR as a policing tool. This report raises concerns that the process adopted by the MPS was inadequate with respect to addressing the non-technical objectives identified. 
(2) The absence of an explicit legal basis for the use of LFR, and concerns that the implicit legal basis identified by the MPS is inadequate in relation to the ‘in accordance with the law’ requirement established by human rights law. This is compounded by the absence of online guidance capable of addressing the ‘foreseeability’ of how LFR technology was utilised. Without explicit legal authorisation in domestic law, it is highly possible that police deployment of LFR technology may be held unlawful if challenged before the courts; 
(3) Human rights law requires that any rights interference be ‘necessary in a democratic society’. MPS analysis did not effectively address this requirement, and it is considered highly possible that the MPS’ test deployments of LFR technology would not be regarded as ‘necessary in a democratic society’ if challenged before the courts; and 
(4) Operational factors relating to inconsistency in the adjudication process, including a presumption to intervene, problems with how the MPS engaged with individuals, and difficulties in obtaining the consent of those affected.
Research Methodology Adopted for the Report
Six test deployments were observed from beginning to end. Observations extended to attendance at pre-deployment police briefings for each test deployment and post- deployment debriefings which usually took place the following day. Observation mainly focused on the operational practices of the intelligence units in the control room: that is, the activities of officers monitoring LFR camera feeds, deliberating over computer-generated matches and, when deemed appropriate, issuing instructions to intercept matched persons. Research also engaged street-based intervention teams responsible for intercepting individuals matched to watchlists by the LFR system, plain clothes officers deployed at the test sites, and uniformed officers involved in LFR-related public facing activities. Researchers were also invited to several LFR planning meetings.
All documents provided by the MPS were examined. A variety of interview techniques were used to gain additional data. Observations involved detailed conversations with a wide range of MPS staff. These included operational officers, individuals holding tactical and strategic roles, and those engaged in the technical evaluation of LFR. Formal interviews were also conducted with a number of key external stakeholders including oversight bodies, technology evaluation specialists and civil society organisations.
The Nature of the Test Deployments Undertaken by the MPS, and Appropriateness of the Trial Methodology Adopted
Numerous voices, including the Surveillance Camera Commissioner and the Biometrics Commissioner, have stressed the importance of trialling emergent technologies. Several attempts have been made to offer principles to guide this process, yet their development has been piecemeal and no agreed national standards or established oversight mechanisms exist. It is within this context that the MPS trials of LFR took place.
The MPS’ trial methodology focused primarily on technical aspects, examining the performance of the technology in live settings. There did not appear to be a clearly defined research plan that set out how the test deployments were intended to satisfy the non-technical objectives, such as those relating to the utility of LFR as a policing tool. This necessarily complicated the trial process, and affected its effectiveness and overall utility.
It is unclear if alternative approaches to testing LFR were considered and discarded. It is uncertain whether the initial decision to trial LFR considered the use of simulated conditions, with volunteer-based watchlists (as adopted in Berlin) or live condition trials focused on technical performance but not policing responses (as in the United States).
The mixing of trials with operational deployments raises a number of issues including with respect to consent, public legitimacy and trust. A key concern is the lack of a clear distinction between research objectives regarding the trial of LFR technology, and the policing objectives associated with operational deployments. This holds particular meaning when considering differences between an individual’s consent to participate in research and their consent to the use of technology for police operations. For example, from the perspective of research ethics, someone avoiding the cameras is an indication that they are exercising their entitlement not to be part of a particular trial or are protecting their own right to privacy. From a policing perspective, this same behaviour may acquire a different meaning and serve as an indicator of suspicion. This resulted in a number of issues relating to how police officers engaged with individuals on the ground.
The absence of national leadership at government level – including a lack of clear lines of responsibility regarding whether trials should be conducted, and if so how – leaves police evaluation teams with the enormous task of not only undertaking scientific evaluation, but also compensating for a lack of national leadership by recreating and reinterpreting policy anew. While this tension may apply to other trials of police equipment, it is particularly acute in the case of LFR given its intrusive nature, and requires urgent attention for future testing of this technology (as well as other technological innovations). A key element in this regard must be considering and building in human rights compliance from the outset of any trial process, including with respect to if, and how, any trials should be undertaken.
The Legal Basis Underpinning the MPS’ Use of LFR
No explicit legal basis exists authorising the MPS’ use of LFR technology. The legal mandate documents prepared by the MPS reference a number of different sources of law, including the common law, the Human Rights Act 1998, the Freedom of Information Act 2000, the Protection of Freedoms Act 2012, the Data Protection Act 2018, and the Regulation of Investigatory Powers Act 2000. Of these, only the common law and the Protection of Freedoms Act 2012 could potentially establish an implicit legal basis for LFR. The other sources either relate to public access to information regarding police activity or regulate the use of LFR technology, without establishing explicit legal authorisation for LFR as such.
The difficulty with relying upon the common law or the Protection of Freedoms Act 2012 as sources of implicit legal authorisation vis-à-vis the use of LFR technology is the ambiguity that will inevitably arise. The ‘in accordance with the law’ test established under human rights law incorporates a number of different elements, relating both to the existence of a legal basis and the quality of that legal basis. Key in this regard is protection against arbitrary rights interferences, and foreseeability with respect to how the law will be applied. Existing case law reinforces the concern that the legal basis identified by the MPS may be overly ambiguous. Issues in this regard are discussed in greater detail in Section 3.2. of the report. Similar concerns regarding the absence of a clear legal basis have been raised by Liberty and Big Brother Watch when interviewed for this report, and in academic commentary.
Ultimately, this report concludes that the implicit legal authorisation claimed by the MPS for the use of LFR appears inadequate when compared with the ‘in accordance with the law’ requirement established under human rights law. The absence of publicly available guidance clearly circumscribing its circumstances of use – thereby facilitating foreseeability – reinforces this point. Without explicit legal authorisation in domestic law it is highly possible that police deployment of LFR technology – as a particularly invasive surveillance technology directly affecting a number of human rights protections, including those relevant to democratic participation – may be held unlawful if challenged before the courts.
The Absence of Effective Analysis Addressing the ‘Necessity in a Democratic Society’ Determination for LFR
Determining the necessity in a democratic society of any measure that interferes with human rights protections is essential in order to ensure overall rights compliance. In this context this requirement is intended to ensure that measures useful to the protection of public order and the prevention of crime do not inappropriately undermine other rights, including those necessary to the effective functioning of a democratic society, such as the right to private life, the right to freedom of expression, and/or the right to freedom of assembly and association. The test itself involves a number of different elements. An interference will be considered necessary in a democratic society ‘if it answers to a “pressing social need”, if it is proportionate to the legitimate aim pursued and if the reasons adduced by the national authorities to justify it are relevant and sufficient.’
In order to determine whether LFR is ‘necessary in a democratic society’ in the circumstances of the MPS’ test deployments, impact or risk assessments should be conducted prior to deployment in order to identify and understand any potential human rights harm. This conclusion is supported by the Surveillance Camera Commissioner’s recent guidance on ‘Police Use of Automated Facial Recognition Technology with Surveillance Camera Systems’.
The MPS did prepare a number of impact/risk assessment documents. However, these documents are regarded as inadequate with respect to engagement with human rights law requirements.
No MPS documents have been seen that clearly set out the justification underpinning the deployment of LFR technology in a manner capable of addressing whether such deployments may be considered ‘necessary in a democratic society’. Of particular concern is the lack of effective consideration of alternative measures, the absence of clear criteria for inclusion on the watchlist, including with respect to the seriousness of the underlying offence, and the failure to conduct an effective necessity and proportionality analysis. For these reasons, and as discussed in greater detail in Sections 3.3.1. and 3.3.2., it is highly possible that the MPS’ test deployments of LFR technology would not be regarded as ‘necessary in a democratic society’ if challenged before the courts.
Operational Factors
Overall, the LFR system generated 46 matches over the course of observed test deployments, involving 45 separate individuals. 42 matches were deemed eligible for analysis. Adjudicating officers judged 16 (38.1%) of these 42 computer-generated matches to be ‘non-credible’; that is, officers did not believe that the image recorded by the LFR technology matched the image on the watchlist. MPS officers considered the LFR match sufficiently credible to stop individuals and perform an identity check on 26 occasions. Four of these attempted interventions were unsuccessful, as individuals were lost in the crowd.
Of the remaining 22 stops, 14 (63.64%) were verified as incorrect matches following an identity check. Eight (36.36%) were verified as correct matches following an identity check. This means that across all six observed trials, and from all computer-generated alerts, face recognition matches were verifiably correct on eight occasions (eight of 42 matches, 19.05%).
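The percentages quoted above follow directly from the raw counts reported for the six observed trials; a minimal sketch reproducing the arithmetic (the counts themselves are taken from the report as quoted):

```python
# Reproducing the match-rate arithmetic for the six observed LFR test deployments.
total_matches = 42       # computer-generated matches deemed eligible for analysis
non_credible = 16        # matches judged 'non-credible' by adjudicating officers
stops_attempted = 26     # matches judged credible enough to attempt a stop
lost_in_crowd = 4        # attempted interventions that failed
stops_completed = stops_attempted - lost_in_crowd  # 22 identity checks
correct = 8              # verified as correct matches after an identity check
incorrect = stops_completed - correct              # 14 incorrect matches

print(f"non-credible rate:  {non_credible / total_matches:.1%}")   # 38.1%
print(f"incorrect stops:    {incorrect / stops_completed:.2%}")    # 63.64%
print(f"correct stops:      {correct / stops_completed:.2%}")      # 36.36%
print(f"overall accuracy:   {correct / total_matches:.2%}")        # 19.05%
```

The final figure is the one most often cited: of all computer-generated alerts eligible for analysis, fewer than one in five were verifiably correct.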
Watchlist Construction
The condition of being ‘wanted’ was consistently stated as a criterion for being enrolled on a watchlist. However, ambiguity exists regarding the definition of ‘wanted’ adopted by the MPS, and documentation indicates that this included both ‘wanted by the courts’ and ‘wanted by the police’. Those included on the watchlist thus apparently ranged from individuals wanted by the courts to those wanted for questioning, across a range of different offences.
The identification of ‘individuals shown as wanted by the police and the courts’ was not the only watchlist criterion in use. MPS documentation also highlighted the use of LFR to identify individuals who present a risk of harm to themselves and others; support ongoing policing activity with regards to a specific problem or location; and assist police in identifying individuals who may be ‘at risk or vulnerable’.
The category of ‘wanted’ (re ‘to identify individuals shown as wanted by the police and the courts’) was relied on in creating watchlists for all observed test deployments. Refinements to the threshold for inclusion under this criterion were made from December 2018 onwards. Nonetheless, significant ambiguity remains regarding the criteria used for watchlist construction. This directly affects the ‘foreseeability’ of MPS activity regarding the use of LFR technology. The size of the watchlists varied considerably across the observed test deployments. No discernible direct relationship existed between the watchlist size and number of alerts.
Watchlist Accuracy
Legacy data handling systems meant data relevant to watchlists was spread across different databases and each watchlist entry needed to be assembled by manually extracting and merging records from each of these locations.
Ensuring accurate and up-to-date information from across these different data sources posed a significant challenge. Such difficulties made compliance with overall standards of good practice complex and placed a significant burden on officers. Issues to do with the accuracy of the watchlist played out when individuals were stopped on the basis of outdated information. On occasion, individuals were flagged by the LFR technology in relation to a serious offence, but this had already been dealt with by the criminal justice system. However, they were wanted in relation to more minor offences and were arrested accordingly. It is unlikely this lesser offence would have been sufficiently serious to be included in the initial watchlist. This raises additional concerns when LFR is deployed on a necessity calculation intended to address serious crime but is then also used for more minor offences.
Matching Intelligence to LFR Deployments
Most ethical guidance, and most legal and oversight provisions governing surveillance, also require a clearly prescribed application. Police uses of surveillance measures are directed towards protecting the public from crime and upholding public order. However, the legitimacy of any measure must still be determined in relation to the ‘necessary in a democratic society’ requirement. One key element when evaluating issues of necessity and, by extension, proportionality, is a consideration of the stated purpose of LFR test deployments and, crucially, analysis of the extent to which the use of the technology is ‘rationally connected’ to this purpose.
In the first test deployments observed (Stratford), LFR use was regularly justified in briefings on the basis of several perceived benefits including detection, deterrence, intentional crime displacement and disruption. These applications involve clear differences in the purpose of LFR, requiring distinct necessity calculations.
Public Consent
The role of public consent was the subject of contentious debate surrounding the LFR test deployments. Like CCTV, LFR is classified by the MPS as a form of overt surveillance and the consent of affected individuals is seen as fundamental. The importance of consent is also emphasised in regulatory instruments. Measures undertaken by the MPS that bear on consent overlapped with attempts to promote public reassurance and to test public opinion.
For consent to be meaningful, several conditions are important:
1. Informed consent. 
The MPS pursued a number of strategies intended to ensure that public consent for LFR constituted informed consent, including the use of uniformed officers to explain the role of the technology to the public, leafleting and signage boards. A key question emerges over the degree to which consent can be considered informed on the basis of the information supplied by the MPS. Information provided transparency regarding the time and location of the LFR test deployments, yet less clarity over the purpose of the deployment, who was likely to be the subject of surveillance, and how additional information could be ascertained. With the exception of the morning of the first Soho trial, an individual reading or standing next to a sign was out of camera range for each LFR deployment. A key conclusion is the importance of being clear about why information is provided to the public. What might be appropriate in respect to issues of public support is not necessarily sufficient or well targeted enough to support individual consent. During test deployments individuals were required to make an informed decision regarding consent in a very short time-frame, a factor exacerbated by the limits on prior knowledge amongst the public. 
2. Consent and opportunities to exercise a different choice. 
Opportunities for pedestrians to bypass the cameras and continue walking towards the same destination varied across the test deployments. These ranged from simply crossing the street to a walking detour of an additional 18 minutes to reach the same point. 
3. Capacity to refuse or withdraw consent without penalty. 
Treating LFR camera avoidance as suspicious behaviour undermines the premise of informed consent. In addition, the arrest of camera-avoiding individuals for offences more minor than those used to justify the test deployments raises clear issues regarding the extension of police powers and of ‘surveillance creep’. These issues highlight the distinctions between gaining consent for research (e.g. trials) and consent for police operations, and the tensions that will inevitably arise during live test deployments.
The Adjudication Process
All observed officer briefings were clear on the importance of human discretion in the LFR process. Officers were consistently instructed that a computer derived match was not sufficient to confirm an identity in and of itself. This is appropriate given the significant error rates associated with LFR and legal stipulations concerning meaningful human intervention in digital decision-making processes. Adjudication practices varied across the deployments. These can be placed within three distinct categories:
1. Multiple adjudicators in the control room. 
Multiple operators brought additional scrutiny to the process but also raised the likelihood of contrasting approaches within the same adjudication team. 
2. Simultaneous adjudication and street engagement. 
This involved intelligence officers radioing through a description of an LFR match while they were still in the process of deliberating over the credibility of the alerted match. A decision to trigger the street intervention team to start looking for the matched individual may have sound operational reasons. However, every instance in which this simultaneous approach was followed led to an attempt to engage a matched individual. This forms part of a wider and discernible ‘presumption to intervene’. Greater clarity is possible over whether communications to intervention teams are instructions to maintain observation or instructions to intervene. 
3. Mobile devices and simultaneous adjudication on the street and in the control room. 
On at least five occasions during the second Romford test deployment, the decisive choice to intervene with a matched individual was made by street-based officers equipped with handheld devices capable of receiving LFR alerts. Decisions of the control room-based intelligence teams to engage a subject were never rejected by mobile-equipped officers. Decisions by the control room-based intelligence units not to intervene were frequently ‘overruled’ by street-based mobile-equipped officers on the basis of their separate access to imaging information. These processes also contribute towards a presumption to intervene.
Physical Factors Relating to Deployments
The physical characteristics of a particular area, along with the spatial location of intervening officers, had significant bearing on the adjudication process and subsequent street intervention. The spatial deployment of officers to provide the best opportunity to locate a matched individual on the street constricted the time available to adjudicators to reach a decision. Conversely, situating officers further from cameras afforded more time for control room adjudication but increased the likelihood of losing track of individuals.

02 July 2019


'Commissioning the Consumer Financial Protection Bureau' by Jolina C. Cuaresma in (2019) 31 Loyola Consumer Law Review comments
There has been much debate over the Consumer Financial Protection Bureau’s lack of executive and congressional oversight: its single director is removable only for cause and its operations are not subject to appropriations. This paper explains how this very leadership and accountability structure — intended to politically insulate the agency — had the perverse effect of politicizing it. Since Director Cordray’s departure, there has been increased regulatory uncertainty, discouraging financial innovation and harming consumer welfare. This paper recommends that Congress restructure the Bureau into a multi-member, bipartisan commission to provide industry regulatory predictability and ensure that consumer protection retains its independent seat in the financial regulatory system.


In State of New South Wales v Schmidt [2019] NSWSC 764, regarding an extended supervision order, Hamill J comments
As has become customary in cases of this kind, the State tendered what can only be described as an overwhelming, not to say preposterous, amount of material. How much of this was at the insistence of the lawyers for the defendant I do not know. A great deal of this material was repetitive and superfluous. In percentage terms, by reference to thousands of pages presented to the Court, the parties referred to very little of it. It appeared that every interaction the defendant has had with those supervising him, or running the establishments in which he has resided, was subject to a note that the parties felt compelled to tender into evidence. It would be disingenuous for me to suggest that I have read all of this material closely. I haven’t. 
The Judges who regularly preside over these matters are not provided with reading time to scrutinise and examine every document tendered in such matters. Similar complaints have been made by judges in earlier cases. No doubt those complaints, and this one, will have no impact. However, the Crown Solicitor’s Office should consider implementing some kind of protocol that reduces the volume of material tendered. Those appearing for defendants should co-operate to ensure that the amount of material tendered is both sensible in its volume and relevant to the issues in dispute. It is, to speak bluntly, simply dumb and unhelpful to tender every OIMS note made over a three year period. Further, it is of no assistance to the Court to include in the bundles of material multiple copies of the same documents. I was grateful to counsel for the State for providing a schedule which detailed the many duplications, but the better course is for the parties to co-operate to put together a joint tender bundle that does not include duplications and excludes material that is not significant to the matters in dispute. 
Having relieved myself of that exasperation, counsel in this matter have provided helpful written and oral submissions, summaries, chronologies and more compendious evidence allowing the Court to focus on the important material and be in a position to make reasoned and informed decisions on the controversial aspects of the case. 
Included in the material were reports going back many years which explain the opinions of various experts concerning the defendant’s psychology and pathology as well as providing reasoned assessments as to the extent of his risk of re-offending and the triggers or risk factors that might lead him to re-offend. The material also concerned information about Mr Schmidt’s criminal and custodial history, the remarks on sentence and facts relating to a number of his earlier crimes, and the evidence that was tendered in the sentencing proceedings. There is also evidence concerning his compliance and non-compliance with the extended supervision order imposed by Button J, his engagement with counselling and other rehabilitation services and his behaviour during those periods when he has been in gaol. In a nutshell, the material shows that Mr Schmidt committed two extremely grave and chilling offences of violence some years ago, was punished severely for those crimes, has been at times indifferent or inconsistent with his attempts at rehabilitation and often frustrated at the strictures of the supervision order to which he has been subject since February 2016.

01 July 2019

Sovereign Citizens

'Tinfoil Hats and Powdered Wigs: Thoughts on Pseudolaw' by Colin McRoberts in (2019) 58(3) Washburn Law Journal comments
This article describes “pseudolaw,” the phenomenon of individuals who use elaborate, fictional rules in real-world courts and legal disputes. I explain why "pseudolaw" is a better label for this phenomenon than more common but less accurate terms like "sovereign citizen," and describe several real-world examples (such as the pseudolegal guru who claimed to have developed a quantum legal language that would defeat any lawyer, and to be the King of Hawaii). I then discuss the harms pseudolaw does to the legal system, the general public, and pseudolawyers themselves, as well as several likely causes of pseudolegal ideation. Finally, I propose solutions that judges, court staff, practitioners, and the public can use to help stem the growth of pseudolaw.
'A “Lunatic Fringe”? The Persistence of Right Wing Extremism in Australia' by Kristy Campion in (2019) 13(2) Perspectives on Terrorism 2-20 comments
Right Wing Extremism (RWE) in Australia is historically persistent and contemporarily well-established. The persistence is not simply the consequence of an Australian-centric white nationalism, but is the result of international and domestic exchanges. This article investigates the persistence and appeal of Australian RWE groups. The first movements emerged in the 1930s against Bolshevik Communism, and quickly established ties with fellow travellers elsewhere in the Western world. While their influence diminished, their sentiment persisted in subcultural networks which also demonstrated international ties. RWE resurged in the 1980s, seeking to stymie pluralism and immigration. Some extremists travelled overseas, and formed connections with international counterparts. Their activities were suppressed by law enforcement, but the sentiment continues to survive in subcultural networks. RWE resurfaced in the decade prior to the 2019 Christchurch attack, largely targeting ethnic Australians and members of the Muslim community. Currently, the RWE threat in Australia is inherently tied to extremist attitudes regarding jihadism, Muslims, and immigration.

30 June 2019

Personal Data in Hong Kong

'Who Decides What is Personal Data? Testing the Access Principle with Telecommunication Companies and Internet Providers in Hong Kong' by Lokman Tsui and Stuart Hargreaves in (2019) 13 International Journal of Communication 1684 comments
Do personal data protection laws allow citizens to access their personal data? We answer this question by testing the data access principle of the Personal Data Privacy Ordinance (PDPO) with telecommunication companies and Internet providers in Hong Kong. In our study, we submitted data access requests to telecommunication companies and Internet providers for a range of information, including subscriber information, call logs, IP addresses, geolocation data, and whether they had shared any of this data with third parties. We argue that the telecommunication companies failed to (1) let users see their personal information in a comprehensive manner, including IP addresses or geolocations; (2) tell users whether they indeed process such information; (3) offer the possibility of correction or deletion; and (4) tell users whether they have shared this data with third parties, including law enforcement.


'A Purpose-Based Theory of Corporate Law' by Asaf Raz comments
Modern corporate law scholarship focuses on flexible, normative questions: should multiple-class shares be permitted? To what extent should staggered boards be implemented? Yet, the range of possible answers is constrained by a far more fundamental inquiry: what is a corporation, and what is its purpose? The structure of corporate law – the set of economic and doctrinal concepts that attach to every corporation, without exception – informs an extremely wide range of practical issues. However, that structure is inadequately addressed in current scholarship, leaving the participants in the corporate law sphere engaged in isolated, ineffective discourse. 
This Article mainly operates in two scholarly spaces: one is the increasingly salient debate between shareholderists ("shareholder primacy" advocates) and stakeholderists ("corporate social responsibility" advocates). This Article reveals that neither camp actually complies with positive law and normative considerations. The corporation is a separate legal person, not owned by its shareholders, nor having identity of interests with them. Yet, the purpose of the (for-profit) corporation is, and should be, the lawful pursuit of its own economic profit. 
The second space in which this Article contributes a fresh outlook is the "corporate anatomy" literature. The purpose-based theory of corporate law – a unified, ground-level "instructions manual" for what corporate law is and how it differs from other fields – delivers a more nuanced understanding of the corporation's anatomy than that suggested by Kraakman et al., and relies on a broader, more substantive definition of what "corporations" include. 
After an introduction, Part 2 of the Article discusses in great detail the five phenomena that define corporate law: the corporation's purpose, personhood, stakeholders, residual claimants and fiduciaries. As this Article explains, the structure of corporate law places certain boundaries on what our normative analysis can do: for example, due to corporate law's uniquely open-ended nature, it is not possible to rely solely on "contract" when analyzing the corporation's relationships with other parties. At the same time, contrary to some stakeholderist claims, corporate law is also not public law. In short, corporate law has a unique structure of its own, designed to achieve certain economic and societal goals (and greatly succeeding in that). Part 3 applies the theory to three high-currency topics: shareholder activism, corporations' constitutional rights, and the rise of LLCs and other "alternative" corporations. In each case, the purpose-based theory of corporate law produces new, often pointy, conclusions. Part 4 summarizes. 
Not only is corporate law not "dead," as some commentators are keen on suggesting, it is more important than ever. This Article assembles the puzzle of corporate law.


'Privacy as a Public Good: A Case for Electronic Cash' by Rodney Garratt and Maarten R.C. van Oordt comments
Privacy is a feature inherent to the use of cash for payments. With steadily increasing market shares of commercial digital payments platforms, privacy in payments may no longer be attainable in the future. In this paper, we explore the potential welfare impact of reductions in privacy in payments in a dynamic framework. In our framework, firms may use data collected through payments to price discriminate among future customers. A public good aspect of privacy in payments arises because individual customers do not bear the full costs of failing to protect their privacy. As a consequence, they may sub-optimally choose not to preserve their privacy in payments. When left to market forces alone, the use of privacy-preserving means of payments, such as cash, may decline faster than is optimal.

Health Justice

'When Law is Good for Your Health: Mitigating the social determinants of health through access to justice' by Hazel Genn in (2019) Current Legal Problems comments
Access to justice research over two decades has documented the health-harming effects of unmet legal needs. There is growing evidence of bidirectional links between law and health demonstrating that social and economic problems with a legal dimension can exacerbate or create ill health and, conversely that ill-health can create legal problems. Independently, social epidemiological research documents gross and widening inequalities in health, largely explained by social determinants such as income, housing, employment, and education. Although legal issues are embedded in most social determinants of health, law has been largely invisible in social determinants discourse, research and interventions. 
This article argues that legal services have an important role to play in mitigating many of the socio-economic determinants that disproportionately impact the health of low income and vulnerable groups. It describes the international practitioner-led movement of Health Justice Partnership through which lawyers work with healthcare teams to address the root causes of ill health rather than focusing on physical and psychological manifestations of negative social determinants. 
Finally, the article attempts to delineate the evolving field of health justice, advancing a transdisciplinary research agenda that could strengthen both public health and access to justice research by moving beyond the limitations of single discipline approaches. Noting the vigorous policy emphasis in law and health on prevention and partnership to address the twin challenges of access to justice and health inequalities, the article ends with a plea for policy coordination that acknowledges shared responsibility across government for improving the health of the public.