05 July 2020

Juries

Flowers v State of New South Wales [2020] NSWSC 526 considers a refusal of trial by jury and the sort of misunderstanding of law exhibited by Australian sovereign citizens.

Harrison J states
 The background to these proceedings has been adequately recorded in previous judgments of this Court: see, for example, Flowers v State of New South Wales [2019] NSWSC 1308 and Flowers v State of New South Wales [2019] NSWSC 1467, among others. It is unnecessary to repeat what has been said in those earlier decisions. 
Mr Flowers now asks me to order that his claim for damages for malicious prosecution be heard by a jury. More than that, Mr Flowers contends, uniquely in my experience, that his application for a jury should itself be determined by a jury. 
Section 85 of the Supreme Court Act 1970 provides relevantly as follows:
85 Trial without jury unless jury required in interests of justice 
(1) Proceedings in any Division are to be tried without a jury, unless the Court orders otherwise. 
(2) The Court may make an order under subsection (1) that proceedings are to be tried with a jury if: 
  (a) any party to the proceedings: 
    (i) files a requisition for trial with a jury, and 
    (ii) pays the fee prescribed by the regulations made under section 18 of the Civil Procedure Act 2005, and 
  (b) the Court is satisfied that the interests of justice require a trial by jury in the proceedings. 
(3) The rules may prescribe the time within which a requisition must be filed for the purposes of subsection (2) (a). 
(4) A fee paid under this section is to be treated as costs in the proceedings, unless the Court orders otherwise. ... 
UCPR 29.2 is in these terms:
29.2 Applications and requisitions for juries in proceedings other than defamation proceedings 
(1) This rule applies to proceedings other than defamation proceedings. 
(2) An application in proceedings to which this rule applies for the proceedings to be tried by jury must be made by notice of motion. 
(3) For the purposes of section 85 of the Supreme Court Act 1970 and section 76A of the District Court Act 1973, a requisition for a jury in proceedings to which this rule applies must be filed at the same time as the notice of motion referred to in subrule (2) is filed. 
(4) Unless the court otherwise orders, a notice of motion under subrule (2) must be filed-- 
  (a) if the notice is filed by the plaintiff-- 
    (i) within 56 days after service on the defendant of the statement of claim, or 
    (ii) if a defence is served on the plaintiff within that period, within 28 days after service of the defence on the plaintiff, or 
  (b) if the notice is filed by the defendant-- 
    (i) within 28 days after service on the defendant of the statement of claim, or 
    (ii) if, pursuant to rule 14.3, the court directs some other date for the filing of a defence, within 28 days after the date fixed by the court's direction.
Mr Flowers has not complied with the Act or the rules. Mr Williams of counsel for the State of New South Wales takes no point about this. 
In support of his application, Mr Flowers has provided me with a very impressive document headed “Challenge to the Jurisdiction of the Court”. I suggested to Mr Flowers that the enigmatic nature of the document meant that his best course was to have me treat it as a submission in aid of the present application. Mr Flowers accepted my suggestion. 
With the aid of that document, Mr Flowers contends that trial by jury is an inalienable right guaranteed to him by the Magna Carta over 800 years ago and remains the common law of the land. He maintained that what he styled “a special jury” should be convened to determine his challenge to the validity or effect of any Act or subordinate legislation that derogated from that right. 
Although I cannot be certain, many of Mr Flowers’ submissions have a vaguely familiar ring. It is, for example, unusual in my limited experience to be referred to trial by jury as the Palladium of Liberty. Mr Flowers submits that denial of his right to a trial by jury is “sinister, vile and reprehensible”. Lord Edward Coke also gets a run, telling me that “Common law doth control Acts of Parliament and adjudges them when against common right to be void”. I feel confident I have heard similar submissions before. 
Mr Flowers’ proposition, to the extent that I understand it, is that his consent to have his case heard without a jury has not been given so that any purported exercise of jurisdiction otherwise than by jury is void. Any such consent must be clear and unequivocal. Somewhat troubling from my personal perspective is Mr Flowers’ submission in the following terms:
“In any action, both parties must give their clear and unequivocal consent to be without a jury. Without that consent, the court has no jurisdiction to proceed summarily and the jurisdiction of the court must be challenged. The challenge can only be judged by a special jury. Should a judge or magistrate dismiss this challenge, then he or she is liable to imprisonment for five years. Should a judge or magistrate dismiss this challenge, that is a violation of due process and the rule of law.”
Mr Flowers also reminds me that no “evil counsellors, judges and [sic, or] ministers” can be allowed to subvert or extirpate the laws and liberties of the people: Bill of Rights, 1688. To deny trial by jury is to deny democracy and to deny democracy is treason. 
Mr Flowers’ contentions appear to proceed upon the underlying basis that, to the extent to which s 85 of the Supreme Court Act or UCPR 29.2 operate somehow to modify or extinguish what would otherwise be an automatic right to a trial by jury, they are ineffective or void. Mr Flowers maintains that no Act of Parliament can take away his right to trial by jury. In Mr Flowers’ submission, rights never die. Mr Flowers asserts that “people are not subject to statute law, which is inferior to common law, and are only accountable to common law that is made and imposed by their equals, i.e. accountable only to juries”. 
Mr Flowers has submitted that all Acts of Parliament in Australia since 1901, with the Proclamation of the Commonwealth of Australia, have not been lawfully enacted. This is due to the fact that there have been no orders in the Privy Council for the appointments of any Vice Regal executive representatives of the Crown of the United Kingdom to grant the Royal Assent to bring any statutes into effect. Mr Flowers then makes the further troubling, if disconnected, submission that “all Australian judges and magistrates are equally fraudulent”. 
At least one difficulty with Mr Flowers’ contentions is that they are no more than that: unsupported assertions. Mr Flowers offers no evidence that could support a claim that, for example, the Supreme Court Act is void or was not enacted according to law. 
Another difficulty lies in the fact that this Court and the Court of Appeal have consistently operated upon the basis that s 85 of the Supreme Court Act is a valid law of New South Wales and have applied it accordingly. In the absence of an arguable legal basis supported by evidence that suggests that I should take a different approach, I consider that I am bound to apply the provision according to its terms.
Harrison J had encountered a similar invocation of Coke CJ in Wilson v GIO General Ltd [2007] NSWSC 1445.

The current judgment quotes Mason CJ, Hall J and others regarding the framework for jury trials and directions, with Harrison J stating
 I have included this helpful extract in order to assist Mr Flowers to appreciate the now well established regime that governs the circumstances in which a party might demonstrate an entitlement to a jury in civil cases in New South Wales. The general rule in this Court in civil proceedings is trial by judge alone. The Court must be positively satisfied that the interests of justice require departure from that general rule. The same reasoning applies as well to Mr Flowers’ contention that his entitlement to a jury should be decided by a jury. The alternative for which he contends conjures up the prospect of a never ending descent into litigious absurdity. 
In the nature of things, having regard to his fundamental proposition that s 85 is invalid and of no force or effect, Mr Flowers did not address this issue. It will be apparent that I consider that s 85 operates and applies in the present circumstances to govern the question of the mode of trial. If Mr Flowers wished to contend, despite his so-called “jurisdictional” point, that his case warranted trial by jury, because it was in the interests of justice to depart from the usual mode of trial, he should of course be given an opportunity to do so. In the circumstances, given the way in which Mr Flowers approached the matter, I will direct him within 21 days, if so advised, to furnish me with written submissions not exceeding five pages, setting out the reasons why, in his opinion, the interests of justice in this case lead to the conclusion that a jury should determine whether he can demonstrate that he was prosecuted without reasonable and probable cause and maliciously and if so, the quantum of any damages to which he is entitled. 
In anticipation of receiving those submissions by 5 June 2020, I will appoint Friday 12 June 2020 before me at 9.30am for judgment and further directions. 
Finally I should note that Mr Flowers has appeared throughout in these proceedings without legal advice or assistance or representation. The courts necessarily extend significant latitude to people in his position in order that indolence or suspicion or even choice should not frustrate the prospect of securing the protection of the law and the vindication of a right or access to justice. However, Mr Flowers is not alone in craving his day in court. The resources of this Court and others like it are finite and delays are often unavoidable despite the best efforts of all concerned. Mr Flowers wants his case heard and the State of New South Wales evidently shares his view. In such circumstances it is very important that Mr Flowers not become diverted by unhelpful voices chattering on the sidelines or by loud drums being beaten by folk with unhelpful agendas that are inevitably destined to frustrate his progress before eventually discarding him and moving on to their next target. There must necessarily be a limit to the amount of valuable court time Mr Flowers (or anyone like him) can be permitted to dedicate to silly arguments or confected obsessions that clog the court and waste everybody’s time without advancing his case.

04 July 2020

Algorithms and Democracy

'Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse' from AlgorithmWatch comments
 A healthy democracy needs informed citizens. People are expected to be aware of important issues and public affairs in order to provide feedback on the political system. Therefore, a diversity of viewpoints is considered a core democratic value and one of the core public values in media law and policy. With the increasing importance of intermediaries, the question arises whether algorithmic news curation is a threat to democracy. 
Fears that algorithmic personalization leads to filter bubbles and echo chambers are likely to be overstated, but the risk of societal fragmentation and polarization remains
  • The fear that large parts of the population are trapped in filter bubbles or echo chambers seems overstated. Empirical studies offer a much more nuanced view of how social media affects political polarization. Due to the fact that our information repertoires are still broadly dispersed, we adapt our worldview, allow opposing opinions, and are exposed to relevant societal issues. Several studies show that incidental exposure and network effects can even contribute to an expansion of the diversity of information. 
  • Nevertheless, there is evidence for possible polarization at the ends of the political spectrum. The current state of research permits the assumption that echo chambers may arise under certain circumstances; that is, facilitated by homogeneous networks, highly emotionalized and controversial topics, and strong political predispositions. In particular, social media logics can reinforce affective polarization because the features of social media platforms can lead to very stereotypical and negative evaluations of out-groups. 
  • Moreover, social media may indirectly contribute to polarization by facilitating a distorted picture of the climate of opinion. As a result, spiraling processes begin because the perception of the strength of one’s own opinion camp compared to those of other camps is overstated. The entire process leads to an overrepresentation of radical viewpoints and arguments in the political discourse. At this point during the opinion formation process, individuals are more vulnerable to being influenced by “fake news” on Facebook or Twitter. Thus, strategic disinformation cannot only influence the media’s agenda through specific agenda-setting effects but can also impact the public’s climate of opinion. 
Social media are vulnerable to facilitating a rapid dissemination of disinformation, but exposure seems to be limited
  • There are orchestrated disinformation campaigns online, but data on the actual scope of and exposure to disinformation is scarce. 
  • The few available scientific studies suggest that the extent of the problem is likely to be overestimated since exposure to disinformation seems to be rather limited. 
  • Studies on the effects of disinformation on users show no persuasive effects but a confirmation bias: disinformation may therefore widen existing gaps between users with opposing worldviews because it is able to confirm and strengthen pre-existing attitudes and (mostly right-wing) worldviews. In this context political microtargeting poses a concern, as it can be used to disseminate content tailored to target groups particularly susceptible to disinformation. 
  • More research on the scope of, interaction with and individual and societal effects of disinformation is crucial to better assess the actual extent of the problem regarding disinformation. 
Social media contribute to the dissemination of incivility and hate speech
  • Incivility appears to be significantly widespread online and has real, negative effects on individual attitudes and the discourse climate. 
  • A potentially serious problem is the indirect effects of incivility on recipients and journalists: the credibility of journalistic content is reduced by incivility, including hate speech, in comment sections, which can have detrimental effects on trust in journalism as an institution of social cohesion in the long term. 
  • In addition, empirical evidence indicates that journalists react to incivility directed at them by shying away from controversial reporting or trying to hide controversies as a reaction to incivility. This is worrying because it hinders the free development of democratic discourse. 
  • A related problem is that women in particular, who have been victims of hate speech, stop participating in discussions. This again backfires on the free development of public discourse at the macro level, if whole groups of the population are cast out.  
  • Discourse moderation in comment sections that focuses on sociable replies to comments by journalists seems to be an effective tool in containing and preventing incivility, including hate speech. 
  • Measures inhibiting freedom of expression have to be carefully applied and can only be used to combat illegal content such as hate speech. 
Research agenda for platform governance 
It can be stated that fears of filter bubbles and echo chambers seem overstated. Echo chambers and polarization seem to emerge only at the fringes of the political spectrum. Worrisome, however, are indications that social media may indirectly contribute to polarization by facilitating a distorted picture of the climate of opinion. 
There are reasons for vigilance in the cases of disinformation and incivility. A serious assessment of the extent of the problem of disinformation is hampered by the scarcity of the available scientific data. The available studies suggest that an excessively alarmist political and societal debate is to be avoided, but the actual scope of the issues remains unclear. Incivility and hate speech are prevalent phenomena that should be tackled with evidence-based policy measures. That means that (further) regulation, bans or deletion of content, which entails legal problems, are not necessarily the best solution. From the perspective of communication science, the main goal of any intervention should be to strengthen a reasonable, fruitful and free democratic discourse. 
In this context, we emphasize that existing successful approaches (e.g., community management in the form of moderation that does not entail deleting content to contain and prevent incivility) should be extended and consistently applied. In addition, further scientific evidence is needed, in particular on disinformation, in order to investigate the extent of the phenomenon and its consequences for public discourse and society in more detail, so that evidence-based measures can be developed. From a communication science perspective, it is precisely at this point that regulation appears most useful, especially with respect to demystifying the inner workings of “black box” algorithms and providing relevant data for research purposes. Hence, without access to Google’s or Facebook’s internal data, it is hard to reach firm conclusions. Therefore, it is critical to continue monitoring the evolution of digital news markets and the ways in which users are exposed to news on social media platforms. In particular, structural changes in the news market require the attention of regulators and policy makers. Intermediaries establish economic power and create new dependencies.
The 69-page study concludes
Platform Transparency Deficits 
Information intermediaries are increasingly important actors in high-choice media environments. They change the structures and processes of how communication in digitalized societies proceeds—with potentially profound consequences for the functioning and stability of our democracies. In contrast to existing media organizations, intermediaries wield far broader power because they internalize markets: Users of intermediaries choose their sources within an environment whose logic is set by the platform itself. The open market—whether the selection of newspapers at a newsstand or the list of channels available on TV— is typically regulated to prevent anti-competitive behavior, and it guarantees some degree of transparency and a level playing field. Market participants can evaluate the behavior of competitors simply by “walking over and having a look”. Internal markets for news, however, are opaque to individual and institutional observers. Within them, users of intermediaries are presented with a personalized pre-selection of content, but neither other users nor content producers can easily identify what those are. This implies two transparency deficits:
(1) Individual users only see their own recommendations. They have no way of knowing what information was hidden from them, and 
(2) they cannot observe what information was presented to other users.
Outside actors (such as competitors, content suppliers, media authorities, and researchers) suffer from these limitations: They have no way of observing the treatment and behavior of individuals or groups of users. Information intermediaries, therefore, create new potential impacts (through personalization), along with detailed measurements thereof (creating what Webster, 2010 termed “user information regimes”)—but hide both within a proprietary product. As intermediaries’ importance to public opinion formation and political processes grows, societies will need to encourage effective transparency in order to safeguard a level playing field in information dissemination. As identified above, two elements are crucial for such transparency to work:
(1) Individual users should be empowered with regard to the recommendations presented to them—they should be able to download everything that was presented, along with a sensible description of the processes that produced this exact set. Doing so would help individuals understand whether they were the subject of a biased selection and enable them to seek legal recourse in that case. Such a model is unproblematic from the perspective of user privacy, as users could only receive information which they can view anyway, yet it would facilitate the act of “whistleblowing” in the case of perceived wrongdoing. 
(2) An arguably more difficult goal would be the creation of some form of transparency that encompasses not individual users but the overall impact of intermediaries. Such a measure is necessary for monitoring the societal influence that arises from the use of platforms, including endogenous algorithmic effects but also encompassing external factors, such as manipulation attempts from malevolent third parties (such as hackers and foreign government agencies). Transparency on this level could, for example, take the form of providing privileged parties (state offices, researchers or trusted NGOs) with accurate aggregate information on the demographic makeup of user populations, the prevalence of news usage and other key insights. It might also be worth considering making less detailed data publicly available, but such a decision would need to be weighed against the potential negative effect of facilitating targeted manipulation.
 In general, transparency should help users make informed choices. But it is clear that “digital transparency—whether it is enacted through technological solutions or more classical administrative and organizational means — will not on its own provide an easy solution to the challenges posed by the growing role of platforms in political and public life” (Gorwa and Ash, 2020, p. 20). The ongoing debates on the implementation of the transparency rules in the German Media State Treaty clearly support this statement (Dogruel, Stark, Facciorusso, and Liesem, 2020).
Key Data Needs 
The combination of novel impacts, and a lack of transparency, create three distinct classes of threat to societies. Protecting against each threat requires access to specific data—most crucially the individual personalized results presented to users, the actions taken by intermediaries to change the flow of information, and the overall impact on the whole user population—that is currently unavailable (Tucker et al., 2018). 
(1) As illustrated by existing anti-trust cases against technology companies, it is possible that (for whatever reason) intermediaries fail to adhere to internal or external guidelines, resulting in detrimental treatment of users, advertisers, content providers, or other actors. In contrast to past well-documented legal disputes (e.g., in competition regulation), affected parties will have difficulties monitoring for, detecting, and collecting evidence of unfair treatment because (due to personalization and highly individual behavior) they do not have access to the recommendations that intermediaries produce for their users. Assessing (and proving) unfair bias in intermediaries would require access to a representative set of recommendations, so that differences in consumption could be clearly attributed. Consider, for example, a hypothetical search engine that systematically alters access to political cross-cutting information—displaying only conservative results to conservatives and only liberal results to liberals. Creating a legal case against such a platform would require access to a representative set of users (anecdotal evidence could always be discounted as spurious or as chance findings). For each of those users, researchers should identify the political leaning, and then record the search results they obtained. 
Such data collection is typically unfeasible in practice for two reasons: (a) Platforms offer no way of accurately recording the output of an intermediary for a single user. Neither third parties, nor the users themselves, have access to technical interfaces that would show a comprehensive dataset of personalized recommendations, such as the personal news feed on Facebook. Even though users can access the feed visually in their browser, considerable effort would be required to extract it in an automated fashion (i.e., through web scraping). Furthermore, there is currently no company that provides such data, and researchers’ capabilities to obtain them independently are increasingly limited by the locked nature of proprietary smartphones (Jürgens, Stark and Magin, 2019). The second factor that encumbers research is (b) the unavailability of a suitable sampling strategy. Intermediaries are, of course, cognizant of their entire population of users along with key socio-demographic data (which is required, e.g., for selling personalized advertisement spaces). Researchers, on the other hand, usually have no way of creating random samples from a platform’s population. Since the available information on the socio-demographic makeup of the userbase is typically drawn from (moderately sized, i.e., N in the thousands) surveys, attempts to create samples which are representative, with regard to the intermediary’s national users, offer somewhat limited precision (Jürgens et al., 2019). 
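If such paired recordings could ever be obtained, the bias assessment the study describes reduces to comparing how much users' result lists overlap within a political-leaning group versus across groups. A minimal sketch of that comparison, with entirely hypothetical users, leanings and result lists:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two result lists (1.0 = identical, 0.0 = disjoint)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def audit_personalization(results_by_user, leaning_by_user):
    """Average result overlap within vs. across leaning groups.

    A markedly lower cross-group overlap than within-group overlap is the
    kind of systematic difference a legal case would need to demonstrate.
    """
    within, across = [], []
    for u, v in combinations(results_by_user, 2):
        sim = jaccard(results_by_user[u], results_by_user[v])
        (within if leaning_by_user[u] == leaning_by_user[v] else across).append(sim)
    avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return {"within_group": avg(within), "cross_group": avg(across)}

# Hypothetical recorded result lists for four users of known leaning:
results = {
    "u1": ["a", "b", "c"], "u2": ["a", "b", "d"],   # conservative users
    "u3": ["x", "y", "z"], "u4": ["x", "y", "c"],   # liberal users
}
leanings = {"u1": "con", "u2": "con", "u3": "lib", "u4": "lib"}
print(audit_personalization(results, leanings))
```

With this invented data, within-group overlap (0.5) dwarfs cross-group overlap (0.05), the pattern that would suggest systematic cross-cutting filtering. Real audits would of course need representative samples, which is exactly the obstacle the study identifies.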
(2) Outside actors (individual, institutional and state-sponsored) frequently attempt to manipulate intermediaries in order to gain influence over citizens—e.g., through misinformation, disinformation, and the manipulation of public opinion perception (Lazer et al., 2017; Magin et al., 2019). Although intermediaries spend commendable resources on the containment and removal of such attempts, two risks remain which are outside the reach of the companies themselves: (a) Without access to large-scale data containing purported manipulation attempts, external watchdogs cannot perform independent audits in order to identify overlooked external influences. (b) Problematic content is also routinely deleted, so that external watchdogs cannot scrutinize and understand those attacks (an exception is Twitter, which regularly publishes datasets on malevolent campaigns). Intermediaries should provide trustworthy actors with a way to perform their own, independent attempts at large-scale detection of manipulation, including data that were already removed by in-house systems. The simplest strategy that would enable such attempts is simply making the raw public flow of information accessible through a technical interface (an API), as Twitter has done: It not only offers a true random sample of tweets, but includes information about which of them are removed later on. Furthermore, the company offers a growing list of datasets containing all content produced by bot networks and state-sponsored manipulation campaigns. Platforms with more restrictive privacy contexts such as Facebook (where much of the flow of information is not visible to the broader usership or public) could still allow automated analyses, for example by offering to run researchers’ models without providing access to the data itself. 
(3) In addition to institutional actors, harmful dynamics (such as radicalizing echo chambers, Stark et al., in press) may develop within intermediaries, even in the absence of external influence. Such dynamics need not have a clearly identifiable culprit; they could equally arise from the interaction of multiple individuals that leads to a mutual reinforcement of harmful tendencies. Detecting such structural phenomena is contingent on a complete picture of users’ interaction networks. Furthermore, singular snapshots do not provide much insight; instead, the development must be traced over time in order to assess the true impact as well as causes. Researchers should, therefore, gain access to representative, longitudinal, and detailed data on those (semi-public) parts of intermediaries that pertain to public debates. This includes first and foremost the discussions surrounding media, politics, and social issues. 
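The structural monitoring described in (3) can be illustrated with a toy metric: the share of interaction edges that connect like-minded users, tracked across longitudinal snapshots. A rising share is one crude signal of a segregating discussion network. A sketch with invented users, leanings and who-replies-to-whom edges:

```python
def homophily(edges, leaning):
    """Share of interaction edges connecting users of the same political
    leaning; values approaching 1.0 suggest an increasingly closed network."""
    same = sum(1 for u, v in edges if leaning[u] == leaning[v])
    return same / len(edges) if edges else 0.0

# Hypothetical monthly snapshots of reply edges among four users:
leaning = {"a": "L", "b": "L", "c": "R", "d": "R"}
snapshots = {
    "2020-01": [("a", "c"), ("b", "d"), ("a", "b"), ("c", "d")],
    "2020-02": [("a", "b"), ("a", "b"), ("c", "d"), ("b", "d")],
    "2020-03": [("a", "b"), ("c", "d"), ("c", "d"), ("a", "b")],
}
trend = {month: homophily(e, leaning) for month, e in snapshots.items()}
print(trend)  # a rising series would indicate segregating networks
```

As the study stresses, a single snapshot says little; only the trajectory over time (here 0.5 → 0.75 → 1.0 on toy data) hints at mutual reinforcement, and computing it at scale presupposes exactly the representative, longitudinal access that researchers currently lack.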
New Models for Partnerships 
Some attempts have been made to increase transparency; while they attempt to address the issues outlined above, they have not yet achieved any significant success. Following an initiative from King and Persily (2019), a consortium of scientists cooperated with Facebook in order to create an institution (Social Science One) and a process that would allow scholarly usage of a limited collection of pre-defined datasets. The project received much criticism from scientists, who warned that it would decrease transparency in research practices, lead to a dependence on Facebook that would encumber critical research, and create divisions between privileged “insiders” with access to data and the rest of the field (Bruns, 2019c). So far, Facebook has also failed to facilitate the agreed-upon access, frustrating even the existing scientific partners (Statement from the European Advisory Committee to Social Science One). Despite its pragmatic appeal, the cooperative model underpinning Social Science One has a fatal conceptual flaw: Even though it provides some access to some data, that access is pre-defined and limits researchers to a specific approach in tackling the above-mentioned threats. Participating teams are prevented from finding problems, answers, and solutions that intermediaries themselves did not identify. A cooperation between an intermediary and scientific partners can only succeed in generating trust if researchers are given the freedom to seek and find potential negative effects. Where such inquiries are prohibited ex ante, through pre-defined datasets or topical questions, both sides suffer from a lack of credibility. 
There is also a deeper issue to the proposed cooperative model: Just as independent scholarly work from different institutes is required for long-term trustworthy, rigorous, and reliable scientific insights, independence is a defining feature of work on intermediaries. Only when external observers are free to implement an autonomous process for data collection, analysis, and interpretation can they serve as the much-needed check and balance that society requires (and demands). Researchers’ ability to do so ultimately hinges on two key ingredients, both mentioned above: The capacity to obtain or create high-quality representative samples, and the availability of tools that record digital content, recommendations and user behavior within intermediaries. While the first is certainly possible (if perhaps expensive), the second remains under strong pressure from the progressive “lock-down” of platforms and mobile devices (Jürgens et al., 2019).  
Diversity as Policy Goal 
From a normative point of view, diversity is the key term: “Media pluralism and diversity of media content are essential for the functioning of a democratic society,” as the Council of Europe (2007) put it, because functioning democracies require all members of society to be able to participate in public debate — and to be able to fully participate in this democratic debate, citizens need to encounter a diversity of high-quality information and opinions. Or, as the Council of Europe (2007) adequately phrased it, the right to freedom of expression “will be fully satisfied only if each person is given the possibility to form his or her own opinion from diverse sources of information.” The close link between media diversity and democratic participation may also explain the scope of the public debate about the rise of platforms and their growing influence in the information landscape. The advances of at least some of these platforms into the business of distributing and aggregating media content have fundamentally changed our news ecosystem. In this context, a very important aspect is the newly created economic dependencies. The better part of the advertising revenue flows to the platform providers, Google and Facebook. This raises the normative question: to what extent should diversity matter in the context of social media platforms? 
Communications researchers (e.g., Moeller, Helberger, and Makhortykh, 2019) emphasize that to preserve a news ecosystem that has come under considerable strain, the public needs to receive the curated news supply provided by traditional mass media. How long traditional mass media can still perform this function is uncertain. If, at some point, it no longer can, the underlying conditions of the news ecosystem, and consequently of opinion formation, will fundamentally change. If traditional mass media disappear, high-quality news products will no longer be available, and the impact of low-quality information on processes of opinion formation will rise. If only softened and highly personalized content is distributed, the news ecosystem will change dramatically, with potentially negative consequences for democratic societies.
Against this background, the social dynamics of media diversity become apparent (Helberger, 2018, p. 156). Thus, “media diversity on social media platforms must be understood as a cooperative effort of the social media platform, media organizations, and users. The way users search for, engage with, like, shape their network, and so forth has important implications for the diversity of content, ideas and encounters that they are exposed to. Similarly, the way the traditional media collaborate with information intermediaries to distribute content and reach viewers impacts structural diversity” (Helberger, 2018, p. 171). Put differently, future diversity policies must therefore go beyond the traditional framework and generate a new conception of media diversity, which addresses the different actors (platforms, users, and media organizations) together. These future policies must, first and foremost, ensure that diversity reaches users. 
A potential way of increasing exposure diversity could be to employ a design that focuses on serendipity and/or on diversity as a principle (see considerations on “diversity by design”: Helberger, 2011). Such a design would, for example, focus less on search engine ranking and would encourage users to have a closer look at different teasers and click on more results. Besides, users should have the opportunity to choose between, or weight, different filtering and sorting criteria. Such changes could also create more diversity in Facebook’s news feed, e.g., Facebook could implement the ability for users to adopt a different point of view, exposing them to a feed with totally new perspectives (Ash, Gorwa, and Metaxa, 2019). 
As the debate about the impact of algorithmic news recommenders on democracy is still ongoing, diversity-sensitive design should be taken into account as part of a possible solution. For such solutions to work, it should be clear that different perspectives on the democratic role of news recommenders imply different design principles for recommendation systems (Helberger, 2019); that is, an explicit normative conception of their democratic potential is critical. It may also become clear that we need to work towards a coherent mix of appropriate government regulation, co-regulation, and platform-specific self-regulation in order to minimize the negative effects of the threats discussed.

Currencies

'After Libra, Digital Yuan and COVID-19: Central Bank Digital Currencies and the New World of Money and Payment Systems' (European Banking Institute Working Paper Series 65/2020) by Douglas W. Arner, Ross P. Buckley, Dirk A. Zetzsche and Anton Didenko comments
Technology, money and payment systems have been interlinked from the earliest days of human civilization. But of late technology has reshaped money and payment systems to an extent and speed never before seen. Milestones include the establishment of M-Pesa in Kenya in 2007 (creating mobile money systems), Bitcoin in 2009 (triggering in time the explosive growth in distributed ledger technology and blockchain), the announcement of Libra in 2019 (triggering a fundamental rethinking of the potential impact of technology on global monetary affairs), and the announcement of China’s central bank digital currency – the Digital Currency / Electronic Payment (DCEP), referred to herein as the Digital Yuan (marking the first launch by a major economy of a sovereign digital currency).
The COVID-19 pandemic and crisis of 2020 has spurred electronic payments in ways never before seen. In this paper, we ask the question: In the context of the crisis and beyond, what role can technology play in improving the effectiveness of money and payment systems around the world? 
This paper analyses the impact of distributed ledger technologies and blockchain on monetary and payment systems. It particularly considers the policy issues and choices associated with cryptocurrencies, stablecoins and sovereign (central bank) digital currencies. We examine how the catalysts reshaping monetary and payment systems around the world – Bitcoin, Libra, China’s DCEP, COVID-19 – challenge regulators and give rise to different levels of disruption. While the thousands of Bitcoin progenies could safely be ignored by regulators, Facebook’s proposed Libra, a global stablecoin, brought an immediate and potent response from regulators globally. This proposal by the private sector to move into the traditional preserve of sovereigns – the minting of currency – was always likely to provoke a roll-out of sovereign digital currencies by central banks. China has moved first, among major economies, with its Digital Yuan – the initiative that may well trigger a chain reaction of central bank digital currency issuance across the globe.
In contrast, in the COVID-19 crisis, we argue most central banks should focus not on rolling out novel forms of blockchain-based money but rather on transforming their payment systems: this is where the real benefits will lie both in the crisis and beyond. Looking forward, neither the extreme private nor public model is likely to prevail. Rather, we expect the reshaping of domestic money and payment systems to involve public central banks cooperating with (new and old) private entities which together will provide the potential to build better monetary and payment systems at the domestic and international level. Under this model, for the first time in history, technology will enable the merger of the monetary and payment systems.

'Cryptocurrency Is Garbage. So Is Blockchain' by David Golumbia comments 

The entire cryptocurrency/blockchain space is dominated by false claims, conspiracy theories, muddled thinking, and outright fraud of many kinds. It is remarkable how much academic, journalistic and popular writing on blockchain accepts at face value dogma that any dispassionate investigation shows to be false. This paper consists mainly of two lists: the first, falsehoods that nobody who is interested in the world as it really is should ever repeat, at least not without heavy qualification; the second, truths and rules of thumb about cryptocurrency and blockchain that have been demonstrated repeatedly (often for many years) but escape notice far too often. Each item in the lists is accompanied by some, but only a small subset, of the evidence available to support it.

Datafication

'Datafication and the Welfare State' by Lina Dencik and Anne Kaun in (2020) 1(1) Global Perspectives 12912 comments
 Both vehemently protected and attacked in equal measure, the welfare state as an idea and as a policy agenda remains as relevant as ever. It refers not only to a program of social welfare and the provision of social services, but also to a model of the state and the economy. According to Offe (1984), the welfare state in advanced capitalist economies is a formula that consists of the explicit obligation of the state apparatus to provide assistance and support to those citizens who suffer from specific needs and risks characteristic of the market society, and it is based on a recognition of the formal role of labor unions in both collective bargaining and the formation of public policy. Although actively dismantled in recent decades as globalization and neoliberalism have taken hold of much of the modern world-system, its future continues to be fought over. It serves as a model for society that is seen to privilege a commitment to decommodification, universal access, and social solidarity as a way to overcome the most prominent contradictions of capitalism. A product of the twinned global crises of the Great Depression and the Second World War, the modern welfare state therefore encapsulates a moment of political and economic settlement, a mechanism of stabilization that arguably could emerge only out of such crises. 
From the outset, technology, particularly information and communication technologies, has played a key role in the development of the welfare state (Hobsbawm 1994). It has been instrumental in the creation of bureaucracies and forms of population management that have long been central to the way the welfare state is administered. Gunnar and Alva Myrdal, for example, famously argued for social engineering based on statistics and the use of technology to solve the population crisis of Sweden in the 1930s and 1940s. Their suggestions are now considered central to the ideas and cornerstones of the Nordic welfare state model (Kananen 2014). The creation of databases and the monitoring of citizens were, from early on, a fundamental part of assessing population needs and determining the allocation of resources, a type of surveillance that has been the subject of much critique for creating categories of “deserving” and “undeserving” citizens (Offe 1984). At the same time, the advent of digitization has also been seen as a challenge to the welfare state and its ability to deliver on its promises, disrupting labor relations, undermining social security, and changing the parameters of state governance. With growing trends such as mass data collection, automation, and artificial intelligence, these tensions have only intensified, putting the welfare state into further question (Petropoulos et al. 2019).
At the time of writing this introduction, the question of not only the future of the welfare state but also how technology intersects with it has gained new pertinence as we find ourselves in the midst of another global crisis. The global pandemic brought about by the rapid spread of COVID-19 has put social welfare questions and the role of the state at the top of the agenda once more. The crisis is seen to have prompted a return of the Leviathan state, a social contract with an absolute sovereign in which the state provides the ultimate insurance against an intolerable human condition (Mishra 2020), and it has provided renewed impetus for demands for universal health care, stable employment, and a basic income (Standing 2020). Certainly, initial responses to the pandemic and ongoing lockdowns across the world have aligned around state interventions in the economy not seen in a generation, with governments designing various packages of increased public spending, which has (re)invited a rhetoric of the importance of economic planning and strong social security. 
Technology is proving to be at the heart of this crisis and how the welfare state might emerge from it. As “social distancing” speeds up the transition to social and economic life online, often presented as a seamless process, Big Tech has quickly (in partnership with governments) established itself as our (new) infrastructure for everything from health to education to work (Bharthur 2020). At the same time, Big Tech is also presented as a solution to the crisis through extensive data collection, contact tracing, and certification. At the time of writing, the big data analytics company Palantir is in talks with a number of governments, including those of the United Kingdom, Germany, and France, to provide data infrastructure for health services during the pandemic, and Google and Apple have announced a joint venture to develop infrastructure for contact-tracing apps that determine if an individual has been in close proximity to someone who has tested positive for COVID-19 (Fouquet and Torsoli 2020; Kelion 2020). Furthermore, the EU Commission has requested metadata from large mobile phone carriers, including German Telekom and Orange, to calculate mobility patterns and track the spread of the coronavirus across Europe (Scott, Cerulus, and Kayali 2020). It is claimed that only anonymized and aggregated data will be collected, and that the data will be used not to control or sanction lockdown measures but to predict where medical supplies will be needed most.
These initiatives introduce new questions about the nature of surveillance in governance, the place of data protection frameworks such as the EU’s General Data Protection Regulation (GDPR), and the role of private companies in the delivery of public services that all form an important part of the contemporary debate on technology and the welfare state. As Baker (2020, para. 13) puts it, “for governments looking to monitor their citizens even more closely, and companies looking to get rich by doing the same, it would be hard to imagine a more perfect crisis than a global pandemic.” Moreover, the turn to data and the reliance on data-driven systems in governance introduces key epistemological and ontological assumptions about what constitutes relevant social knowledge for decision-making and how individuals and populations should be understood and managed. Data, on this premise, needs to be collected in as large a quantity as possible (total information capture) and processed through automation, with a view to calculating all possible outcomes (a knowing of all risks) so as to preempt them before they occur (Andrejevic 2019). While a global crisis like the one we are currently in might present itself as a state of exception in these terms, the trend of datafication across social life is one that was already firmly in place.
What does it mean to organize the welfare state around this trend of datafication? With this special issue, we take stock of this question and explore the multiple ways in which the practices, values, and logics that underpin the advancement of datafication intersect with the practices, values, and logics that form the basis of the public services that we commonly associate with the modern welfare state. The idea for this special issue emerged out of discussions in the Nordic research network Datafication, Data Inequalities and Data Justice, of which we are both members. It is perhaps no surprise that it is a Nordic context that spurred on the engagement with the welfare state as this has long been a central feature of Nordic societies, both as an idea and in practice. However, the question of how datafication impacts public services, particularly in relation to social welfare, is a global one and one that cannot be universalized, whether in terms of data-driven developments or their implications (Milan and Treré 2019). At the same time, the history of the modern welfare state is one that has most frequently been associated with Europe in what Judt (2007) has described as the “social-democratic moment” of the postwar period. This history is reflected in our contributions that predominantly engage with European and Western settings, while doing so in the context of globalization. Many of the issues discussed in our contributions are being raised elsewhere as technology infrastructures globalize and standardize practices (cf. Booth 2019).

Data Localisation

'“Data localization”: The internet in the balance' by Richard D. Taylor in (2020) 44(8) Telecommunications Policy comments
There is a steady global trend towards “Data Localization,” laws by which data is required to be maintained and processed within the geographic boundaries of its state of origin. This development has raised concerns about its possible adverse impacts on emerging data-intensive technologies such as Cloud services/E-commerce, Big Data, Artificial Intelligence and the Internet of Things (collectively, the Embedded Infosphere). The inability to reach an international agreement on rules for cross-border data flows may have significant adverse consequences for all future users of the Internet. 
The basis of Data Localization is grounded in two distinct but inter-related policy models: Data Sovereignty and Trans-Border Data Flows (TBDFs). These two concepts have different origins. “Data Sovereignty” is derived from the historic power of a state to exercise absolute and exclusive control within its geographic borders. Policies behind TBDFs arose in Europe following World War II, primarily motivated by Nazi use of early proto-computers to help round up Jews and others. As they have evolved, TBDF policies have been directed primarily at protecting personal data and privacy.
This article first examines the issues of: 1) “Information Sovereignty” and 2) TBDFs. It then describes the arguments for and against “Data Localization,” offers some examples of strong localization policies (Russia, China), and summarizes contesting policy proposals. It then contextualizes TBDF with issues of human rights (free flow of information) and privacy. 
While the utility of an international agreement on TBDFs is clear, the differences in approaches are tenacious. For the free-market developed world (e.g., EU, OECD), the path forward seems to lead through policy convergence to compatible rules, with differentiated levels of data protection and accountability. It is far from clear whether these rules will address, in a mandatory way, issues of the “free flow” of information in the human rights sense. At the same time, there are countries (e.g., BRICS), representing a majority of the world's population, in which political and cultural resistance will produce stringent Cyber Sovereignty and Data Localization policies with few if any human rights components.
The article concludes that the more the Internet is “localized”, the more attenuated its benefits will become. The negative consequences of Data Localization will become increasingly obvious as new, data-intensive technologies become ubiquitous, creating a condition of “Data Dependence”. It is projected that in the future the nations with the least amount of Data Localization and the most open flow of information will be the most successful in benefiting from new data-intensive embedded, networked technologies. This will most likely be characterized by values adopted as policies and practices in the EU.

Apple and Competition Policy

'The Antitrust Case Against Apple' by Bapu Kotapati, Simon Mutungi, Melissa Newham, Jeff Schroeder, Shili Shao and Melody Wang comments
This article explores several potential antitrust claims against Apple - namely tying, essential facilities, refusal to deal and monopoly leveraging. We argue that the Apple ecosystem's large revenue share in terms of app transactions, lock-in effects and consumers' behavioral bias in online markets give the iPhone maker monopoly power as a mobile platform. Apple has exploited its market power to illegally tie the distribution of digital goods to its proprietary in-app purchase system to impose a 30% tax and extract supracompetitive profits, leading to higher app prices and reduced innovation. Moreover, Apple has excluded rivals and favored its own apps by downgrading competitors' discovery and promotions, blocking certain rivals entirely from the App Store, and limiting others' access to key APIs, in some cases right after copying their apps. In conjunction with the discriminatory application of the 30% tax, Apple's conduct towards major multi-homing apps such as Spotify reduces cross-platform competition with Android. These anticompetitive practices prolong and expand Apple's monopoly at the expense of competition.

02 July 2020

Spam

The Australian Communications and Media Authority (ACMA) has announced that dominant retailer Woolworths Group Limited has paid a $1,003,800 infringement notice and agreed to a court-enforceable undertaking in response to significant breaches of the Spam Act 2003 (Cth). The penalty is the largest issued by ACMA under that Act.

ACMA found over five million breaches of the Act by Woolworths in marketing emails to consumers between October 2018 and July 2019 after those people had unsubscribed from previous messages.

ACMA states its investigation found Woolworths’ systems, processes and practices were inadequate to comply with spam rules, with ACMA executive Nerida O’Loughlin commenting
The spam rules have been in place for seventeen years and Woolworths is a large and sophisticated organisation. The scale and prolonged nature of the non-compliance is inexcusable. Woolworths failed to act even after the ACMA had warned it of potential compliance issues after receiving consumer complaints. 
Consumers claimed that they had tried to unsubscribe on multiple occasions or for highly personal reasons, but their requests were not actioned by Woolworths because of its systems, processes, and practices.

In its court-enforceable undertaking, Woolworths has committed to appoint an independent consultant to review its systems, processes, and procedures, to implement improvements, and to report to ACMA. Woolworths has also committed to undertake training, and to report all non-compliance it identifies to ACMA for the term of the undertaking.

O'Loughlin comments
Our enforcement action, a substantial infringement notice and a comprehensive three-year court-enforceable undertaking, is commensurate with the nature of the conduct, the number of consumers impacted, and the lack of early and effective action by Woolworths. ACMA’s actions should serve as a reminder to others not to disregard customers’ wishes when it comes to unsubscribing from marketing material.