07 March 2025

Platforms

'Consent as Friction' by Nikolas Guggenberger in (2025) 66 B.C. Law Review comments 

The leading technology platforms generate several hundred billion dollars annually in revenue through algorithmically personalized advertising—with pernicious effects on our privacy, mental health, and democracy. To fuel their data-hungry algorithms, these platforms have long conditioned access to their services on far-reaching authorizations, embedded in boilerplate terms, to extract their users’ data. Until recently, privacy-sensitive alternatives were unavailable—even for a premium. Users faced a stark choice: submit to surveillance or forgo digital participation. I term this business practice “surveillance by adhesion.” 

In July 2023, however, the European Court of Justice ruled in Meta Platforms Inc. v. Bundeskartellamt that surveillance by adhesion violated the European Union’s General Data Protection Regulation. To comply with the EU’s new regulatory paradigm, the leading (predominantly American) platforms must fundamentally revise their business models by either abandoning personalized advertising or obtaining individuals’ informed consent. In practice, the EU’s stringent guard-rails—which mandate providing users with “real choice” beyond mere consent pop-ups and granular control—may render user consent so onerous to secure, precarious to sustain, restrictive to operationalize, and prone to litigation that they undermine the commercial viability of personalized advertising. Rather than empowering users to exercise control over their data, the consent mechanism may thus manifest as a vehicle for welcome friction, prompting a shift toward less invasive contextual advertising. 

Building on these insights, this Article contends that U.S. policymakers and regulators should, and indeed can, likewise leverage consent as friction to undermine the economic viability of personalized advertising and other harmful surveillance-driven business models. This approach offers a pragmatic alternative to failed notions of user control over data, especially as democratic data governance too often remains beyond reach. Although the EU’s new regulatory paradigm offers one model, there are multiple avenues to harness consent as a source of friction across different legal contexts. In fact, state-level biometric privacy laws exemplify this strategy’s efficacy domestically. Their qualified consent requirements have thrown so much sand in the gears of biometric data collection and use that several leading technology companies have refrained from launching intrusive facial recognition applications altogether. By adopting this friction-based strategy, the Federal Trade Commission and state privacy enforcers can effectively establish potent data usage limitations. 

13 February 2025

Scams

The Scams Prevention Framework (SPF) Bill has been passed by the national Parliament. The expectation is that it will set out consistent and enforceable obligations for businesses in key sectors, with overarching principles for compliance by all members of designated sectors. 

The ACCC has announced that the Commission will 'closely monitor regulated entities’ compliance with principles to prevent, detect, disrupt, respond to and report scams'. The legislation empowers the ACCC to investigate potential breaches and take enforcement action where entities do not take reasonable steps to fulfil obligations under the principles, with fines of up to $50 million and scope for consumers to seek redress from regulated businesses. The ACCC will be involved in development of the formal designation of sectors, sector codes, and consumer and industry guidance. The initial sectors will be banks, certain digital platforms (including social media) and telecommunications providers. 

Under the Framework, the ACCC will enforce the digital platforms sector scams code and take enforcement action where digital platforms breach obligations. The Australian Securities and Investments Commission will be the regulator for the banking sector code. The Australian Communications and Media Authority will be the regulator for the telecommunications sector code. The Australian Financial Complaints Authority (AFCA) will be the single external dispute resolution body under the new Framework. 

A Treasury Minister may, by legislative instrument, designate one or more businesses or services to be a regulated sector for the purposes of the Framework. This designation instrument is subject to Parliamentary scrutiny through the disallowance process and sunsetting. The Treasury Minister may designate an individual business or service, or designate businesses or services by class, meaning that the Minister may in effect designate specific entities to be a 'regulated sector' within a designation instrument. 

Without limiting the businesses or services that may be designated, a Treasury Minister may designate the following classes of businesses or services to be a regulated sector (or a subset of those businesses or services): 

 • banking businesses, other than State banking (within the meaning of paragraph 51(xiii) of the Constitution) not extending beyond the limits of the State concerned; 

• insurance businesses, other than State insurance (within the meaning of paragraph 51(xiv) of the Constitution) not extending beyond the limits of the State concerned; 

• postal, telegraphic, telephonic or other similar services (within the meaning of paragraph 51(v) of the Constitution), which can include, but is not limited to:

  - carriage services within the meaning of the Telecommunications Act;

  - electronic services within the meaning of the Online Safety Act 2021, such as social media services within the meaning of that Act; and

  - broadcasting services within the meaning of the Broadcasting Services Act 1992. 

The descriptions of the businesses and services are based on the relevant constitutional heads of power and provide flexibility for the Framework to be expanded to a wide range of sectors over time. They are not intended to provide a roadmap of the exact sectors the Government proposes to designate. The Government's intention is to initially designate telecommunications services, banking services and certain digital platform services. 

 Before designating a sector to be subject to the Framework, the Minister must consider all the following matters: 

 • Scam activity in the sector. For example, the Minister may identify that certain businesses or services experience high levels of scam activity.   

• The effectiveness of existing industry initiatives to address scams in the sector. For example, there may be existing initiatives in a sector that seek to protect against scams but do not appropriately address scam activity in that sector.   

• The interests of persons who would be Framework consumers of regulated services for the sector if the Minister were to make the designation. For example, designation may be appropriate if the Minister considers that consumers would be better protected against scams arising out of activity in a sector if it is subject to the Framework, rather than relying on existing frameworks.   

• The likely consequences (including benefits and risks) to the public and to the businesses or services making up the sector if the Minister were to make the designation.   

• Any other matters the Minister considers relevant to the decision to designate a sector to be subject to the SPF. For example, this could include the compliance and regulatory costs of designating sectors, the privacy or confidentiality of consumers' information, the regulatory impact of designation, the outcomes of consultation with impacted entities and consumers, and scam activity in the relevant sector in another jurisdiction. 

 Before designating a sector, the Minister must also consult relevant consumer groups and the businesses or services making up the sector, or such associations or other bodies representing them as the Minister thinks appropriate. Given the nature and scope of the requirements under the Framework, this is 'appropriate to ensure consumers and affected entities are given notice of the Government's intention to designate the relevant sector. It will also provide these stakeholders with an opportunity to give feedback on the details of the designation instrument, including on any application provisions or transition period before the SPF comes into effect for the sector'.

What is a 'Scam'? The legislation seeks to provide certainty on the scope of harms intended to be captured by the Framework, with a scam being a direct or indirect attempt (whether or not successful) to engage a Framework consumer of a regulated service where it would be reasonable to conclude that the attempt: 

 • involves deception; and 

• would, if successful, cause loss or harm including the obtaining of SPF personal information of, or a benefit (such as a financial benefit) from, the SPF consumer or the SPF consumer's associates. 

 The elements of the definition of 'scam' are objective in nature and do not require the scammer's state of mind to be established. This definition is deliberately broad to capture the wide range of activities scammers engage in and their ability to adapt and to adopt evolving behaviours over time. The Framework rules can also provide an appropriate safeguard to exclude conduct that is not intended to be captured under the Framework. 

 The definition of scam captures both successful scams which have caused loss or harm to a Framework consumer, and scam attempts which have not yet resulted in loss or harm to a Framework consumer. This reflects the obligations in the principles, which require regulated entities to take action against scams, regardless of whether the scam has resulted in loss or harm to a Framework consumer or an associate of the consumer. The use of 'attempt' in the definition of scam has its ordinary meaning, which is intended to cover efforts made to engage a Framework consumer. There may be an attempt to engage a Framework consumer even if the attempt is indirect, such as where it is directed at a cohort which includes the consumer or is directed at the public more generally. The attempt to engage an SPF consumer may be a single act or a course of conduct. 

The legislation introduces the concept of an 'SPF consumer'. The obligations imposed on regulated entities are often framed in relation to a Framework consumer. This is intended to clearly set out the scope of obligations under the Framework and who they are designed to protect. A Framework consumer of a regulated service is:

• a natural person, or a small business operator, who is or may be provided or purportedly provided the service in Australia; or

• a natural person who is ordinarily resident in Australia and is or may be provided or purportedly provided the service outside of Australia by a regulated entity that is either an Australian resident or is providing or purportedly providing the service through a permanent establishment in Australia.

The meaning of 'Australian resident' and 'permanent establishment' with respect to the regulated entity in this context leverages the established definitions in the Income Tax Assessment Act 1997 (ITAA 1997). 

A Framework consumer is intended to cover any natural person or small business operator who is in Australia when they are provided the regulated service, regardless of where that service is based (for example, the regulated service may be based overseas). This includes natural persons who are only temporarily in Australia. The definition also intends to cover any natural person who is ordinarily resident in Australia but is overseas when they are provided a regulated service that is based in Australia. A Framework consumer could be: 

 • an Australian resident in Australia using either an Australian-based or overseas-based messaging service that is offered in Australia; 

• a person ordinarily resident in Australia who is overseas but using an Australian-based banking service; or 

• a tourist visiting Australia using an Australian-based or overseas-based telecommunication service that is offered in Australia. 

 It is not intended that a foreign entity will be regulated with respect to consumers in foreign markets. For example, where an Australian consumer is overseas and is impacted by a scam on a social media service offered by an entity based overseas, this is not intended to be within the scope of the Framework. 

Small businesses are not excluded from being Framework consumers based on their corporate structure. The small business may be in the form of a sole trader, company, unincorporated association, partnership or trust. The test for whether a small business is a small business operator for the purposes of the Framework differs slightly depending on whether the small business is a body corporate or not.  

If a small business is a body corporate, it is a small business operator if it meets all of the following conditions:

• the sum of the business' employees and the employees of any body corporate related to the business is less than 100;

• the annual turnover of the business during the last financial year is less than $10 million; and

• the business has a principal place of business in Australia.

If a small business is not a body corporate, it is a small business operator if it meets all of the following conditions:

• the business has fewer than 100 employees;

• the annual turnover of the business, worked out as if the person were a body corporate, during the last financial year is less than $10 million; and

• the business has a principal place of business in Australia.
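The two tests reduce to the same three cumulative conditions, with the employee head count aggregated across related bodies corporate only where the business is a body corporate. A minimal sketch of that logic in Python (the 100-employee and $10 million thresholds come from the Bill's explanatory material as summarised above; the record layout and names are invented for illustration):

```python
# Illustrative sketch only: the thresholds come from the Bill as described
# above; the data structure and function names are invented.
from dataclasses import dataclass

@dataclass
class Business:
    employees: int                              # the business's own head count
    related_body_corporate_employees: int = 0   # counted only for bodies corporate
    last_fy_turnover_aud: float = 0.0           # annual turnover, last financial year
    principal_place_in_australia: bool = False
    is_body_corporate: bool = False

def is_small_business_operator(b: Business) -> bool:
    """Apply the three cumulative 'small business operator' conditions."""
    headcount = b.employees + (
        b.related_body_corporate_employees if b.is_body_corporate else 0
    )
    return (
        headcount < 100
        and b.last_fy_turnover_aud < 10_000_000
        and b.principal_place_in_australia
    )
```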

17 December 2024

Meta Settlement

Amid excitement about today's announcement of a settlement between Meta Platforms, Inc. and the OAIC, privacy and regulatory analysts might wonder whether Meta got off lightly: a trivial amount and the ending of litigation in the Federal Court. 

The Enforceable Undertaking reads

1. Background 

1.1. This enforceable undertaking is given by Meta Platforms, Inc. (Meta) to the Australian Information Commissioner (Commissioner) under section 114 of the Regulatory Powers (Standard Provisions) Act 2014 (Regulatory Powers Act) in conjunction with the discontinuance of Federal Court of Australia Proceeding No NSD 246 of 2020 (the Civil Penalty Proceedings) against all Respondents, on a without prejudice basis and without any admission of liability. The Civil Penalty Proceedings followed investigations by the OAIC concerning the Cambridge Analytica Incident, the facts of which are described below together with a background to the Civil Penalty Proceedings. 

1.2. Meta offers this enforceable undertaking in its capacity as the provider of the Facebook service to users in Australia from 14 July 2018 onwards. Prior to 14 July 2018, and during the period in which the Cambridge Analytica Incident described below occurred, Meta Platforms Ireland Limited provided the Facebook service to users in Australia. 

The Cambridge Analytica Incident 

1.3. In April 2010, Meta launched the Graph Application Programming Interface (Graph API). The Graph API allowed third party apps to access, with permission from users who installed the third party app using the Facebook Login tool, certain information, e.g., their name, birthdate, etc., from installers of the app and their friends (if both users’ privacy settings allowed it). Under the first version of Graph API (Graph API Version 1), which was in place from 21 April 2010 to 30 April 2015 for pre-existing apps, third party apps could request access to certain information (1) from the installing user’s account; and (2) that the installing user’s Facebook friends had chosen to share with the installing user. The Graph API would provide the information sought on an automated basis, so long as the installing user authorised the request, the user and their friends had not opted out of the Facebook platform (which would allow the user to opt out of providing access to information to third party apps), subject to the privacy and application settings of the user and their friends. 
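[By way of illustration: the permission flow described in paragraph 1.3 amounted to ordinary authorised HTTP requests against the Graph API. The Python sketch below approximates that v1-era request pattern; the endpoint paths, field names and placeholder token are simplified assumptions for illustration, not a record of what any party's systems actually did.]

```python
# Illustrative approximation of the Graph API v1-era request pattern.
# Endpoint paths and field names are simplified; ACCESS_TOKEN stands in for
# a token an app obtained when the installing user authorised it via
# Facebook Login.
import requests

BASE = "https://graph.facebook.com"
ACCESS_TOKEN = "user-access-token-from-facebook-login"  # placeholder

# 1. Fields the installing user authorised the app to read about themselves.
me = requests.get(
    f"{BASE}/me",
    params={"fields": "id,name,birthday", "access_token": ACCESS_TOKEN},
).json()

# 2. The installing user's friend list.
friends = requests.get(
    f"{BASE}/me/friends",
    params={"access_token": ACCESS_TOKEN},
).json().get("data", [])

# 3. Under Version 1, the app could then request certain fields that each
#    friend had shared with the installing user, subject to both users'
#    privacy and application settings.
for friend in friends:
    friend_info = requests.get(
        f"{BASE}/{friend['id']}",
        params={"fields": "id,name", "access_token": ACCESS_TOKEN},
    ).json()
```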

1.4. In November 2013, Dr Aleksandr Kogan, a professor at Cambridge University, launched a third party app relevantly known as “thisisyourdigitallife” (the Life App) using Graph API Version 1. Before doing so, Dr Kogan agreed to Meta’s terms of service and its terms for developers of third party apps using the Facebook platform and the Graph API. The Life App, which presented itself to users as a quiz app, requested via a dialog box at the time of installation, installing users’ permission to access certain categories of their information as well as certain categories of information that their Facebook friends shared with them. 

1.5. In December 2015, upon learning from media reports that Dr Kogan and his company, Global Science Research Limited (GSR), may have been transferring user information to Cambridge Analytica (UK) Ltd, a British data analytics company, and its parent company, Strategic Communication Laboratories (together, SCL) (in contravention of contractual obligations owed to Meta), Meta launched an investigation and terminated the Life App’s use of the Graph API and access to Facebook Login. 

1.6. Based on this investigation, Meta concluded that Dr Kogan and GSR had violated its terms in several respects. Meta subsequently obtained certifications that Dr Kogan, GSR, and other third parties (including SCL) with whom Dr Kogan had shared user information had deleted the information. The information that was transferred to SCL related primarily to users in the United States. Neither Meta nor Meta Platforms Ireland Limited is aware of any evidence that Dr Kogan provided SCL with information on Facebook users from Australia. 

The OAIC’s Investigation and the Civil Penalty Proceedings 

1.7. On 5 April 2018, the Commissioner initiated an investigation under section 40(2) of the Privacy Act 1988 (Cth) (Privacy Act) in relation to reports that Australian users’ information may have been improperly shared with Cambridge Analytica (UK) Ltd via the Life App. During the investigation, which extended to Meta, Meta Platforms Ireland Limited and Facebook Australia Pty Ltd, the Commissioner raised concerns that Meta may have interfered with the privacy of Australian individuals in contravention of Australian Privacy Principles (APPs) 1.2, 5, 6, 10 and 11 of the Privacy Act (Investigation). 

1.8. On 9 March 2020, the Commissioner commenced the Civil Penalty Proceedings and concluded the above investigation. In the Civil Penalty Proceedings, as further particularised in the Amended Statement of Claim dated 2 June 2023, the Commissioner alleged that Meta’s systems and practices raised concerns about the protection of personal information of Australian Facebook users in relation to the Cambridge Analytica incident, and that, based on its Investigation, Meta and Meta Platforms Ireland Limited may have contravened section 13G of the Privacy Act through serious or repeated breaches of APPs 6.1 and 11.1. The Commissioner alleged that, throughout the time the Life App was available to Facebook users, approximately: 1.8.1. 53 Facebook users located in Australia installed the Life App; and 1.8.2. 311,074 Facebook users located in Australia could have had their personal information requested by the Life App as friends of installing Facebook users. 

2. Meta’s Response to the Cambridge Analytica Incident 

2.1. Meta acknowledges: 2.1.1. that under the Privacy Act, Meta must not do an act, or engage in a practice, that breaches an APP; 2.1.2. the Commissioner’s concerns identified in paragraphs 1.7 and 1.8. 

2.2. Meta represents, and the Commissioner acknowledges, that:

2.2.1. Meta no longer permits third party app developers to access from Meta an installing user’s friend’s information, unless that friend has also installed the app and authorised it to have access to that information;

2.2.2. since the period relevant to the Civil Penalty Proceedings, being 12 March 2014 to 1 May 2015 (Relevant Period), Meta has dedicated significant and increased resources to monitoring third party apps and enforcing Meta’s terms and policies;

2.2.3. since the Relevant Period, Meta has substantially reduced the number of information fields that third party app developers (via Facebook Login) may request an installing user’s permission to access; examples of information fields that have been removed include: (i) the installing user’s friends’ information, excluding the circumstances specified in paragraph 2.2.1; and (ii) the installing user’s religion, political views and relationship details;

2.2.4. since the Relevant Period, Meta has continued to implement granular data permissions processes to allow a user who installs a third party app to decide which categories of certain information they will share with the third party app; and

2.2.5. Meta monitors the compliance of third party app developers of consumer apps with Meta’s Platform Terms through measures including, but not limited to, ongoing manual reviews and automated scans, and regular assessments, audits, or other technical and operational testing at least once every 12 months. 

3. Meta’s Enforceable Undertaking to the Commissioner 

3.1. Meta offers this enforceable undertaking to the Commissioner under section 114 of the Regulatory Powers Act, including to address the concerns in paragraphs 1.7 and 1.8. 

3.2. This undertaking comes into effect when: 3.2.1. it is executed by Meta; and 3.2.2. this undertaking, so executed, is accepted by the Commissioner (the Commencement Date). 

3.3. This undertaking ceases to have effect upon the completion of the Payment Program (as defined at paragraph 4.1 below). 

4. Undertaking to Establish Payment Program 

4.1. Meta undertakes to implement a payment program open to Eligible Australian Users in recognition of the Commissioner’s concern that those users may have suffered loss or damage as a result of interferences with their privacy arising from the conduct the subject of the Commissioner’s concerns as identified in paragraphs 1.7 and 1.8 above in accordance with Parts 5 and 6 of this enforceable undertaking and fulfill each of its obligations set out in Parts 4 to 7 of this enforceable undertaking (Payment Program). 

4.2. Meta undertakes to:

4.2.1. engage an independent third party administrator (the Administrator);

4.2.2. direct the Administrator to administer the Payment Program in accordance with:

4.2.2.1. Parts 5 and 6 of this enforceable undertaking; and

4.2.2.2. any instructions for the Payment Program given to the Administrator by Meta (Scheme Instructions); and

4.2.3. complete the Payment Program within 2 years from the Commencement Date or such longer period as agreed between the Commissioner and Meta. 

5. Eligible Australian Users 

5.1. A person is an “Eligible Australian User” if the person:

5.1.1. held a Facebook Account at any time during the period from 2 November 2013 to 17 December 2015 (Eligibility Period);

5.1.2. was located in Australia for 30 days or more during the Eligibility Period; and

5.1.3. during the Eligibility Period, either:

5.1.3.1. installed the Life App using Facebook Login; or

5.1.3.2. did not install the Life App but was Facebook friends with another Facebook user who had installed the Life App using Facebook Login. 
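[Read as a predicate, clause 5.1 combines an account-holding test and a 30-day location test with an either/or nexus to the Life App. A compact illustrative sketch in Python; the Eligibility Period dates come from the undertaking, while the input names are invented.]

```python
# Illustrative predicate for the 'Eligible Australian User' test in clause 5.1.
# The Eligibility Period dates come from the undertaking; the inputs are
# invented names for facts a claims administrator would have to verify.
from datetime import date

# The window against which all of the inputs below are assessed.
ELIGIBILITY_PERIOD = (date(2013, 11, 2), date(2015, 12, 17))

def is_eligible_australian_user(
    held_account_in_period: bool,       # 5.1.1
    days_in_australia_in_period: int,   # 5.1.2
    installed_life_app: bool,           # 5.1.3.1
    friend_installed_life_app: bool,    # 5.1.3.2
) -> bool:
    """Both 5.1.1 and 5.1.2 must hold, plus either limb of 5.1.3."""
    return (
        held_account_in_period
        and days_in_australia_in_period >= 30
        and (installed_life_app or friend_installed_life_app)
    )
```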

5.2. Subject to paragraphs 5.3 to 5.5, an Eligible Australian User can register with the Administrator as a “Claimant” under the Payment Program if they submit to the Administrator, within the registration period prescribed by the Administrator (Registration Period), a valid Registration Form and evidence in such form as prescribed, verifying that the person:

5.2.1. is an Eligible Australian User under paragraph 5.1; and

5.2.2. holds a genuine belief that as a direct consequence of the conduct the subject of the Commissioner’s concerns identified in paragraphs 1.7 and 1.8, they have suffered loss or damage, being either:

5.2.2.1. specific economic and/or non-economic loss and/or damage (beyond a generalised concern or embarrassment) (Class 1); or

5.2.2.2. a generalised concern or embarrassment (Class 2). 

5.3. The Registration Form will be prepared by the Administrator in consultation with Meta and may set the standard of verification and evidence that a Claimant must provide for each eligibility criterion by the end of the Registration Period, including by way of statutory declaration or identity verification as considered appropriate. 5.3.1. For paragraphs 5.1.3 and 5.2.2.2, Meta must direct the Administrator to not require more than a valid statutory declaration. 

5.4. Notwithstanding paragraphs 5.2 and 5.3, the Administrator may, in its absolute discretion, determine that a person will not be:

5.4.1. an Eligible Australian User where the Administrator is unable to verify that the person meets the requirements of Part 5 of this enforceable undertaking based on the information available to the Administrator; or

5.4.2. a Claimant where the Administrator determines that:

5.4.2.1. the person provided the Administrator with false information, or the person’s registration is otherwise fraudulent;

5.4.2.2. the person has previously registered as a Claimant;

5.4.2.3. the person registered to receive payment from Meta, or any of its affiliated or related entities, in a proceeding, investigation or other legal action in any jurisdiction outside of Australia that relates to, or arose out of, the factual background detailed in paragraphs 1.3 to 1.6 of this enforceable undertaking, such as the US settlement of In re: Facebook, Inc. Consumer Privacy User Profile Litigation, Case No. 3:18-md-02843-VC (N.D. Cal.); or

5.4.2.4. the person is not otherwise eligible in accordance with the Scheme Instructions. 

5.5. For the avoidance of any doubt, a person: 5.5.1. is not a Claimant if the person has not registered in accordance with paragraphs 5.2 and 5.3 during the Registration Period; and 5.5.2. cannot register as a Claimant in both Class 1 and Class 2. 

6. Payment Program 

6.1. Meta undertakes to, within 60 days of the Commissioner filing a Notice of Discontinuance in the Civil Penalty Proceedings, pay an amount of $50 million (the Contribution Amount) to the Administrator for the Administrator to use to make payments to Claimants (Payments) in accordance with paragraphs 6.2 to 6.9. 

6.2. Following the payment of the Contribution Amount by Meta in accordance with paragraph 6.1, Meta will:

6.2.1. notify the Commissioner that the Contribution Amount has been paid to the Administrator;

6.2.2. direct the Administrator to make information available on a website established by the Administrator regarding the Payment Program, including how Eligible Australian Users can register with the Administrator as a Claimant;

6.2.3. use reasonable best efforts to:

6.2.3.1. identify, based on Meta’s available records, persons that may be Eligible Australian Users; and

6.2.3.2. facilitate electronic notice of the Payment Program to those persons; and

6.2.4. direct the Administrator to take reasonable steps to publicise the Payment Program within Australia. 

6.3. The Payment that a Claimant receives will depend on whether the Administrator determines that the Claimant is a Class 1 or Class 2 Claimant. 

6.4. In performing its obligations under Parts 5 and 6, the Administrator will apply any Scheme Instructions, including any cap to apply to Payments made to Claimants and the principle that all Class 2 Claimants be paid the same amount. 

6.5. Subject to the Scheme Instructions, following the end of the Registration Period, the Administrator will:

6.5.1. evaluate and determine, using evidence available to the Administrator at that time, in the Administrator’s absolute discretion, whether:

6.5.1.1. a person is an Eligible Australian User (in accordance with Part 5); and

6.5.1.2. if a person registers as a Claimant in Class 1, the person has provided sufficient supporting evidence to substantiate their claim that they have suffered loss or damage in Class 1;

6.5.2. determine the number of Claimants in each of Class 1 and Class 2;

6.5.3. commence the process for determining the Payment that each Class 1 and Class 2 Claimant is entitled to receive, in accordance with this Part 6; and

6.5.4. notify Meta that the process referred to in paragraph 6.5.3 above has begun, at which point Meta will within 24 hours notify the Commissioner thereof. 

6.6. The Scheme Instructions will provide for the Administrator to include a timely internal review avenue for: 6.6.1. any decision by the Administrator to reject a Claimant’s Class 1 registration and allocate the Claimant to Class 2; and 6.6.2. assessment of any Payment amount that is to be made to a Claimant in Class 1. 

6.7. Following the conclusion of the process in 6.5, in accordance with paragraphs 6.3 and 6.4, the Administrator will: 6.7.1. finalise its determination including any internal review of any Payment that is to be made to a Claimant in either Class 1 or Class 2; 6.7.2. once all determinations are completed in accordance with paragraph 6.7.1, notify Meta of: 6.7.2.1. the total number of Claimants; and 6.7.2.2. the aggregated amount to be distributed to all Claimants; and 6.7.3. make a timely Payment to each such Claimant. 

6.8. Following receipt of the notification set out at paragraph 6.7.2, Meta will within 24 hours notify the Commissioner thereof. 

6.9. If the total aggregate sum of Payments made to Claimants under paragraph 6.7 is less than the Contribution Amount, Meta will direct the Administrator to pay the residual amount to the Australian Government’s Consolidated Revenue Fund. 

6.10. If, when performing its obligations under Parts 5 and 6 of this enforceable undertaking, the Administrator informs Meta that it will not be able to comply with any deadline specified in this undertaking, Meta will: 6.10.1. promptly inform the Commissioner, and the OAIC, of the extent and reasons for the delay; 6.10.2. in consultation with the Administrator, determine a date by which the Administrator will reasonably be able to complete the actions specified; 6.10.3. propose the modified date(s) to the Commissioner and seek to agree any necessary extension; and 6.10.4. cause the Administrator to notify Claimants of the delay and the amended date(s) agreed with the Commissioner (if applicable). 

7. Compliance 

7.1. Subject to any confidentiality obligations owed by Meta, the OAIC may request in writing from time to time and Meta will provide to it, documents and information that are reasonably necessary for the purpose of assessing Meta’s compliance with Parts 4 to 6 of this enforceable undertaking. 

7.2. Meta will use its best endeavours to provide documents and information in response to any request under paragraph 7.1 within 14 days of the request. 

8. Other matters 

8.1. Meta acknowledges that the Commissioner: 8.1.1. will publish this enforceable undertaking as well as a summary of the undertaking, on the OAIC website; 8.1.2. may issue a statement on acceptance of this enforceable undertaking referring to its terms and to the circumstances which led to the Commissioner’s acceptance of the undertaking; and 8.1.3. may from time to time publicly refer to this enforceable undertaking, including any breach of this enforceable undertaking by Meta. 

8.2. Meta acknowledges that: 8.2.1. The Commissioner’s acceptance of this enforceable undertaking does not preclude the Commissioner’s power to investigate, power not to investigate further, or the exercise of any of the Commissioner’s functions under the Privacy Act in relation to: (i) the representative investigation opened by the Commissioner under sub-section 40(1) of the Privacy Act on 21 October 2019 (referred to by the Commissioner using the reference number CP18/01262); or (ii) any contravention that concerns conduct that is outside the scope of the Civil Penalty Proceedings or Investigation. 8.2.2. If the Commissioner considers that Meta has breached this enforceable undertaking, the Commissioner may apply to the Federal Court or Federal Circuit Court to enforce the undertaking under s 115 of the Regulatory Powers Act. 

8.3. The Commissioner’s acceptance of this enforceable undertaking is not a finding that Meta has contravened the Privacy Act or the APPs. 

8.4. Meta gives this enforceable undertaking on a without prejudice basis, and without any admission of liability as to the matters raised in the Investigation or Civil Penalty Proceedings. Any representations made or acknowledgments given by Meta in this enforceable undertaking, whether express or implied, are made without prejudice or admission of liability. In giving this enforceable undertaking, neither Meta nor any of its affiliated or associated entities are precluded from taking any position or relying on any facts or factual statements in any legal or regulatory proceedings in Australia or in any other jurisdiction in relation to any matter that was within the scope of the Commissioner’s investigations referred to in paragraphs 1.7 and 8.2.1, the Civil Penalty Proceedings or which otherwise relate to the Cambridge Analytica Incident described at paragraphs 1.3 to 1.6. 

9. Confidentiality 

9.1. The Commissioner acknowledges that information provided by Meta, or the Administrator, to the Commissioner and OAIC in accordance with this enforceable undertaking may contain sensitive commercial information (Commercial-in-confidence Information). 

9.2. The Commissioner acknowledges that any such Commercial-in-confidence Information is provided by Meta, or the Administrator, in confidence. 

9.3. The Commissioner: 9.3.1. will only publish or otherwise disclose any Commercial-in-confidence Information with Meta’s written agreement, unless otherwise required by law; and 9.3.2. will only use any Commercial-in-confidence Information for the purpose of exercising the Commissioner’s powers, or performing functions or duties in the Privacy Act.

16 October 2024

Regulating Platforms

'“Digital Colonization” of Highly Regulated Industries: An Analysis of Big Tech Platforms’ Entry into Health Care and Education' by Hakan Ozalp, Pinar Ozcan, Dize Dinckol, Markos Zachariadis and Annabelle Gawer in (2024) 64(4) California Management Review comments 

Digital platforms have disrupted many sectors but have not yet visibly transformed highly regulated industries. This study of Big Tech entry into healthcare and education explores how platforms have begun to enter highly regulated industries systematically and effectively. It presents a four-stage process model of platform entry, which we term “digital colonization.” This involves provision of data infrastructure services to regulated incumbents; data capture in the highly regulated industry; provision of data-driven insights; and design and commercialization of new products and services. The article clarifies platforms’ sources of competitive advantage in highly regulated industries and concludes with managerial and policy recommendations. 

Over the past few years, digital platforms have disrupted competition and innovation across many sectors, including retail, entertainment, hospitality, transportation, gaming, and music. Platform firms, often referred to as “Big Tech” players, now dominate the list of biggest firms by market capitalization. Recently, the prevalence of digital platforms has further increased in various industries as the COVID-19 pandemic amplified the role of digital services in people’s lives, reshaping how customers shop, work, and entertain themselves while sending the revenues of digital platforms skyrocketing. These Big Tech firms are under scrutiny regarding how much value they return to end customers as they acquire, analyze, and take advantage of their data to boost profits and influence markets. 

While platform firms have now become prevalent in many industries, highly regulated industries such as healthcare and education had lagged behind until recently, but there are clear signs that this has started to change. Considering these changes, we explore the entry paths of Big Tech platforms (more specifically Google (Alphabet), Amazon, Facebook (Meta), Apple, and Microsoft, also known as GAFAM) into highly regulated industries by looking at the prominent examples of healthcare and education in the context of the United States and United Kingdom, where they have been most active in these industries so far. 

The Platform Business Model and the Role of Data 

A platform creates value thanks to its advantages in connecting different users through enhanced matchmaking and facilitating transactions among them (e.g., by connecting customers and complementors). Platforms can achieve rapid growth through highly scalable technological intermediation and reduction of various costs for transacting, matching, and innovating. Platform growth is further fueled by network effects, and this mechanism underpins how the value a user receives from a platform increases with each new user on the same side of the platform (i.e., direct network effects) and the other side of the platform (i.e., indirect network effects). More recently, there has also been the growing importance of data network effects, which refer to the increasing value users obtain from the platform in parallel with the amount of data the platform accumulates, such as better recommendations on Netflix. Thanks to their digital nature, platforms can connect various platform sides via digital interfaces and, in the process, accumulate/leverage external resources (i.e., data) to develop relevant capabilities (i.e., algorithm-driven data analysis) to improve further and expand their offering. 

Due to their digital properties, use of data, and platform business models, certain technology companies have rapidly grown, becoming some of the largest and most influential firms globally (see Figure 1 for Big Tech firms’ market caps). Big Tech firms started out as platforms with a single and focused intermediation activity (e.g., search engine). From there, they grew significantly in scope and entered new industries. Initially, they typically expanded into the space of their own complementors within their platform ecosystems (e.g., AmazonBasics competing with its own third-party sellers). Following this, they have entered related or adjacent sectors (e.g., Facebook acquisition of Instagram) or what may at first seem to be unrelated markets (e.g., Google acquisition of Waymo). ... 

Data sit at the heart of every digital platform. As such, the main logic underpinning the various market segment entries by platforms seems to aim to maximize data collection; enhance data network effects that they have already built across industries to create more value; apply their data analysis capabilities; and take precedence over existing firms while improving products/services for consumers. 

This data-centric approach to platform growth and industry entry, however, regularly raises questions on data privacy, fair competition, and the balance of value creation and value capture in industries where platforms enter. These issues become even more critical in highly regulated industries where value creation becomes extremely important (e.g., patient lives saved by new technologies), and concerns around data privacy and fair competition are even more salient (e.g., medical or learning records already used by Google and others). 

Platforms in Highly Regulated Industries 

Despite the penetration and dominance of digital platforms in several industries, highly regulated sectors such as education, energy, finance, and healthcare appeared to have been left behind due to high regulatory control creating barriers to entry for platforms. Highly regulated industries typically have high entry barriers and high operational and compliance costs, as visible from the various regulations for the healthcare and education industries in Table 1. Compared with other industries where regulatory interventions are typically “lighter” (e.g., taxi and transportation services), industries such as healthcare and education are characterized by the heavy involvement of state and government actors. This is mainly because of the crucial strategic role these industries play in ensuring social welfare and boosting the country’s economic growth and development, but also due to the associated social ramifications in terms of access, fairness, equality, privacy, and data sensitivity, as these factors directly tie to human and constitutional rights (e.g., “right to education”). Such state-controlled apparatuses, in addition to imposing a large set of rules and procedures upon private firms, often leave limited room for private actors to operate in, which presents a distinct challenge to market entrants. ... 

Then, there is the thorny issue of data. Prospective digital platform entrants require data to develop new products or services, which calls for different strategies in highly regulated industries due to the need to capture and process sensitive personal data. If leaked or misused, such data can cause harm to individuals—for example, biometric data, genetic data, health-related data, race or ethnicity data (typically held by healthcare providers), religious or philosophical beliefs (typically expressed in the context of education and recorded in essays, online educational platform discussions, and so on), and student education records. This tends to raise the level of regulation further, thus exacerbating inhibition of new entry. Due to such considerations, digital platforms have, until recently, mostly been absent from highly regulated industries. However, this is changing. Despite the challenges noted above, we observe that Big Tech firms are expanding their platforms into some of these highly regulated industries. Recent examples include Amazon acquiring U.S. online pharmacy PillPack, Alphabet-Google partnering with the United Kingdom’s National Health Service (NHS) for data sharing and developing AI-powered healthcare services, and U.S. universities partnering with Amazon to install Alexa in dormitories and elsewhere. In 2020, the COVID-19 pandemic accelerated this trend further by causing the emergence of new initiatives. Examples include Google’s subsidiary Verily offering COVID-19 testing and tracing, Google and Apple cooperating on mobile operating systems for COVID-19 contact tracing, Google Education expanding to support remote education, and Amazon offering COVID-19-specific Amazon Web Services (AWS) solutions for hospitals and research institutes. 

Building on these trends, this article explores how Big Tech platforms enter and compete in highly regulated industries. Focusing on the healthcare and education industries, we identify an entry pattern for these digital platforms, in which they typically begin as suppliers of data-infrastructure services to incumbents in the first phase. As incumbent service providers such as hospitals, schools, and healthcare conglomerates typically lack capabilities in data management, they contract out these activities to Big Tech firms as technology service providers, aiming to reduce costs and improve services. In the second phase, Big Tech firms leverage their existing relationships as well as their data analysis capabilities (which they use to produce data-driven insights) to get access to the data already held by incumbent service providers. This indirect data capture (e.g., access to already collected data in a hospital), which they combine with their own direct data capture activities (e.g., through proprietary hardware such as the Apple Watch or a Google tablet), then becomes an essential component of Big Tech firms’ entry pathway into the targeted highly regulated industry. As Big Tech firms combine the data they captured directly and indirectly, they can provide superior data-driven insights, which can add significant value to incumbent service providers (e.g., through saved lives, better learning outcomes, and lower costs). We find that a final component of entry for Big Tech firms is the design and commercialization of new products and services for the highly regulated industry target, where they may end up competing with their former clients over time. 

Overall, our research suggests that Big Tech entry in highly regulated industries occurs via a process that we name “digital colonization,” which we specify as composed of four stages: provision of data infrastructure services to incumbents; direct and indirect data capture in industry; provision of data-driven insights; and design and commercialization of new products and services. While Big Tech firms rarely end up directly offering the “primary service” (e.g., providing school education or becoming primary healthcare providers) in highly regulated industries, they change the power dynamics in these industries over time by commoditizing incumbent service providers, turning them into mere complementors while Big Tech firms control the data and become unique providers of critical, data-driven value.

20 September 2024

Social Media Surveillance Practice

The US Federal Trade Commission 'A Look Behind the Screens Examining the Data Practices of Social Media and Video Streaming Services' report comments 

Social Media and Video Streaming Services (“SMVSSs”) have become a ubiquitous part of our daily lives and culture. Various types of SMVSSs provide places where people can connect, create, share, or stream everything from media content like videos, music, photos, and games; comment on or react to content; connect with and send messages to other users; join, participate in, or subscribe to groups, message boards, or content channels; read or watch news; and consume advertisements for consumer products. Unsurprisingly, this ease of accessing information and connecting with others has transformed our society in many ways. 

These types of services let you connect with the world from the palm of your hand. At the same time, many of these services have been at the forefront of building the infrastructure for mass commercial surveillance. Some firms have unique access to information about our likes and dislikes, our relationships, our religious faiths, our medical conditions, and every other facet of our behavior, at all times and across multiple devices. This vast surveillance has come with serious costs to our privacy. It also has harmed our competitive landscape and affected the way we communicate and our well-being, especially the well-being of children and teens. Moreover, certain large SMVSSs may enjoy significant market power and therefore face fewer competitive constraints on their privacy practices and other dimensions of quality. 

In December 2020, the Federal Trade Commission (“Commission” or “FTC”) issued identical Orders to File Special Reports under Section 6(b) of the FTC Act to a cross-section of nine companies in the United States in order to gain a better understanding of how their SMVSSs affect American consumers. Appendix A to this report (hereinafter “Appendix A”) is a copy of the text of the Order that the Commission issued to these nine Companies. 

This report is a culmination of that effort. Based on the information provided in response to the Commission’s Orders, publicly available materials, and the Commission’s long experience with SMVSSs, this report highlights the practices of the Companies’ SMVSSs, which include social networking, messaging, or video streaming services, or photo, video, or other content sharing applications available as mobile applications or websites. The report contains five sections relating to the following topics: (1) data practices, such as collection, use, disclosure, minimization, retention, and deletion; (2) advertising and targeted advertising; (3) the use of automated decision-making technologies; (4) practices relating to children and teens; and (5) concerns relating to competition. 

1. Summary of Key Findings 

This report makes the following general findings, although each finding may not be applicable to every one of the Companies in every instance: 

• Many Companies collected and could indefinitely retain troves of data from and about users and non-users, and they did so in ways consumers might not expect. This included information about activities both on and off of the SMVSSs, and included things such as personal information, demographic information, interests, behaviors, and activities elsewhere on the Internet. The collection included information input by users themselves, information gathered passively or inferred, and information that some Companies purchased about users from data brokers and others, including data relating to things such as household income, location, and interests. Moreover, many Companies’ data practices posed risks to users’ and non-users’ data privacy, and their data collection, minimization, and retention practices were woefully inadequate. For instance, minimization policies were often vague or undocumented, and many Companies lacked written retention or deletion policies. Some of the Companies’ SMVSSs did not delete data in response to user requests—they merely de-identified it. Even those Companies that actually deleted data would delete only some of it, not all. 

• Many Companies relied on selling advertising services to other businesses based largely on using the personal information of their users. The technology powering this ecosystem operated behind the scenes, out of view of consumers, posing significant privacy risks. For instance, some Companies made available privacy-invasive tracking technologies such as pixels, which have the ability to transmit sensitive information about users’ actions to the SMVSSs that use them. [An illustrative sketch of the pixel mechanism appears after this list of findings.] Because the advertising ecosystem is complex and occurs beneath the surface, it is challenging for users to decipher how the information collected from and about them is used for ad targeting—in fact, many users may not be aware of this at all. Some Companies’ ad targeting practices based on sensitive categories also raise serious privacy concerns. 

• There was a widespread application of Algorithms, Data Analytics, or artificial intelligence (“AI”), to users’ and non-users’ personal information. These technologies powered the SMVSSs—everything from content recommendation to search, advertising, and inferring personal details about users. Users lacked any meaningful control over how personal information was used for AI-fueled systems. This was especially true for personal information that these systems infer, that was purchased from third parties, or that was derived from users’ and non-users’ activities off of the platform. This also held true for non-users who did not have an account and who may have never used the relevant service. Nor were users and non-users empowered to review the information used by these systems or their outcomes, to correct incorrect data or determinations, or to understand how decisions were made, raising the potential of further harms when systems may be unreliable or infer sensitive information about individuals. Overall, there was a lack of access, choice, control, transparency, explainability, and interpretability relating to the Companies’ use of automated systems. There also were differing, inconsistent, and inadequate approaches relating to monitoring and testing the use of automated systems. Other harms noted included Algorithms that may prioritize certain forms of harmful content, such as dangerous online challenges, and negative mental health consequences for children and teens. 

• The trend among the Companies was that they failed to adequately protect children and teens—this was especially true of teens, who are not covered by the Children’s Online Privacy Protection Rule (“COPPA Rule”). Many Companies said they protected children by complying with the COPPA Rule but did not go further. Moreover, in an apparent attempt to avoid liability under the COPPA Rule, most SMVSSs asserted that there are no child users on their platforms because children cannot create accounts. Yet we know that children are using SMVSSs. The SMVSSs should not ignore this reality. When it comes to teens, SMVSSs often treat them as if they were traditional adult users. Almost all of the Companies allowed teens on their SMVSSs, placed no restrictions on their accounts, and collected personal information from teens just as they did from adults. 
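[As flagged above: a tracking pixel is typically a tiny image or script embedded in a third-party page, and the browser's request to fetch it carries event data back to the platform in the URL itself. A minimal illustrative sketch in Python follows; the host, path and parameter names are invented, not any Company's actual pixel interface.]

```python
# Illustrative sketch of how a tracking pixel encodes a user's action in a URL.
# The host, path, and parameter names are invented for illustration.
from urllib.parse import urlencode

def build_pixel_url(event: str, user_id: str, page: str, value: float | None = None) -> str:
    params = {"ev": event, "uid": user_id, "dl": page}
    if value is not None:
        params["value"] = f"{value:.2f}"  # e.g. a purchase amount
    return "https://pixel.example-smvss.com/tr?" + urlencode(params)

# Embedded on an advertiser's page as a 1x1 <img src="...">, the browser's
# request for the image reports the user's action back to the platform:
print(build_pixel_url("Purchase", "anon-cookie-123", "/checkout/thanks", 49.99))
```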

The past can teach powerful lessons. By snapshotting the Companies’ practices at a recent moment in time (specifically, the Orders focused on the period of 2019–2020) and highlighting the implications and potential consequences that flowed from those practices, this report seeks to be a resource and key reference point for policymakers and the public. 

2. Summary of Competition Implications 

• Data abuses can fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers. In digital markets, acquiring and maintaining access to significant user data can be a path to achieving market dominance and building competitive moats that lock out rivals and create barriers to market entry. The competitive value of user data can incentivize firms to prioritize acquiring it, even at the expense of user privacy. Moreover, a company’s practices with respect to privacy, data collection, data use, and automated systems can comprise an important part of the quality of the company’s product offering. A lack of competition in the marketplace can mean that users lack real choice among services and must surrender to the data practices of a dominant company, and that companies do not have to compete over these dimensions—depriving consumers of additional choice and autonomy. In sum, limited competition can exacerbate the consumer harms described in this report. 

3. Summary of Staff Recommendations 

• Companies can and should do more to protect consumers’ privacy, and Congress should enact comprehensive federal privacy legislation that limits surveillance and grants consumers data rights. Baseline protections that Companies should implement include minimizing data collection to only that data which is necessary for their services and implementing concrete data retention and data deletion policies; limiting data sharing with affiliates, other company-branded entities, and third parties; and adopting clear, transparent, and consumer-friendly privacy policies. 

• Companies should implement more safeguards when it comes to advertising, especially surrounding the receipt or use of sensitive personal information. Baseline safeguards that Companies should implement include preventing the receipt, use, and onward disclosure of sensitive data that can be made available for use by advertisers for targeted ad campaigns. 

• Companies should put users in control of—and be transparent about—the data that powers automated decision-making systems, and should implement more robust safeguards that protect users. Changing this would require addressing the lack of access, choice, control, transparency, explainability, and interpretability relating to their use of automated systems; and implementing more stringent testing and monitoring standards. 

• Companies should implement policies that would ensure greater protection of children and teens. This would include, for instance, treating the COPPA Rule as representing the minimum requirements and providing additional safety measures for children as appropriate; recognizing that teen users are not adult users and, by default, affording them more protections as they continue to navigate the digital world; and providing parents/legal guardians a uniform, easy, and straightforward way to access and delete their child’s personal information. 

• Firms must compete on the merits to avoid running afoul of the antitrust laws. Given the serious consumer harms risked by lackluster competition, antitrust enforcers must carefully scrutinize potential anticompetitive acquisitions and conduct and must be vigilant to anticompetitive harms that may manifest in non-price terms like diminished privacy.

The FTC notes

The SMVSSs in our study demonstrate the many ways in which consumers of all ages may interact with or create content online, communicate with other users, obtain news and information, and foster social relationships. For example:

• Amazon.com, Inc. is the parent company of the Twitch SMVSS, wherein users can watch streamers play video games in real time. In 2022, Twitch reported an average of 31 million daily visitors to its service, most of whom were between 18 and 34 years old.  

• ByteDance Ltd. is the ultimate parent company of TikTok LLC, the entity that operates the TikTok SMVSS. TikTok enables users to watch and create short-form videos.  TikTok reported having 150 million monthly active users in the United States in 2023, up from 100 million monthly active users in 2020. 

• Discord Inc. operates the Discord SMVSS that provides voice, video, and text communication capabilities to users, by means of community chat rooms known as “servers.”  In 2023, Discord reported having 150 million monthly active users, with 19 million active community chat rooms per week. 

• Meta Platforms, Inc., formerly known as Facebook, Inc., operates multiple SMVSSs. In 2023, Meta reported having 3 billion users across its services. WhatsApp Inc. is part of the Meta Platforms, Inc. corporate family. WhatsApp Inc. received a separate Order, and is therefore treated as a separate Company for purposes of this report. 

o The Facebook SMVSS provides users with a communal space to connect to a network of other users by sharing, among other things, text posts, photos, and videos. In March 2023, Meta reported an average of more than 2 billion daily active users and almost 3 billion monthly active users. 

o The Messenger SMVSS is a messaging application that allows users to communicate via text, audio calls, and video calls. Users of Messenger must have a Facebook account to use Messenger’s services. 

o The Messenger Kids SMVSS is a children’s messaging application that allows users to communicate via text, audio calls, and video calls. Parents of Messenger Kids users create accounts for their children through a parent’s Facebook account. 

o The Instagram SMVSS, acquired by Meta Platforms, Inc. in 2012, allows users to share photos and videos with their networks. News reports estimated that as of 2021 there were 1.3 billion users on Instagram. 

o The WhatsApp SMVSS, acquired in 2014 by Meta Platforms, Inc., is a messaging platform. WhatsApp reportedly had more than 2 billion users in 2023. 

• Reddit, Inc. operates the Reddit SMVSS, which provides communities wherein users can discuss their specific interests. News outlets reported that, as of April 2023, approximately 57 million people visit the Reddit platform every day. 

• Snap Inc. operates the Snapchat SMVSS, which it describes in part as a “visual messaging application that enhances your relationships with friends, family, and the world.”  Snapchat also includes “Stories,” which provides users the ability to “express themselves in narrative form through photos and videos, shown in chronological order, to their friends.”  Snap Inc. reported having 375 million average daily active users in Q4 2022. 

• Twitter, Inc. was a publicly traded company until October 2022, at which time it became a privately held corporation called X Corp. Since that time, X Corp. has operated X, formerly known as the Twitter SMVSS, which provides users with the ability to share short posts.  Twitter, Inc. reported having 217 million average daily users in Q4 2021. 

• YouTube, LLC is wholly owned by Google LLC, with Alphabet Inc. as the ultimate parent. Google LLC operates YouTube’s two SMVSSs. 

o The YouTube SMVSS is a video sharing product. As of February 2021, YouTube, LLC reported that “over two billion logged in users [come] to YouTube every month . . . . ” 

o The YouTube Kids SMVSS, first introduced in 2015, is a children’s video product with family-friendly videos and parental controls. As of February 2021, YouTube, LLC reported that YouTube Kids had more than 35 million weekly viewers.

While the SMVSSs in this report are generally “zero price” (or have free versions available) for the end user – meaning they require no money from consumers to sign up, or to create an account, for the basic version of the product – firms monetize (or profit off of) these accounts through data and information collection. 

27 July 2024

Quackery

'Vaccine Misinformation for Profit: Conspiratorial Wellness Influencers and the Monetization of Alternative Health' by Rachel E Moran, Anna L Swan and Taylor Agajanian in (2024) 18 International Journal of Communication 1202–1224 comments 

Influencers in the alternative health and wellness space have leveraged the affordances of social media to make posting misleading content and misinformation a lucrative endeavor. This research project extends knowledge of antivaccine misinformation through an examination of the role of social media influencers and the parasocial relationships they build with audiences in the spread of vaccine-opposed messaging and how this information is leveraged for profit. Through digital ethnography and media immersion, we focus on three prominent antivaccine influencers—the Wellness Homesteader, Conspiratorial Fashionista, and Evangelical Mother—analyzing how they build community on Instagram, promote antivaccination messaging, and weaponize this information to direct their followers to buy products and services. 

Misinformation is an immensely profitable endeavor. Amplifiers of misinformation have found routes to monetize their digital content by using it to direct their online followers to purchase the products and services they endorse. Far-right news and opinion site Infowars, for instance, made $165 million between 2015 and 2018, selling health supplements and merchandise through the Infowars store (Vaillancourt, 2022) advertised during Alex Jones’ talk radio shows, often attached to misinformation narratives or in the context of discussing conspiracy theories (Locker, 2017). This project explores how misinformation is monetized, focusing specifically on how influencers within the antivaccination movement use social media to amplify misleading information about vaccinations and leverage this information for profit. 

Although vaccine misinformation far predates COVID-19, its scale and prominence have increased immensely because of the pandemic (Wardle & Singerman, 2021). Extant research has identified a range of vaccine-related misinformation, including spurious claims that the vaccine contains microchips (Virality Project, 2022) and broader attacks on the safety, efficacy, and necessity of COVID-19 vaccines (Brennen, Simon, Howard, & Nielsen, 2020). Further research has explored the dominant sources of vaccine misinformation, identifying the spread of vaccine opposition from antivaccine influencers (Center for Countering Digital Hate [CCDH], 2021)—in addition to a top-down amplification of misinformation from political elites (Enders, Uscinski, Klofstad, & Stoler, 2020). 

Alternative health and wellness influencers were a cause for concern during the COVID-19 pandemic because of their ties to misinformation and vaccine hesitancy (Maloy & De Vynck, 2021). Leveraging a lack of trust in Western institutionalized medicine, some wellness influencers have pushed hyperindividualistic frameworks that dispute the need for collective vaccine uptake in favor of natural wellness (Kale, 2021). Furthermore, the sociotechnical savvy of wellness influencers affords them significant reach for their content. A report from the CCDH (2020) noted that the top 12 antivaccine influencers gained 877,000 followers between December and June 2020 (p. 5). Beyond numerical reach, the parasocial relationships built via social media exacerbate the impact of vaccine misinformation. Moreover, influencers well-versed in the economic and technical infrastructures of social media are well positioned to financially benefit from the misinformation they share. 

This article opens by discussing research on the spread of vaccine-related misinformation on social media and within the health and wellness space, as well as the role of parasocial relationships in this spread. By highlighting the role of gender in both the saliency of health-related misinformation and the monetization of wellness content, we offer insight into the gendered dimension of misinformation spread. We then present our methods, drawing on a digital ethnography of three wellness influencers on Instagram. Ultimately, our analysis reveals how influencers take advantage of the platform’s sociotechnical infrastructure and attempt to profit from misinformation while normalizing antivaccine sentiment and conspiratorial rhetoric.

26 July 2024

Platforms

'Platform Administrative Law: A Research Agenda' by Moritz Schramm comments 

Scholarship of online platforms is at a crossroads. Everyone agrees that platforms must be reformed. Many agree that platforms should respect certain guarantees known primarily from public law, like transparency, accountability, and reason-giving. However, how to install public law-inspired structures like rights protection, review, accountability, deference, hierarchy and discretion, participation, etc. in hyper-capitalist organizations remains a mystery. This article proposes a new conceptual and, by extension, normative framework to analyze and improve platform reform: Platform Administrative Law (PAL). Thinking about platform power through the lens of PAL serves two functions. On the one hand, PAL describes the bureaucratic reality of digital domination by actors like Meta, X, Amazon, or Alibaba. PAL clears the view of the mélange of normative material, and its infrastructural consequences, governing the power relationship between platform and individual. It allows us to take stock of the distinctive norms, institutions, and infrastructural set-ups enabling and constraining platform power. In that sense, PAL originates, paradoxically, from private actors. On the other hand, PAL draws from 'classic' administrative law to offer normative guidance for incrementally infusing 'good administration' into platforms. Many challenges platforms face can be thought of as textbook examples of administrative law: maintaining efficiency while paying attention to individual cases, acting proportionately despite resource constraints, operating in fundamental rights-sensitive fields, implementing external accountability feedback, maintaining coherence in rule-enforcement, and so on. PAL thus describes the imperfect and fragmented administrative regimes of platforms and draws inspiration from 'classic' administrative law for platforms. Consequently, PAL helps to reestablish the supremacy of legitimate rules over technicity and profit in the context of platforms. 

'Power Plays in Global Internet Governance' (GigaNet: Global Internet Governance Academic Network, Annual Symposium 2015) by Madeline Carr comments 

The multi-stakeholder model of global Internet governance has emerged as the dominant approach to navigating the complex set of interests, agendas and implications of our increasing dependence on this technology. Protecting this model of global governance in this context has been referred to by the US and EU as ‘essential’ to the future of the Internet. Bringing together actors from the private sector, the public sector and also civil society, multi-stakeholder Internet governance is not only regarded by many as the best way to organise around this particular issue, it is also held up as a potential template for the management of other ‘post-state’ issues. However, as a consequence of its normative aspirations to representation and power sharing, the multi-stakeholder approach to global Internet governance has received little critical attention. This paper examines the issues of legitimacy and accountability with regard to the ‘rule-makers’ and ‘rule-takers’ in this model and finds that it can also function as a mechanism for the reinforcement of existing power dynamics.

14 July 2024

Social Media ToS overreach

'Social networking sites' licensing terms: A cause of worry for users?' by Phalguni Mahapatra and Anindya Sircar in (2024) The Journal of World Intellectual Property comments 

Terms of service (ToS) for social networking sites (SNS) like Instagram, Meta, X, and so on, are clickwrap agreements that establish a legal relationship between platform owners and users, yet they are probably the most overlooked legal agreements. Users often overlook the ToS while registering on these sites, and even those who attempt to read them (especially those with no legal background) find them difficult to understand because of the legal jargon. As a result, users end up signing away legal rights of which they are unaware. According to these sites' ToS, although ownership of user-generated content is bestowed upon the user, the user grants to these sites “a non-exclusive, royalty-free, transferrable, sub-licensable, worldwide license” which can be used “to host, use, distribute, modify, run, copy, publicly perform or display, translate and create derivative works of user's content.” These sites even bestow on themselves the right to modify the content, which poses challenges to right-holders' moral rights. The fact that these platforms can sublicense the user's work creates complexities when a user intends to grant an exclusive license of his work. The language of the terms is unclear on matters such as the manner in which the user's content may be exploited and what happens if the sublicensing is for a wrongful purpose. The problem is magnified because there is no explicit indication of the duration of the license or its territorial extent, which suggests that these sites can obtain a perpetual license over users' content. These SNS have consumers spread worldwide, but their ToS contain forum selection clauses listing the courts and districts of California, meaning users will be discouraged from bringing a copyright suit for lack of an option to file a claim in their home country. The US case Agence France Presse (AFP) v. Morel supports two conclusions: it offers hope that SNS will not rely on ToS to shield further use of the user's work, and it strengthens the idea that these platforms may choose to license to their partners. Further, in 2018, the Paris Tribunal declared most clauses of Twitter's ToS “null and void” due to the nature of the license and because it did not comply with the French Intellectual Property Code. This gives faint hope for a positive shift in the legal treatment of user-generated content. Although these sites claim to retain the sublicensing right to run their services smoothly, the license is very broad and opens the possibility of many uses of the content without compensation to the user. This paper therefore aims to highlight and give insight into the unfair licensing terms of the most often used social networking sites and their implications.

22 February 2024

Fake News

'Tackling online false information in the United Kingdom: The Online Safety Act 2023 and its disconnection from free speech law and theory' by Peter Coe in (2024) Journal of Media Law comments  

In the UK, there has been consistent recognition from a variety of actors, including the UK government, that the dissemination of false information can be harmful to individuals and the public sphere. It has also been acknowledged that this problem is being exacerbated by the role played in our lives by the likes of Google, Facebook, Instagram, and X, and because the systems that were in place for dealing with this type of content (and other illegal and/or harmful content), prior to the introduction of the Online Safety Act 2023 (OSA), were designed for the offline world, and were (and in some cases, still are) outdated and no longer fit for purpose. 

The UK’s online harms regime has intensified this debate. The regime began life in April 2019 as the Online Harms White Paper, morphing into multiple iterations of the Online Safety Bill (OSB), published in its original form in May 2021, and finally crystallising as the OSA, which was enacted on the 26th of October 2023. On the one hand, it is acknowledged that legislation placing statutory responsibilities on internet services to prevent the publication of false information (and other illegal and harmful content) may benefit society and public discourse. This is because, in theory at least, by helping to decrease the volume of false information we are exposed to, such laws should reduce the opportunities for the public sphere to become distorted. As citizens we should be able to assess, with greater confidence, the veracity of information available to us, and in turn, use this information, and the trust we have in it, to make positive contributions to public discourse. 

But, on the other hand, the OSA has been (and before it, the OSB was) met with significant resistance from a variety of actors because of the potential threats to free speech that it presents.  Indeed, since the publication of the White Paper, and the initial draft of the OSB, the regime has been shrouded in controversy. The OSB was subject to numerous amendments, and at one stage, it looked as though it would be scrapped altogether. Yet despite this, at the time of writing, the OSA has recently been enacted, albeit the overall shape of the regime remains unclear, because much of the legal detail will be contained in secondary legislation. Therefore, debates on the efficacy of the OSA will continue, and only time will tell what its ultimate impact on free speech will be. 

Notwithstanding this uncertainty, the purpose of this article is to interrogate the regime’s compatibility with free speech law and theory. In doing so, it begins with an explanation of what is meant by false information, and how the phenomenon has been exacerbated by the internet. This is followed by analysis of the pre-OSA system for dealing with this content, and an explanation of why it did not work, as aspects of it have a bearing upon the OSA regime. Next, the contours of the free speech framework are sketched, including relevant jurisprudence of the European Court of Human Rights (ECtHR), and the theories underpinning it that are particularly relevant to online false information. In this section I explain why these theories are flawed in this context, and therefore how these flaws could justify the creation of laws to tackle online false information. Yet, as I go on to suggest in my analysis of the OSA, which follows, this creates a paradoxical disconnect between theory and law, in that although the flaws in the theories may justify the creation of such laws – which manifests as the OSA – its creation arguably conflicts with the ECtHR’s jurisprudence, and the spirit of its theoretical foundations, and could inadvertently interfere with free speech. Finally, the article concludes with some potential solutions for meeting this challenge that do not erode one of the core fundamental human rights.

01 December 2023

Platform Regulation

The Senate Standing Committee on Economics in its Influence of international digital platforms report comments 

Regulatory fragmentation 

Submitters raised concerns that the current digital platforms sphere involves duplicated and overlapping regulations, and there is a need for a coordinated approach to digital regulation. 

BSA – The Software Alliance (BSA) suggested ‘the Committee consider the broader issue of how to reduce regulatory overlap, including by promoting improved coordination between regulators, policymakers, and the private sector’. 

The Tech Council of Australia stated regulation: … should aim to deliver a more coordinated and cohesive approach to digital regulation that enables long-term growth of the Australian technology sector in the national interest, including by avoiding overly broad or piecemeal approaches to regulation, which our research has found to be a key barrier to innovation and capturing the benefits of new technologies. 

Significant regulatory gaps were also highlighted in evidence to the committee. For example, the committee questioned Digital Platforms Regulators Forum (DP-REG) members about which agency held responsibility for protecting Australians from spending on unused subscription fees. The DP-REG members were unable to point to an agency that would hold that remit. 

Senator Shoebridge asked: Isn't this one of the problems in this whole space? There's no lead agency. There's no-one who's ultimately responsible. It must frustrate you no end, which is one of the reasons you've brought together this informal forum. There's just no lead agency, is there?

Ms Elizabeth Hampton, Deputy Commissioner, Office of the Australian Information Commissioner (OAIC), responded: I don't agree that it's the frustration around a lack of a lead agency that caused us to coalesce and come together. Instead, we've reflected on the fact that we each have an important different lens to bring to a set of issues, and it's the coordination of those different lenses that results in a really good outcome for Australians. 

In response to committee concerns that there is no agency with ultimate responsibility when regulatory gaps are identified, Ms Creina Chapman, Deputy Chair, the Australian Communications and Media Authority, explained: The gap is not in the regulators; the gap is not in the fact that there is not a regulator that has responsibility for it. If there is a gap, it is a legal gap. 

Proposed solutions 

Upskill and empower existing regulators 

Submissions recommended upskilling existing regulators, such as the Australian Competition and Consumer Commission (ACCC) or OAIC, so they have the adequate skills and resources to regulate the behaviour of digital platforms. 

The Consumer Policy Research Centre stated regulators need specific expertise to regulate digital platforms: Monitoring and surveillance by regulators in this complex environment needs a diverse workforce that not only understands the implications of the law but also the technical architecture on which these business models are built. Experts such as data scientists, artificial intelligence engineers, information security analysts and other technical professionals need to be in the mix to support upstream regulation and mitigate the risk to consumers, potentially before widespread harm has occurred. 

Similarly, the Human Rights Law Centre (HRLC) supported a comprehensive regulatory framework that includes ‘broad information-gathering and enforcement powers for an independent, well-resourced and integrated regulator’. The HRLC told the committee this regulator should be empowered and have robust information-gathering powers. 

Better coordination between regulators and policy makers 

Evidence to the committee also recommended the creation of a new model of coordination between existing regulators and policy makers. 

.au Domain Administration Ltd (auDA) told the committee that closer engagement with stakeholders is needed at all stages of policy development. It advocated for coordinated efforts between regulators, policy makers, the private sector, technical community, academia, and civil society. It recommended: … all relevant regulators and government departments actively participate in a multi-stakeholder policy development approach. This would help to avoid siloes and overlapping consultation processes facilitated by different government entities and drive greater certainty amongst industry and consumers. 

Similarly, BSA stated DP-REG is comprised of only regulators, with policy makers and industry representatives absent from the conversation. It recommended considering the Australian National University’s Tech Policy Design Centre’s (ANU Tech) proposed model to increase involvement from industry representatives and independent technical expertise. BSA argued: The increased involvement of industry representatives will provide the government with access to independent technical expertise and a regular platform for consultations. More importantly, it will discourage taking a reactionary approach when addressing emerging concerns and ultimately will pave the way for a more certain regulatory environment. 

ANU Tech’s proposed ‘Tech policy coordination model’ includes the following layers of coordination: 

• The Tech Policy Ministerial Coordination Meeting is the peak Ministerial coordination body in the Australian tech-ecosystem. Its objective is to facilitate cross-portfolio Ministerial coordination before tech policy proposals are taken to Cabinet. 

• The Tech Policy Council is the peak senior officials’ coordination body in the Australian tech-ecosystem. Its objective is to improve coordination among and between policymakers and regulators. 

• The Tech Regulators Forum is the peak regulator coordination body in the Australian tech-ecosystem. Its objective is to improve coordination among tech regulators. 

auDA supported ANU Tech’s model as it ‘does not change any existing mandates of Ministers, departments or agencies, but helps cultivating coordination at all stages of tech policy development’. 

The Australian Information Industry Association noted ANU Tech’s proposal and similarly recommended establishing a Council of Tech Regulators which: … would work to a similar model as the Council of Financial Service Regulators and be comprised of authorities such as the eSafety Commissioner, the Australian Information Commissioner, the Digital Transformation Agency, the Department of Home Affairs, Treasury, the Attorney General’s Department and the Australian Cyber Security Centre. The Council would ensure that, as far as possible, regulation is streamlined and rationalised to mitigate overregulation, red tape, duplicative reporting requirements and parallel consultation timeframes. Breaking down silos and ensuring that in respect of technology – the all-pervasive, innovative and value-creating engine at the heart of the economy – the left hand of government knows what the right is doing as far as regulation and reporting is concerned, and regulatory impost is contained as far as possible. 

Parliamentary committee 

The HRLC recommended a dedicated Parliamentary committee on digital matters be established to acknowledge the ongoing attention required on emerging tech issues and policy coordination across Government. 

A joint submission from multiple research organisations similarly proposed Parliament establish a Joint Standing Committee on Digital Affairs. They stated: A dedicated standing Committee would allow for a better allocation of time, resources and expertise and help develop a more sophisticated understanding of digital and technology policy. Existing portfolio committees are overworked and their broad remits mean that they neither have the capacity nor time to proactively interrogate emerging tech issues. 

A digital platforms specific body 

Some submissions raised support for a new digital platforms specific body. 

Ben Blackburn Racing recommended consideration of ‘the introduction of a new Australian Government agency which could bring more independence to oversight of the influence and decision-making structures of Big Tech companies and their impacts in Australia’. 

Mr Rupert Taylor-Price, Chief Executive Officer, Vault Cloud, discussed how there is no clear regulator in the digital platforms space, and there needs to be one: It's a bit like when you get on a plane. To some degree, you don't have to worry too much about who's providing you that service. You know that it's a well-regulated industry. You know that there's a degree of safety by getting on that plane. That's what CASA and other regulators in that space affect in the outcome that they get for their citizens. In the technology space, say that you didn't like the way an algorithm had worked for you in some way on one of these platforms. How do you deal with that? If you go to a bank, you go to APRA. If you get on a plane, you go to CASA. Who do you go to as a citizen when you have an issue with a technology platform? 

The Law Institute of Victoria recommended: … the introduction of a new government regulatory authority, or the establishment of a collaborative team across existing regulatory bodies, tasked with overseeing the regulation of Big Tech companies specifically … [it] would need to be sufficiently resourced in order to provide any meaningful opportunity for appropriate regulation. 

Ms Kate Pounder, Chief Executive Officer, Tech Council of Australia, stated the US National Institute of Standards and Technology (NIST) could be examined as a model that brings together competition, consumer and data issues. NIST is an agency of the US Department of Commerce that produces expert standards and guidelines. Ms Pounder stated: … often in these new areas, particularly when technology is moving fast, there's not a high degree of expertise. So I think centralising that in one body, which can provide expert guidance to governments and work fairly rapidly to get standards and guidance material out, is vital. It can take a science based and evidence based model. Often the work of NIST ends up being utilised in other markets. I think there's also an opportunity for Australia to simply leverage that a bit better and aim for coherence with some of the guidelines that come out there. It often tends to happen in the private sector, because an Australian company that's successful in the tech sector will be selling globally, so they might look to those guidelines and try to adhere to them. 

Digital Rights Watch recommended a Minister for Digital Capabilities be appointed. 

Committee view 

This section provides the committee’s view on key themes and concerns raised throughout this inquiry and the committee’s recommendations. 

Regulation 

Throughout this report and particularly earlier in this chapter, evidence was presented that the current regulatory system is not working effectively. Regulation of digital platforms is split across various agencies, in some cases with competing priorities. 

The committee found that the current legislative and regulatory framework is not sufficient to ensure positive outcomes for consumers and competition. In short, it is fragmented. 

The committee acknowledges the importance of well-resourced and appropriately skilled regulators to ensure adequate enforcement efforts achieve the desired outcomes. The committee is concerned that upskilling existing regulators alone will not resolve regulatory gaps or provide the expertise needed to address emerging competition and consumer risks. 

Stakeholders highlighted that despite the market power of Big Tech and potential for harm, digital platforms are not regulated like other significant industries, such as banks, telecommunications providers and airlines. The committee considers that a new regulatory regime could address fragmentation and bolster regulatory efficacy. 

Evidence to the committee also highlighted the need for better coordination between regulatory bodies and policymakers. Improved coordination would streamline legislation and regulatory efforts. Further, a coordinating body would give consumers and digital platforms certainty about where to turn to when issues arise. 

Accordingly, the committee recommends a new coordination body be established, which does not alter or acquire the day-to-day functions of the four main DP-REG agencies but coordinates collaboration efforts, common responsibilities and tasks.

The committee's recommendations are: 

R1 The committee recommends that the Australian Government establish a digital platforms coordination body. 

Competition 

Chapters 3 and 4 considered issues that have arisen due to the concentrated market power of Big Tech. The committee heard evidence that the dominant market power of Big Tech has allowed these firms to engage in anticompetitive behaviours and exploit power imbalances to the detriment of small businesses and consumers. 

A range of submitters told the committee that the market power of Big Tech allows these firms to engage in anticompetitive tying and self-preferencing. These practices make it difficult for other companies, particularly small businesses, to compete, resulting in reduced competition, less choice for consumers and increased prices. 

The committee has heard that Big Tech platforms may impede consumers from switching products or services through tying practices that lock consumers into one provider.

Submissions raised concerns that app store providers tie the use of app store services to the use of their in-app payment (IAP) services. App stores take up to a 30 per cent commission on every IAP and restrict app-developers from providing their own IAP mechanisms. 

The committee is concerned that the tying of IAPs creates a barrier to entry for competitors and limits the choices available to consumers. Further, the committee believes there is a lack of transparency in how commission fees are determined, and how app stores use the IAP data they collect. 

Furthermore, the committee has heard that regulation of near-field communication mobile device components and mobile wallets is needed to ensure consumers have similar rights against large digital platforms compared to regulated financial institutions that provide payment services. 

Other jurisdictions such as the European Union (EU) and South Korea have introduced measures that require major app store operators such as Apple and Google to unbundle the use of their proprietary in-app payment systems from the use of app distribution services. 

Accordingly, the committee supports introduction of legislation that will address anti-competitive tying by Big Tech platforms to ensure a level and competitive playing field. 

R2  The committee recommends that the Australian Government introduce legislation to prevent anti-competitive practices through the bundling of payment services and products by large digital platforms.  

The committee is concerned that self-preferencing conduct may be anti‑competitive and create barriers to entry for small businesses. 

Multiple submissions called for regulation that tackles anti-competitive self‑preferencing by gatekeeper companies and referred to international approaches that could be adopted. For instance, the United Kingdom (UK) has proposed a pro-competition regime for digital markets. This regime will include measures to address anti-competitive self-preferencing by requiring digital platforms to not influence competitive processes or outcomes in a way that unduly self-preferences a platform’s own services over that of its rivals. 

The committee is of the view that there needs to be greater transparency on the part of large digital platforms regarding the practice of self-preferencing their own products. 

The committee believes this warrants mandatory public disclosure by large international platforms when they engage in self-preferencing behaviour for their own products on app stores and other digital markets. Furthermore, large digital platforms should disclose aggregate information on the data collected from customers and business users for reasons other than the app review process. 

R3  The committee recommends that the Australian Government require mandatory disclosure by large digital platforms of self-preferencing conduct. 

Dispute resolution 

In Chapter 4, the committee considered consumer redress options within the digital economy. While Big Tech firms invest in a range of mechanisms to prevent and minimise problems for consumers, a significant number of problems and disputes are unable to be resolved within existing systems. 

Internal dispute resolution mechanisms provided by digital platforms are an important first point of redress. However, consumers encounter many difficulties navigating these mechanisms and the power imbalance between Big Tech providers and consumers is evident. 

The committee supports the introduction of mandatory digital platform internal dispute resolution standards. 

R4 The committee recommends the Australian Government implement mandatory dispute resolution requirements for large digital platforms via regulation. 

Judicial escalation of disputes with digital platforms is generally not financially accessible for most consumers, nor expeditious enough to address problems before serious harm occurs. Small businesses and consumers are therefore reliant on a regulator choosing to prosecute their case; however, regulators such as the ACCC focus their resources on systemic issues. 

The committee is concerned that consumers are left with no realistic escalation options once business-to-business dispute resolution, perhaps with the assistance of an independent advocate or mediator, has been exhausted. 

The committee considers the proposal for a judicial escalation option akin to a state-level small claims tribunal has merit. 

R5  The committee recommends the Australian Government establish a tribunal for small disputes with digital platforms. 

Transparency 

Chapters 5 and 6 highlighted concerns about transparency of data use by Big Tech, including by algorithms and in automatic decision-making. 

Data collection by digital platforms occurs on a grand scale, often without explicit consent from users. Data brokers aggregate data to on-sell for commercial use, such as targeted advertising. Submissions raised concerns that consumer data can be used for profiling and discrimination, without consumers being aware that their data was collected. 

The committee suggests measures be implemented to ensure customers are aware of what personal data is being collected by digital platforms and what it is used for. A greater effort should be made by digital platforms and the Australian Government to ensure personal data of individuals is adequately protected. 

The committee proposes implementation of a public data reporting regime requiring Big Tech firms to: provide details of the targeting criteria for advertising and data determining which users are exposed to particular ads; and provide key metrics on demographic data collected for the purposes of targeting advertising, particularly children’s data. 

The committee notes that the EU Digital Services Act requires platforms that display advertising material on their online interfaces to ensure users can identify, for each advertisement displayed, that the information is an advertisement, who the advertisement is on behalf of and the parameters selecting recipients of the advertisement. Some digital platforms have responded to this by creating an online repository of advertisers. This model could be considered by the government. 

Mandatory reporting of data collection by digital platforms should be modelled on the obligations imposed on superannuation funds to disclose certain information in notices for annual members’ meetings. 

Chapter 6 discussed concerns that algorithms used by digital platforms may not operate in a way that adequately supports community values, such as fairness, accuracy, privacy and user safety. 

Evidence supported international approaches to strengthen the transparency of algorithmic use by digital platforms. In particular, the UK and the EU have implemented transparency standards for the use of algorithmic tools. 

Large digital platforms should be subject to data access obligations and transparency measures which extend to algorithms used for content recommendation and for targeted marketing. 

The committee supports the development of a risk-based regulatory framework by the proposed digital platforms coordination body. The framework should place the onus on digital platforms to identify risks created by their use of algorithms and outline how they will address those risks. 

R6 The committee recommends the Australian Government implement a requirement for designated digital platforms to report advertising material via a public register, based on turnover, and that it implement mandatory reporting on algorithm transparency, data collection and profiling by very large platforms, particularly identifying what personal data is collected and how it is used. 

The committee notes the Privacy Act Review proposal to create a right of data erasure. 

Submissions highlighted that individuals have limited rights when it comes to how their data is used. A right to erase personal data would give individuals more control over their own information when engaging with digital platforms. 

The committee notes any right of erasure must extend beyond an individual’s ability to delete data, such as photos or posts, which they have voluntarily shared online, to also encompass biographical information, geolocation data, browsing habits, ‘likes’ and other data surreptitiously collected and collated by digital platforms. 

R7  The committee recommends that the Australian Government regulate an individual’s right to delete personal data. 

Children’s data 

As highlighted in Chapter 8, children’s online data collection raises particular security and personal risks. Evidence suggested that the changes in digital platforms’ practices required to protect children online will only occur when mandatory codes with penalties for non-compliance are introduced and enforced. 

The committee considers that additional regulation of children’s data protection and privacy rights is necessary. The committee recommends implementing a mandatory code for the protection of children online, addressing regulatory fragmentation and aligning the rights of Australian children with international jurisdictions. 

R8  The committee recommends the Australian Government legislate for mandatory industry codes on the collection, use and retention of children’s data.