
19 March 2026

TIA

Commonwealth Ombudsman Oversight of Covert Electronic Surveillance – 

Report to the Minister for Home Affairs on agencies’ compliance with the Telecommunications (Interception and Access) Act 1979 and the Telecommunications Act 1997 from Commonwealth Ombudsman inspections conducted from 1 July 2024 to 30 June 2025 

 https://www.ombudsman.gov.au/__data/assets/pdf_file/0022/325390/Oversight-of-Covert-Electronic-Surveillance-Report-2024-2025-AMENDED.pdf

13 December 2024

Information Privacy

The Privacy and Other Legislation Amendment Bill 2024 (Cth) amends the Privacy Act 1988 (Cth) to implement an initial tranche of reforms arising from the proposals of the 2022 review of the Privacy Act. 

Key reforms include a statutory tort for serious invasions of privacy, establishing a Children’s Online Privacy Code, new powers for the Minister to direct the Commissioner to develop and register Australian Privacy Principles (APP) codes and conduct public inquiries, a new civil penalty for acts and practices which interfere with the privacy of individuals (but fall below the threshold of ‘serious’ invasion) and a civil penalty infringement notice scheme for specific APP and other obligations. The Criminal Code Act 1995 (Cth) is amended to introduce two offences for doxxing, ie the menacing or harassing release of personal data using a carriage service. 

The Bill does not remove the current small business exemption in the Privacy Act. 

 APP codes and temporary APP codes developed 

The APP codes are written codes of practice about information privacy that set out how the APPs are to be applied or complied with by specified APP entities and can impose additional requirements. Once registered, APP codes are binding on the specified APP entities. Currently, APP codes may be developed by ‘code developers’ (such as a body representing a group of APP entities) on their own initiative or at the request of the Commissioner (if satisfied it is in the public interest). If the Commissioner’s request has not been complied with, or the Commissioner has decided not to register the APP code, the Commissioner can also develop an APP code if ‘satisfied that it is in the public interest’ (ss 26E, 26F and 26G). 

 Schedule 1, Part 2 of the Bill amends the Privacy Act to enable the Minister to direct the Commissioner to develop and register APP codes and temporary APP codes. These codes must not cover certain acts and practices which are exempt under the Privacy Act (such as individuals acting in a non-business capacity, organisations acting under Commonwealth contract and employee records). The Minister may direct the Commissioner to develop an APP code ‘if the Minister is satisfied that it is in the public interest’. The Minister may direct the Commissioner to develop a temporary APP code if the Minister is satisfied that ‘it is in the public interest’ and ‘the code should be developed urgently’. The period a temporary APP code may be in force ‘must not be longer than 12 months’. 

Although a registered APP code is a legislative instrument (section 26B) and subject to the usual parliamentary disallowance processes, the Minister’s written directions to the Commissioner would not be legislative instruments (subsections 26GA(3) and 26GB(3)). A temporary APP code would not be subject to the usual parliamentary disallowance processes (subsection 26GB(8)). This may represent a significant new ministerial power to direct the Commissioner to impose privacy requirements on specified APP entities without parliamentary oversight. 

 Penalties and remedies 

Currently section 13G of the Privacy Act outlines civil penalties which may be imposed for ‘serious’ or ‘repeated’ interferences with privacy. The amendments in Schedule 1, Part 8 will refocus section 13G on ‘serious’ interferences with privacy. Whether an act or practice was done or engaged in ‘repeatedly or continuously’ will be one of the factors which a court may take into account in determining if an interference with privacy was ‘serious’. The maximum amounts of the penalties in section 13G (substantially increased in 2022) would remain the same. Other provisions will expand the Commissioner’s options to seek penalties and other remedies for interferences with privacy which may not reach the threshold of being ‘serious’. Section 13H creates a civil penalty if an entity does an act, or engages in a practice, which interferes with the privacy of an individual. The maximum penalty will be 2,000 penalty units for an individual (currently $660,000) or 10,000 penalty units for a body corporate ($3,300,000). Section 13K and amendments to section 80UB introduce a scheme for civil penalty infringement notices to be issued for breaches of a number of specific obligations under the APPs and non-compliant eligible data breach statements. Those obligations include APP 1.4 (failure to include required information in an APP privacy policy) or non-compliance with subsection 26WK(3) which sets out what must be contained in an eligible data breach statement. 

Paragraph 13K(1)(x) would allow other APP obligations to be prescribed by regulation as part of the infringement notice scheme. The maximum civil penalty for a breach of subsections 13K(1) and (2) would be 200 penalty units (currently $66,000). However, under subsection 104(2) of the Regulatory Powers (Standard Provisions) Act 2014 (Cth) the amount payable under an infringement notice for one alleged contravention would be 12 penalty units for an individual (currently $3,960) and 60 penalty units for a body corporate (currently $19,800). Further amendments to section 80UB of the Privacy Act modify the applicable number of penalty units for infringement notices given to publicly listed corporations which would be worked out by ‘multiplying the number of alleged contraventions by 200’. Schedule 1, Part 9 inserts section 80UA which provides that Federal Courts will also have the power to make a range of orders in civil penalty proceedings where a contravention of a civil penalty provision under the Privacy Act has been established. 
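The dollar figures quoted above all follow the same arithmetic: penalty units multiplied by the Commonwealth penalty unit value current when the Bill was introduced. The short sketch below simply reproduces that arithmetic; the $330-per-unit value is an assumption inferred from the amounts cited in this post, and the unit value is indexed so it will change over time.

```python
# Worked arithmetic only: converts penalty units to dollar amounts using the
# $330-per-unit value implied by the figures quoted above (an assumption; the
# Commonwealth penalty unit value is indexed and changes over time).
PENALTY_UNIT_DOLLARS = 330

def to_dollars(units: int) -> int:
    return units * PENALTY_UNIT_DOLLARS

# Section 13H maximums
assert to_dollars(2_000) == 660_000       # individual
assert to_dollars(10_000) == 3_300_000    # body corporate

# Section 13K maximum and infringement notice amounts
assert to_dollars(200) == 66_000          # maximum civil penalty
assert to_dollars(12) == 3_960            # infringement notice: individual
assert to_dollars(60) == 19_800           # infringement notice: body corporate
```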

  Public inquiries 

Part IV of the Privacy Act deals with the functions of the Commissioner. Schedule 1, Part 10 would insert provisions into Part IV to allow the Minister to direct the Commissioner to conduct, or to approve the Commissioner conducting, public inquiries into specified matters relating to privacy. The Minister’s direction or approval would not be a legislative instrument. For the purposes of a public inquiry, the Commissioner would be able to invite public submissions and use existing investigation powers to obtain documents and examine witnesses (sections 44 and 45). After completing the public inquiry, the Commissioner must prepare a written report for the Minister. If any entities have been specified in the Minister’s direction or approval of the public inquiry, they will also receive a copy. The Minister must table a copy before each House of the Parliament within 15 sitting days and the Commissioner must make the report publicly available (unless the Minister otherwise directs). 

 Monitoring and investigation powers 

Schedule 1, Part 14 inserts new divisions into Part VIB to add monitoring and investigation powers set out in the Regulatory Powers (Standard Provisions) Act 2014 (Cth). These powers include entry, search and seizure powers. The Explanatory Memorandum (p. 65) states: ‘Bringing the Information Commissioner’s regulatory powers in line with the standard provisions would provide additional powers and greater safeguards to ensure they are robust and align with best practice. Additionally, ensuring uniformity with the standard provisions would bring the Information Commissioner’s powers in line with comparable domestic regulators, and increase legal certainty for entities and individuals who are subject to those powers.’ 

Children’s Online Privacy Code 

The introduction of a Children’s Online Privacy Code (COPC) was a proposal of the Privacy Act review, with the Government indicating $3 million in funding for the OAIC to develop the code over 3 years. Schedule 1, Part 4 contains the provisions to establish a COPC. Section 26GC provides that the Commissioner must develop and register an APP code about online privacy for children within 24 months. Item 30 inserts a definition of the term child into the Privacy Act, meaning ‘an individual who has not reached 18 years’, which is consistent with the Online Safety Act 2021 and the UK’s age appropriate design code. 

 The Commissioner will have a broad discretion regarding who ‘may’ be consulted in developing the COPC (subsection 26GC(8)). This differs from the Privacy Act review which recommended developers ‘should be required to consult broadly with children, parents, child development experts, child-welfare advocates and industry’ (p. 157). Before registering the COPC, the Commissioner must make a draft of the COPC available, invite and give consideration to public submissions and consult with the eSafety Commissioner and the National Children’s Commissioner (subsection 26GC(9)). The COPC must set out how the APPs are to be applied or complied with in relation to the privacy of children. 

The COPC will not cover some acts and practices that are exempt under the Privacy Act, such as individuals acting in a non‑business capacity, organisations acting under Commonwealth contract or employee records. Under subsection 26GC(5) the entities bound by the COPC are providers of social media services, relevant electronic services or designated internet services (within the meaning of the Online Safety Act 2021 (Cth)) who are ‘likely to be accessed by children’ and are not providing a health service. There will also be a capacity to specify other APP entities for the purposes of the COPC. 

  Emergency declarations 

Part VIA of the Privacy Act contains provisions dealing with personal information in emergencies or disasters. This includes provision for the Prime Minister or the Minister to make a declaration of emergency in certain circumstances to allow for the wider sharing of information (sections 80J and 80K). Schedule 1, Part 3 amends Part VIA to require a more targeted approach to emergency declarations. Section 80KA will set out the matters which must be specified in an emergency declaration. These matters include: the kind or kinds of personal information to which the declaration applies; the entity or class of entities that may collect, use or disclose the personal information and the entity or class of entities that the personal information may be disclosed to; and one or more permitted purposes of the collection, use or disclosure (these must be purposes which directly relate to the Commonwealth’s emergency or disaster response). 

Eligible data breach declarations 

Part IIIC of the Privacy Act establishes a scheme which requires regulated entities to notify certain individuals and the Commissioner about ‘eligible data breaches’. Data breaches are ‘eligible’ if they are likely to result in serious harm to any of the individuals to whom the information relates (section 26WE). Schedule 1, Part 7 of the Bill inserts Division 5 into Part IIIC to allow the Minister to make eligible data breach declarations which, similarly to emergency declarations, would permit ‘collections, uses and disclosures of personal information…to prevent or reduce the risk of harm to individuals’. These declarations can be made where the Minister is satisfied it is ‘necessary or appropriate’ to prevent or reduce a risk of harm arising from the misuse of personal information from an eligible data breach (subsection 26X(1)). 

Consistent with the amendments to facilitate emergency declarations, an eligible data breach declaration must specify particular matters: the kind or kinds of personal information to which the declaration applies; the entity or class of entities that may collect, use or disclose the personal information; the entity or class of entities that the personal information may be disclosed to; and one or more permitted purposes of the collection, use or disclosure. A ‘permitted purpose’ must be a purpose directly related to preventing or reducing a risk of harm to one or more individuals at risk from the eligible data breach. 

  Restrictions on disclosures and disallowance 

For both emergency and eligible data breach declarations the specified entities or classes of entities may include State and Territory authorities but must not be, or include, a media organisation (subsections 80KA(2) and 26X(3)). Under subsections 80J(3) and 26X(10) both emergency and eligible data breach declarations would be legislative instruments but would not be subject to the usual parliamentary disallowance process, justified as necessary ‘to ensure that decisive action can be taken’ and ‘to establish an immediate, clear and certain legal basis for entities to handle personal information’. 

  Overseas data flows 

APP 8 addresses cross-border disclosures of personal information. In particular, APP 8.1 provides that before an APP entity discloses personal information about an individual to an overseas recipient, the entity must take ‘reasonable steps’ to ensure that the recipient does not breach the APPs (except APP 1) in relation to that information. This operates in conjunction with section 16C of the Privacy Act which essentially makes APP entities accountable for breaches of the APPs where they have disclosed personal information about an individual to an overseas recipient under APP 8. APP 8.2 provides exceptions from the obligations under APP 8.1. This includes APP 8.2(a) which provides an exception where the recipient is subject to a law or binding scheme with ‘substantially similar’ protections for personal information as the APPs. The amendments in Schedule 1, Part 6 will allow the Minister to prescribe countries and binding schemes which would fall under the existing exception in APP 8.2(a). 

New paragraph 8.2(aa) adds an exception to APP 8.1 where countries or binding schemes are prescribed under APP 8.3. Amendments are made to the regulation-making power in section 100 of the Privacy Act to provide that the Minister may only prescribe a country or binding scheme where the laws of the country, or the binding scheme, protect personal information in a way that, ‘overall, is at least substantially similar to the way in which’ the APPs protect information, and there are mechanisms that the individual can access to take action to enforce that protection. 

The Attorney-General’s second reading speech indicated the amendments would give businesses and individuals ‘greater confidence’ in the safety of personal information and would ‘reduce costs’ for businesses when entering into arrangements with overseas entities. Article 45(3) of the EU’s GDPR contains a comparable provision for the European Commission to decide that a non-EU country ‘has an adequate level of data protection’ to facilitate lawful international data flows without further safeguards. Countries with a comparable provision in their privacy laws include Japan and New Zealand. 

  Automated decision-making and privacy policies 

Currently, the APPs regulate the content and availability of privacy policies of APP entities (APP 1.3–1.6). Schedule 1, Part 15 introduces new requirements for APP entities concerning the information that must be included in their privacy policies about the kinds of personal information used, and types of decisions made, in automated decision-making. Subclause APP 1.7 requires an APP entity to include certain information if: the entity has arranged for a computer program to make, or do a thing that is substantially and directly related to making, a decision; the decision could reasonably be expected to significantly affect the rights or interests of an individual; and personal information about the individual is used in the operation of the computer program to make the decision or do the thing that is substantially and directly related to making the decision. The information which must be included in the privacy policy is set out in subclause APP 1.8: the kinds of personal information used in the operation of such computer programs; the kinds of such decisions made solely by the operation of such computer programs; and the kinds of such decisions for which a thing, that is substantially and directly related to making the decision, is done by the operation of such computer programs. The amendments in Schedule 1, Part 15 would commence 24 months after Royal Assent. 

  Statutory tort for serious invasions of privacy 

This blog - and several of my submissions to parliamentary and law reform inquiries - has noted a long succession of recommendations for establishment of a statutory tort for serious invasions of privacy. The Privacy Act review proposed the tort should be introduced ‘in the form recommended by’ the Australian Law Reform Commission in its Serious Invasions of Privacy in the Digital Era report. Schedule 2 of the Bill inserts a Schedule 2 into the Privacy Act to establish a cause of action for serious invasions of privacy. The intention is that the Schedule will be read and construed separately from the rest of the Privacy Act. The new provisions in Schedule 2 would commence on the earlier of Proclamation or 6 months after Royal Assent. 

Under subclause 7(1), a plaintiff will have a cause of action in tort against a defendant where: the defendant invaded the plaintiff’s privacy by intruding upon the plaintiff’s seclusion, by misusing information that relates to the plaintiff, or both; a person in the position of the plaintiff would have had ‘a reasonable expectation of privacy in all of the circumstances’; the invasion of privacy was intentional or reckless; and the invasion of privacy was serious. The term intruding upon the seclusion of an individual is defined in subclause 6(1) as including (but not being limited to) ‘physically intruding into the person’s private space’ and ‘watching, listening to or recording the person’s private activities or private affairs’. ‘Misusing information that relates to an individual’ is defined as including (but not being limited to) ‘collecting, using or disclosing information about the individual’. Guidance on a threshold regarding how closely information must ‘relate to the plaintiff’ does not appear to be included. 

Subclause 7(7) clarifies that where a defendant invades the plaintiff’s privacy by misusing information that relates to the plaintiff, ‘it is immaterial whether the information was true’. Under subclause 7(2) the new tort would be ‘actionable without proof of damage’. A range of factors are listed which a court may consider in determining whether ‘a person in the position of the plaintiff would have had a reasonable expectation of privacy in all of the circumstances’ and whether the invasion of privacy was serious (subclauses 7(5) and (6)). Clause 14 will set limitation periods within which actions must be commenced. Plaintiffs must commence an action before the earlier of ‘1 year after the day on which the plaintiff became aware of the invasion of privacy’ and ‘the day that is 3 years after the invasion of privacy occurred’. If the plaintiff was under 18 at the time when the invasion of privacy occurred, that person must commence an action before their 21st birthday. 
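As a rough illustration of how the clause 14 limitation rule described above operates, the sketch below computes the latest date for commencing an action. The function and parameter names are assumptions for illustration, not drawn from the Bill's drafting.

```python
from datetime import date

def add_years(d: date, n: int) -> date:
    """Add n years, falling back to 28 February if the start date is 29 February."""
    try:
        return d.replace(year=d.year + n)
    except ValueError:
        return d.replace(year=d.year + n, day=28)

def commencement_deadline(invasion: date, became_aware: date, plaintiff_dob: date) -> date:
    """Deadline for commencing an action, per the description above: the earlier of
    1 year after the plaintiff became aware of the invasion and 3 years after the
    invasion occurred; a plaintiff who was under 18 when the invasion occurred must
    commence the action before their 21st birthday."""
    if add_years(plaintiff_dob, 18) > invasion:   # under 18 at the time of the invasion
        return add_years(plaintiff_dob, 21)
    return min(add_years(became_aware, 1), add_years(invasion, 3))
```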

 Tort Defences 

Where a defendant relies on a public interest in the invasion of privacy, the plaintiff must satisfy the court that this public interest is outweighed by the public interest in protecting the plaintiff’s privacy (subclause 7(3)). Clause 8 lists a range of other defences to claims of invasion of privacy, including: where it was required or authorised by or under an Australian law or court/tribunal order; where the plaintiff, or another authorised person, expressly or impliedly consented; where the defendant reasonably believed it was necessary ‘to prevent or lessen a serious threat to the life, health or safety of a person’; and where it was both incidental to the exercise of a lawful right of defence of persons or property and ‘proportionate, necessary and reasonable’. 

It will also be a defence to a cause of action where the invasion of privacy has occurred ‘by publishing’ within the meaning of defamation law and there is a defamation law ‘related defence’ which the defendant is able to establish. These ‘related defences’ would be: a defence of absolute privilege (such as publication of parliamentary or court proceedings); a defence for publication of public documents; and a defence of fair report of proceedings of public concern. These three defences are not the only defences available in defamation law: the Explanatory Memorandum indicates the other defences were not included ‘because they are not relevant in the context of the statutory tort’. 

Remedies and damages under the Tort

Under the new tort, courts may award damages for ‘emotional distress’. Courts may also award exemplary or punitive damages for invasions of privacy in exceptional circumstances (damages intended to deter or sanction conduct) but will not be able to award aggravated damages (intended to compensate the plaintiff for egregious harm). Subclause 11(5) sets out a maximum cap for damages for non-economic loss and exemplary or punitive damages, which must not exceed the greater of $478,550 or the maximum amount of damages for non-economic loss under defamation law. The model defamation provisions include a mechanism to adjust the maximum damages amount over time (s 35). An ongoing link to the level of defamation damages for non-economic loss means the maximum damages available for invasions of privacy is likely to rise with inflation. Courts will also be able to grant a range of other remedies in addition to, or instead of, damages ‘as the court thinks appropriate in the circumstances’. 
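The cap is a simple 'greater of' comparison, which is why tying it to the indexed defamation cap means it can rise over time. A minimal sketch, with an assumed parameter name:

```python
def privacy_damages_cap(defamation_non_economic_cap: float) -> float:
    """Cap on damages for non-economic loss plus exemplary/punitive damages under
    subclause 11(5): the greater of the fixed $478,550 figure and the defamation-law
    cap for non-economic loss (which is adjusted over time under the model
    defamation provisions)."""
    return max(478_550.0, defamation_non_economic_cap)
```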

 Journalism and other exclusions 

Clause 15 provides that Schedule 2 would not apply to an invasion of privacy to the extent it involves the collection, preparation for publication or publication of journalistic material by a journalist, their employer, a person assisting a journalist who is employed or engaged by the journalist’s employer, or a person assisting a journalist in a professional capacity. The scope of this exclusion is limited by the definition of certain terms. The term journalist is defined as a person who ‘works in a professional capacity as a journalist’ and is subject to ‘standards of professional conduct’ or ‘a code of practice’ that applies to journalists (subclause 15(2)). Material will be journalistic material where it has the character of news, current affairs or a documentary, or consists of commentary or opinion on, or analysis of, news, current affairs or a documentary (subclause 15(3)). Schedule 2 would also not apply to invasions of privacy by: an enforcement body, to the extent the enforcement body believes it is reasonably necessary for enforcement related activities (using the definitions in subsection 6(1) of the Privacy Act); an intelligence agency (using the definition in subsection 6(1) of the Privacy Act), or to the extent it involves a disclosure to, or by, an intelligence agency; or a person who is under 18 years of age (clauses 16, 17 and 18). 

Doxxing offences 

Schedule 3 contains amendments to the Criminal Code to insert two new doxxing offences. Section 474.17C would make it an offence to use a carriage service to make available, publish or otherwise distribute ‘personal data’ in a way that ‘reasonable persons would regard as being, in all the circumstances, menacing or harassing’ towards the individuals concerned. The maximum penalty for this offence would be 6 years imprisonment. Unlike the existing Privacy Act, which uses a concept of ‘personal information’, the definition of personal data in the new offence would be limited to ‘information about the individual that enables the individual to be identified, contacted or located’. This definition would expressly include a number of types of personal data such as an individual’s name, image, telephone number, email address, online account, residential or work address, place of education or place of worship. 

 Section 474.17D makes it an offence to use a carriage service to make available, publish or otherwise distribute the ‘personal data’ of ‘one or more members of a group’. Similar to the above offence, the person must engage in the conduct in a way that reasonable persons would regard as being, in all the circumstances, menacing or harassing towards the members. For this offence to apply, the person must engage in the conduct ‘in whole or in part’ because of their belief that ‘the group is distinguished by one or more protected attributes, such as race, religion, sex, sexual orientation, gender identity, intersex status, disability, nationality or national or ethnic origin’. However, it will be immaterial whether the group is actually distinguished by the relevant attributes (subsection 474.17D(3)). The maximum penalty for this offence would be 7 years imprisonment. 

 Other 

Schedule 1, Part 1 amends the objects of the Privacy Act to clarify that the promotion of the protection of privacy is ‘with respect to’ personal information as well as to ‘recognise the public interest in protecting privacy’ (paragraphs 2A(a) and (aa)). Schedule 1, Part 5 amends APP 11, ie the obligation of APP entities to take reasonable steps to protect the security of personal information which they hold and to destroy or de-identify information they no longer need. APP 11.3 clarifies that these steps include ‘technical and organisational measures’. Part 11 of Schedule 1 expands the declarations which the Commissioner can make where an investigation has found a complaint has been substantiated (section 52). Declarations could include requiring persons or entities to take any reasonable act or course of conduct to ‘prevent or reduce any reasonably foreseeable loss or damage that is likely to be suffered’. Part 12 amends the Commissioner's annual reporting requirements to include further details regarding the number of complaints, complaints not investigated and the grounds for decisions. Part 13 expands the Commissioner’s grounds not to investigate a complaint to include where it ‘has been’ dealt with by a recognised external dispute resolution scheme.

09 September 2024

Safety

The report by former CJ Robert French for the South Australian government on a Legal Examination of Proposed Age-Based Social Media Restrictions comments 

The impact of social media is global. It evolves with new technologies and new applications of existing technology. It provides a variety of means by which people can interact with each other using electronic devices including computers, tablets and iPhones. 

Social media can be beneficial, connecting people and their ideas and experiences, providing new and varied means of self-realisation, and providing opportunities for personal and creative self-expression. It can be educative and deliver community support services that may reduce some of the worst effects of social disadvantage including isolation and inequality. Social media is used for positive support and communication by many elements of the public, private and not-for-profit sectors. The Department of the Premier and Cabinet of South Australia states on its website that it ‘uses social media channels to distribute information to the community’.  In so doing, it reserves the right to remove various species of incoming content including abusive, harassing or threatening comments, replies or direct messages. The Commonwealth Department of Social Services states on its website that it ‘uses a range of social media channels to inform, engage, communicate with and learn from stakeholders.’ These examples could be multiplied. 

Social media can also be a channel for false and harmful content and a platform for bullying, exploitation and predation. It can be addictive. It can inflict harm on vulnerable members of society and particularly on children. While there are benefits to children learning how to navigate social media and how to use it to advantage there are significant risks. Harms identified by the Office of the eSafety Commissioner, established under the Online Safety Act, include: 
• Personal safety harms — e.g. direct and indirect threats or facilitation of violence; intimidation and harassment; viral challenges; 
• Health and wellbeing harms — self harm and suicide material; material promoting eating disorders and exposure to developmentally inappropriate conduct; 
• Harms to dignity — insulting and demeaning comments and trolling; 
• Privacy harms including doxing, sexual extortion and image-based abuse; 
• Harms involving discrimination, including hate speech, racism, misogyny, sexual harassment, homophobia and transphobia; 
• Harms involving perception and manipulation, including grooming of children. 

In recognising these harms, it must also be recognised that across the age ranges to 14 years and from 14 to 16, there will be a variety of developmental stages and vulnerabilities — some of which will depend upon the particular circumstances of the individual child. Risk assessments across these age ranges necessarily involve broad generalisations. 

The South Australian policy setting 

The risks and benefits connected with the use of social media by children are best identified by reference to the investigations and findings of those with expertise and responsibilities in the field. Where a protective regulatory balance should be struck however, is a normative or policy judgment for government informed by evidence and advice. 

The Government of the State of South Australia has taken the view that the most appropriate measure is to restrict access by children to social media generally. The Government has also taken the position that between the ages of 14 and 16, access should only be permitted with parental consent or its equivalent. It is against that background that the South Australian Government has commissioned this legal examination of mechanisms for giving effect to its policy setting. 

The Terms of Reference state in very broad language the policy setting within which this legal examination is conducted. They refer to the harmful effects on the wellbeing and mental health of children of the use of social media and the shortcomings of existing safeguards which are said not to be in step with community expectations. 

The Government acknowledges that the challenges associated with regulating social media services so as to protect children from harm are complex. This independent legal examination has been commissioned by the Premier into ‘how to ban children from having social media accounts’. A legislative pathway is proposed which would be within the legislative power of the South Australian Parliament. The Draft Bill is an indicative model of what legislation, to give effect to the Government’s policy, might look like. The Draft does not pretend to provide the definitive solution to the challenges of regulation and enforcement in this field — challenges which evolve with the dynamic landscape of social media and social media use. 

Existing Commonwealth coverage 

Specific online harmful content is the subject of regulatory powers relating to children conferred on the eSafety Commissioner under the Online Safety Act. It includes: 
• Cyber bullying of children; 
• Illegal and restricted online content, including child sexual exploitation material and pro-terror content and pornography; 
• Non-consensual sharing of intimate images; 
• Material promoting, inciting, instructing in, or depicting abhorrent violent conduct. 

The regulatory system presently in place under the Commonwealth legislation does not preclude access to social media by children generally. However, it does have mechanisms in place to prevent access by children to particular classes of content. 

The legislative powers of the State of South Australia 

It is within the legislative competency of the South Australian Parliament to enact a law imposing State-specific age restrictions on access to social media services. A territorial link to the State is necessary. That requirement is satisfied by the application of the Act to social media services provided or accessible to users within the State and the imposition of the proposed restrictions upon access to children domiciled within the State. As to the relationship with the Commonwealth legislation, the Online Safety Act of the Commonwealth expressly allows for the possibility of concurrent State laws. The legislative competency of the State is discussed in a separate chapter of this Report. 

The necessary territorial link does pose a practical challenge in that a social media service provider seeking to comply with the restrictions would, as part of that compliance, have to determine which existing and prospective users were domiciled within the State of South Australia. That is a complication which arises from the State-based character of the proposed legislation. 

Options for the establishment of a Regulator 

As appears from examination of the functions of the Commonwealth eSafety Commissioner, the regulatory task in relation to social media is complex and burdensome. It requires financial and human resources and an accumulation of expertise and experience that cannot be created quickly from a standing start. 

It is legally possible for South Australia to create its own bespoke regulator or to confer additional functions on an existing statutory officer such as the Children’s Commissioner or the Commissioner for Consumer Affairs. However, a timeline for getting a State Regulator fully functional and operating could be significant. It would be necessary to recruit people with the expertise and experience necessary to administer and enforce the legislation. There would inevitably be some duplication of resources with those provided to the Commonwealth regulator. 

An alternative approach would be to secure the agreement of the Commonwealth to confer a new State-based regulatory function upon the Commonwealth eSafety Commissioner. There is precedent for that approach in national regulatory schemes. Examples are section 13A of the Australian Energy Market Act 2004 and section 6AAA of the Therapeutic Goods Act 1989. 

If the State law does not impose any ‘duty’ on the Commonwealth regulator the consent of the Commonwealth Parliament set out in a law of the Commonwealth would suffice. If the State law purports to confer a duty upon the Commonwealth regulator, that must be a duty which falls within a Commonwealth head of power and is supported by a law of the Commonwealth. The legislative powers of the Commonwealth supporting its Online Safety Act would appear to be sufficient to support a Commonwealth law giving effect to the conferral under State law of regulatory duties to the eSafety Commissioner in relation to the restriction of access to social media services by children in South Australia. 

The choice of regulatory mechanism is a matter for the Government of South Australia and for the Commonwealth if it is decided to try to use the Commonwealth regulator. 

Cooperative federal considerations 

There is another important federal consideration in any design of a South Australian law. South Australia has taken the initiative in proposing a more protective approach to children’s access to social media than presently applies under Commonwealth law. That initiative could itself form the basis for the development of a more comprehensive child protective national scheme than presently exists. It is important that so far as possible South Australian legislation be compatible with the Commonwealth law and capable of providing a template or building block for a cooperative national scheme involving the Commonwealth and other States and Territories of Australia. To that end, the Draft Bill, so far as possible, uses terminology which is consistent with the Commonwealth scheme. A national scheme would remove the requirement for the proposed duty to be limited in its application to children domiciled in South Australia and the compliance and enforcement complications that go with that limitation. 

Legislative models in other jurisdictions 

Consideration has been given to online safety legislation in other jurisdictions. The European Union, Ireland, the United Kingdom, Canada, the United States and Singapore have been referred to. Consideration has also been given to legislative bans enacted in some States of the United States.  

The concepts of: (1) exemption of beneficial or very low risk social media services; and (2) the use of a reasonable steps criterion for compliance with a duty to prevent access, are derived from some of those examples. 

Reflection upon State statutes in the United States has led to the conclusion that their definitions of terms equivalent to ‘social media services’ are unduly complex, particularly in their lists of statutory exemptions which are likely to give rise to litigious debate. 

The model for South Australia 

The legislative proposal for South Australia uses a generic definition of ‘social media service’ based on that which appears in the Online Safety Act, but which is broader in order to pick up search engines and App stores. The proposal allows for named social media services or classes of social media service to be exempted from age-based restrictions on access by regulation or ministerial determination. The proposal would impose a duty of care on non-exempt social media service providers to prevent access to their services by minors within the restricted age ranges. It would be a defence to a breach of the duty that the provider had taken reasonable steps to comply with it. A separate systemic duty of care would positively require providers to take reasonable steps to prevent access within the restricted age ranges. The operation of these duties of care is elaborated below. 

Regulatory guidelines could set out minimum standards necessary for compliance with the ‘reasonable steps’ requirements in relation to the duties imposed on providers. The ultimate judgment of whether reasonable steps were being taken would be for a court on an enforcement action. There would, however, be ample room for collaboration between the Regulator, the industry, age assurance providers and other stakeholders to give a degree of certainty in this area, which involves the use of age assurance mechanisms and determinations of domicile. Despite their novelty in the Australian context, the proposed restrictions do not introduce a regulatory approach which is completely unknown to providers. Many major providers already impose an age-related ban on access by children under 13. The Commonwealth Act also imposes age-related bans on access to certain classes of material. 

Exempt Social media — sifting out the good from the bad and the ugly 

The term ‘social media service’, defined broadly as in the Draft Bill, has a very wide reach. It encompasses the ugly, the bad and the good. It is important that the child protective approach adopted by South Australia not throw out the baby with the bathwater. It should allow access to existing and new social media services which are beneficial and very low risk, e.g. dedicated educational services and eHealth services. To that end, the Draft Bill, while precluding access by children to a widely defined class of social media services, introduces the concept of an ‘exempt social media service’. It is proposed that ‘exempt social media services’ should be listed by name or category, determined by the Minister or the Regulator from time to time according to publicly available criteria. The purpose of leaving the question of what is an exempt social media service to be determined by ministerial or regulatory decision is to ensure flexibility in the face of a complex and rapidly changing technological landscape. It is important not to lock definitions into the legislation which would require amendment by Parliament from time to time to cover unanticipated developments. Some of the definitions in the United States State laws are quite elaborate in their lists of exceptions and are not recommended in the legislative model suggested in this Examination. 

Duties of care to prevent access 

The approach proposed in this Report seeks to give practical and workable effect to the policy of a ban on children’s access to social media. In so doing it does not seek to create a hard-edged prohibition supported by a civil penalty regime of first resort. Rather, it seeks to implement the policy by creating two statutory duties of care. The primary duty would be a duty on a social media service provider to prevent access to its social media service in South Australia by any child under the age of 14 and by any child between the ages of 14 and 16 without the consent of their parents or a person in place of their parents. The second duty would be a duty on a social media service provider to take all reasonable steps to prevent access to their social media service by any person under the age of 14 and by any person between the ages of 14 and 16 without parental consent. These are not just duties to protect children against unsafe online content. They are more comprehensive than the duties of care provided in some other jurisdictions. These are duties of care directed to preventing children from having access to non-exempt social media at all in the lower age range, or without parental consent if within the higher age range. Branding them as ‘duties of care’ rather than as a general prohibition emphasises the purpose of the legislation, which is to give effect to a policy protective of children. The children to whom the duties would apply would be children domiciled or resident in South Australia. The verification of domicile would raise an additional challenge for social media service providers and for the Regulator under the South Australian law. It would be necessary to exclude the application of the duty to children from other States or countries visiting South Australia, e.g. for holiday purposes. A diagram showing a possible decision tree for a provider complying with the duty is attached to this chapter. 
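In code form, the decision tree the Report attaches to this chapter might look something like the sketch below. The inputs are assumptions for illustration; how a provider actually establishes age, domicile and consent is precisely where the 'reasonable steps' standard discussed in the next section does its work.

```python
def may_grant_access(estimated_age: int,
                     parental_consent: bool,
                     domiciled_or_resident_in_sa: bool,
                     service_is_exempt: bool) -> bool:
    """Illustrative decision tree for a provider applying the proposed duties."""
    if service_is_exempt:
        return True                 # exempt social media services sit outside the restrictions
    if not domiciled_or_resident_in_sa:
        return True                 # the duties apply only to children domiciled or resident in SA
    if estimated_age < 14:
        return False                # access barred outright under 14
    if estimated_age < 16:
        return parental_consent     # 14 to 16: access only with parental (or equivalent) consent
    return True                     # 16 and over: outside the restricted age ranges
```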

A reasonable steps requirement 

The proposed duties are technologically agnostic. They do not specify the means which a provider must adopt in determining whether access to a user is to be permitted or denied. Plainly, compliance will require the use of age verification and estimation mechanisms. But the means currently available for verification and estimation are still in development. The hard fact is that there is no error-free means of determining the age of users of an account. There is also a complication in terms of compliance where, as in this case, the relevant duties apply only in relation to children living in South Australia. To know whether the duties apply to it, a social media service provider must know where the proposed user lives. Again, that verification, as to address or a location within the State, would have to be subject to a ‘reasonable steps’ standard. 

In order to encourage providers into a cooperative rather than adversarial stance with the regulator, it should be open to them to demonstrate, on an allegation of a breach of the first duty, that they have taken all reasonable steps, having regard to available technology, to discharge that duty. The second duty, supportive of the first and directly imposing the reasonable steps requirement, provides the means for a proactive enforcement regime and the development of regulatory guidance as to what constitutes reasonable steps to comply with the restricted access duty. Ongoing consultation with providers and other stakeholders would be essential. 

Modes of enforcement — individual complaints and regulatory inspection 

The proposed Act would provide different ways in which the duties of care are to be enforced. The first way, relevant to the primary duty, would be likely, for the most part, to depend upon complaints made to the regulator. It would arise where a child under the age of 14 is given access to the provider’s social media service, or where a child between the ages of 14 and 16 is given such access without parental consent. There is a limitation to this enforcement mechanism. Complaints-based enforcement is ad hoc and reactive. It would depend in many cases upon a parent becoming aware of a child’s use of a non-exempt social media service and reporting that use to the regulator. The problem with such a complaint-based process, apart from its ad hoc and reactive character, is that it may involve the child in enforcement proceedings. On the other hand, it would be open to the State to treat a breach of the primary duty as a statutory tort, actionable in damages where a child has suffered significant mental or physical harm as a result of the breach. The action could be taken by the child through a legal representative or perhaps by the Regulator on behalf of the child. 

As to the second duty, the question whether a provider has taken reasonable steps to prevent access, or access without parental consent, could be explored by a process of information gathering from the provider by the Regulator, supported by coercive request powers. Such requests could require the provision of information by a provider relating to its system for verification of domicile, age verification and verification of parental consent where applicable. The Regulator could issue guidance from time to time on what would constitute reasonable steps. Alternatively, minimum measures necessary to meet the reasonable steps requirement could be specified in a legislative instrument without thereby pre-empting a determination whether they are sufficient in a particular case to meet that requirement. 

Sanctions 

Sanctions for non-compliance with either duty of care could include the following:

(1) The issue of a remedial notice to the provider to institute a process for age, domicile and parental consent verification that meets the threshold of reasonable steps. 

(2) An enforceable undertaking. 

(3) An infringement notice imposing a specified compensation payment to be made which is capable of challenge in the court. The compensation payment would be applied to a special fund of the kind referred to below. 

(4) The institution of proceedings in court for the imposition of all or any of the following remedies: (i) a declaratory order; (ii) injunctive relief and/or corrective orders; (iii) a compensation order; (iv) a civil penalty. A compensation order would not have the character of a civil penalty. Its proceeds could be paid into a fund to promote research into and education about the effects of social media on children and the means of developing beneficial social media services for children. It could also provide for the discretionary payment of compensation, upon application, for the benefit of a child shown to have suffered harm as a result of exposure to a non-exempt social media service. 

(5) It would be appropriate to seek a civil penalty where there have been wilful, reckless or repeated breaches of one of the duties, non-compliance with a statutory requirement for the provision of information, or breach of an enforceable undertaking or an injunction. The amount of the civil penalty could be fixed by regulation. 

(6) A breach of the first duty of care could also constitute a statutory tort where mental harm has resulted. Proceedings for damages for breach of the statutory duty in such a case could be taken by a legal representative of the child harmed or by the Regulator on behalf of the child. 

Ongoing policy development 

A point of importance was made by the Chief Psychiatrist of South Australia about the need for ongoing proactive development of a knowledge base and policies to respond to the harms of social media services and to encourage the development of beneficial social media services or protected uses of existing social media services. An example of a protected use was given by Department for Education officers who spoke of teachers in South Australian schools using Facebook with their students where the Facebook account was controlled by the teacher and was not an account accessible by the students. 

The US Surgeon-General, in an Advisory Statement about Social Media and Youth Mental Health issued in 2023, observed that researchers would play a critical role in helping to gain a better understanding of the full impact of social media on mental health and wellbeing and informing policy, best practices and effective interventions. Means by which research could contribute included: 
• rigorous evaluation of social media’s impact; 
• the role of age, developmental stage, cohort processes and the in-person environment; 
• benefits and risks associated with specific social media designs, features and content; 
• long term effects on adults of social media use during childhood and adolescence; 
• the development and establishment of standardised definitions and measures for social media and mental health outcomes; 
• evaluation of best practices for healthy social media use; 
• enhancement of research coordination and collaboration. 

A statutory mechanism already in place in South Australia which might be able to take on the oversight of research, policy development and collaboration incidental to the statutory regime, is the Child Development Council, established under s 46 of the Children and Young People (Oversight and Advocacy Bodies) Act 2016 (SA). Although its primary function under s 55 of that Act is to prepare and maintain the Outcomes Framework for Children and Young People, it has additional functions which include advising and reporting to government on the effectiveness of the Outcomes Framework for Children and Young People for which the Act provides. It may also carry out: such other functions as may be assigned to the Council under this or any other Act or by the Minister. 

This would enable the Council to be given what would amount to policy development functions for the purposes of the further development of policy in relation to restriction of access to social media services and the development of exempt social media services. The Council already has a duty under s 55(3) in performing its functions to seek to work collaboratively with: (a) State authorities and Commonwealth agencies that have functions that are relevant to those of the Council; and (b) Relevant industry, professional and community groups and organisations. 

This could include engagement with providers in what might be a more flexible, high-level way than would be prudent for the regulator to undertake. Alternatively, some other body or authority may be established for that purpose. 

Conclusion — not a counsel of perfection 

Whatever regime is established by the South Australian Government, it will not be perfect. Effecting compliance across the industry will be challenging. Compliance will require age assurance measures, location measures and, where applicable, verification of parental consent. Enforcement measures may be complicated by the fact that many providers are companies located outside Australia. The legislation would apply to existing as well as prospective users of social media services. There will undoubtedly be workarounds by knowledgeable child users. However, the perfect should not be the enemy of the good. One non-legal beneficial effect of the law may be to arm parents with the proposition that it is the law, not them, that restricts access to social media for children in South Australia.

06 August 2024

Divides

'Digital disengagement and impacts on exclusion' (UK Parliamentary Office of Science and Technology, 2024) comments

 • Digital disengagement refers to people that have limited access to the internet or digital devices for motivational or personal reasons, rather than other forms of digital exclusion, such as access or affordability barriers. However, reasons behind digital exclusion can be inter-related. 
 
• Digital disengagement and other forms of digital exclusion are negatively associated with social, health, employment, and financial inequalities, and can compound existing inequalities. 
 
• In 2024, Ofcom estimated that 6% (1.7 million) of UK households did not have the internet at home. It is not clear how many are disengaged due to motivational reasons. However, multiple surveys indicate that lack of interest is the most cited reason for being offline. Other motivational reasons include fear of scams, or lack of confidence and skills. 
 
• Levels of digital engagement are on a spectrum. People may engage with some aspects of digital technology but not others, depending on factors associated with the task, device, confidence, or current life circumstances. 
 
• Stakeholders expressed policy considerations including refreshing the 2014 Digital Inclusion Strategy, improving accessibility, developing digital skills, empowering choice when using technology, and preserving non-digital services and solutions.

The Note goes on to state

Digital disengagement and impacts on exclusion 

Digital disengagement refers to motivational and personal reasons for not being online or using digital devices. It is closely linked to digital exclusion, which broadly refers to people who cannot fully participate in society because they have limited access to internet or digital devices, or are unable to use them. 

Issues associated with digital exclusion (Box 1) are well recognised in academic research and UK policy. Motivational barriers have not been researched in as much depth, or been as much of a focus in policy compared to ability, access and affordability. 

It can be challenging to separate issues associated with digital disengagement and other forms of digital exclusion. This POSTnote focuses on digital disengagement and references digital exclusion more broadly where relevant and inter-linked.

Key issues associated with digital exclusion 

Motivation: Motivational or personal barriers preventing people from engaging online include lack of interest, low confidence, mistrust in the internet, or challenges with using the technology due to inaccessibility (see potential reasons for disengagement). People may make a deliberate choice to not engage with some digital activities, such as owning a smartphone (see selective engagement). 

Ability: Those lacking basic digital skills are excluded by not being able to navigate the online environment.  Lack of skills can also affect motivation. 

Access: Digital exclusion can result from people lacking the infrastructure to access the internet, such as not having adequate broadband or devices to connect with. 

Affordability: Digital exclusion can occur if people cannot afford the costs of being online. Ofcom’s 2024 Adults’ Media Use and Attitudes report found that 17% of people who did not have the internet at home cited reasons relating to cost, for example the cost of broadband and devices.

What is digital disengagement? 

Disengagement may be an active choice, as people have differing views on the benefits of online engagement and preferences for non-digital options. It can alternatively be due to external factors that are beyond the person’s control. 

Digital disengagement does not necessarily mean that people are completely offline. Online engagement may be selective based on the type of task, for example (see selective engagement).

Survey data from Ofcom and others indicate that motivational barriers are the most common reason for being offline (Figure 1). 

In 2023, respondents were over four times more likely to specify lack of interest for non-use, compared to cost (Figure 1). 

It is not clear exactly what proportion of the public are disengaged, as people may have multiple reasons for being offline and disengagement is not clearly defined. Research on motivational factors is often based on small sample sizes, potentially as those offline are a smaller and harder to reach proportion of the public (see future research and representation). The proportion of the public that do not have internet at home has been decreasing, which indicates that the population still not regularly online are increasingly those that are not interested (Figure 1). 

26 July 2024

Platforms

'Platform Administrative Law A Research Agenda' by Moritz Schramm comments 

Scholarship of online platforms is at a crossroads. Everyone agrees that platforms must be reformed. Many agree that platforms should respect certain guarantees known primarily from public law, like transparency, accountability, and reason-giving. However, how to install public law-inspired structures like rights protection, review, accountability, deference, hierarchy and discretion, participation, etc. in hyper-capitalist organizations remains a mystery. This article proposes a new conceptual and, by extension, normative framework to analyze and improve platform reform: Platform Administrative Law (PAL). Thinking about platform power through the lens of PAL serves two functions. On the one hand, PAL describes the bureaucratic reality of digital domination by actors like Meta, X, Amazon, or Alibaba. PAL clears the view on the mélange of normative material and its infrastructural consequences governing the power relationship between platform and individual. It allows us to take stock of the distinctive norms, institutions, and infrastructural set-ups enabling and constraining platform power. In that sense, PAL originates, paradoxically, from private actors. On the other hand, PAL draws from 'classic' administrative law to offer normative guidance to incrementally infuse 'good administration' into platforms. Many challenges platforms face can be thought of as textbook examples of administrative law. Maintaining efficiency while paying attention to individual cases, acting proportionately despite resource constraints, acting in fundamental rights-sensitive fields, implementing external accountability feedback, maintaining coherence in rule enforcement, etc.: all this is administrative law. Thereby, PAL describes the imperfect and fragmented administrative regimes of platforms and draws inspiration from 'classic' administrative law for platforms. Consequently, PAL helps re-establish the supremacy of legitimate rules over technicity and profit in the context of platforms. 

'Power Plays in Global Internet Governance' (GigaNet: Global Internet Governance Academic Network, Annual Symposium 2015) by Madeline Carr comments 

The multi-stakeholder model of global Internet governance has emerged as the dominant approach to navigating the complex set of interests, agendas and implications of our increasing dependence on this technology. Protecting this model of global governance in this context has been referred to by the US and EU as ‘essential’ to the future of the Internet. Bringing together actors from the private sector, the public sector and also civil society, multi-stakeholder Internet governance is not only regarded by many as the best way to organise around this particular issue, it is also held up as a potential template for the management of other ‘post-state’ issues. However, as a consequence of its normative aspirations to representation and power sharing, the multi-stakeholder approach to global Internet governance has received little critical attention. This paper examines the issues of legitimacy and accountability with regard to the ‘rule-makers’ and ‘rule-takers’ in this model and finds that it can also function as a mechanism for the reinforcement of existing power dynamics.

12 September 2023

Lobbying

The Lobby Network: Big Tech's Web of Influence in the EU (Corporate Europe Observatory and LobbyControl, 2021) by Max Bank, Felix Duffy, Verena Leyendecker, Margarida Silva comments 

 As Big Tech’s market power has grown, so has its political clout. Just as the EU tries to rein in the most problematic aspects of Big Tech – from disinformation, targeted advertising to excessive market power – the digital giants are lobbying hard to shape new regulations. They are being given disproportionate access to policy-makers and their message is amplified by a wide network of think tanks and other third parties. Corporate Europe Observatory and LobbyControl profile Big Tech’s lobby firepower, given it is now the EU’s biggest lobby spending industry. 

The lobby firepower of Big Tech undermines democracy 

In the last two decades we have seen the rise of companies providing digital services. Big Tech firms have become all-pervasive, playing critical roles in our social interactions, in the way we access information, and in the way we consume. These firms not only strive to be dominant players in one market, but with their giant monopoly power and domination of online ecosystems, want to become the market itself. 

In her announcement of plans to shape the EU’s digital future, President of the European Commission von der Leyen declared: “I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident.” 

The current situation is quite the opposite. Tech firms like Google, Facebook, Amazon, Apple and Microsoft long ago consolidated their hold on their markets and came to dominate the top spots among the world’s biggest companies. 

A mere handful of companies determine the rules of online interaction and increasingly shape the way we live. The COVID-19 pandemic has only sped up the momentum for digitisation and the importance of these companies. Big Tech’s business model has received heavy criticism for its role in the spread of disinformation and the undermining of democratic processes, its reliance on the exploitation of personal data, and its immense market power and unfair market practices. 

Meanwhile, as the economic power of big digital companies has grown, so has their political power. In this report, we offer an overview of the tech industry’s lobbying fire-power with regard to the EU institutions, including who the big spenders are, what they want, and just how outsized their privileged access is. This is especially important given that EU policy-makers are currently seeking to regulate the digital market and its players via the Digital Services Act package. This EU initiative is made up of two components, the Digital Services Act and the Digital Markets Act, meant “to create a safer digital space in which the fundamental rights of all users of digital services are protected”, and “to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally”. 

We map for the first time the ‘universe’ of actors lobbying the EU’s digital economy, from Silicon Valley giants to Shenzhen’s contenders; from firms created online to those making the infrastructure that keeps the internet running; tech giants and newcomers. 

We found a wide yet deeply imbalanced ‘universe’:

  • with 612 companies, groups and business associations lobbying the EU’s digital economy policies. Together, they spend over € 97 million annually lobbying the EU institutions. This makes tech the biggest lobby sector in the EU by spending, ahead of pharma, fossil fuels, finance, or chemicals. 

  • in spite of the varied number of players, this universe is dominated by a handful of firms. Just ten companies are responsible for almost a third of the total tech lobby spend: Vodafone (€ 1,750,000), IBM (€ 1,750,000), QUALCOMM (€ 1,750,000), Intel (€ 1,750,000), Amazon (€ 2,750,000), Huawei (€ 3,000,000), Apple (€ 3,500,000), Microsoft (€ 5,250,000), Facebook (€ 5,550,000) and with the highest budget, Google (€ 5,750,000). 

  • out of all the companies lobbying the EU on digital policy, 20 per cent are US based, though this number is likely even higher. Less than 1 per cent have head offices in China or Hong Kong. This implies Chinese firms have so far not invested in EU lobbying quite as heavily as their US counterparts. 

  • These huge lobbying budgets have a significant impact on EU policy-makers, who find digital lobbyists knocking on their door on a regular basis. More than 140 lobbyists work for the largest ten digital firms day to day in Brussels and spend more than € 32 million on making their voice heard. 

  • Big Tech companies don’t just lobby on their own behalf; they also employ an extensive network of lobby groups, consultancies, and law firms representing their interests, not to mention a large number of think tanks and other groups financed by them. The business associations lobbying on behalf of Big Tech alone have a lobbying budget that far surpasses that of the bottom 75 per cent of the companies in the digital industry. Academic and Big Tech critic Shoshana Zuboff has argued that lobbying – alongside establishing relationships with elected politicians, a steady revolving door, and a campaign for cultural and academic influence – has acted as the fortification that has allowed a business model, built on violating people’s privacy and unfairly dominating the market, to flourish without being challenged. 

This is also the case at EU level. The aim of Big Tech and its intermediaries seems to be to make sure there are as few hard regulations as possible – for example those that tackle issues around privacy, disinformation, and market distortion – to preserve their profit margins and business model. If new rules can’t be blocked, then they aim to at least water them down. In recent years these firms have started embracing regulation in public, yet continue pushing back against it behind closed doors. There are some differences between what different tech firms want in terms of EU policy, but the desire to remain ‘unburdened’ by urgently needed regulations is shared by most of the large platforms. 

Big Tech’s deep pockets might also reflect the fact that this industry is rather new and emerging, and its home base is not the EU. Most of the big players come from the US. This has several consequences for the lobbying efforts of the industry in the EU. First of all, channels of influence are still in the process of being built. The ties to governments are not as close as, for instance, between the German Government and its national car industry. This, in addition to growing criticism of Big Tech’s business practices, can start explaining why the digital industry’s lobbying relies heavily on influencing public opinion and on using third-parties, such as think tanks and law and economic firms, as a tool for that purpose. 

The Digital Markets Act and the Digital Services Act – the two strands of the Digital Services Act package – are the EU’s first legislative attempt to tackle the overarching power of the tech giants, and the lobbying battle being waged over them shows the lobby might of the tech industry in practice. More than 270 meetings on these proposals have taken place, 75 per cent of them with industry lobbyists, most of them targeting Commissioners Vestager and Breton, who are responsible for the new rules. This lobby battle has now moved to the European Parliament and Council. In spite of the lack of transparency, we are starting to see Big Tech’s lobbying footprint in the EU capitals.
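
As a cross-check on the figures quoted above, the ten company budgets can simply be summed and compared with the report's € 97 million sector total. The sketch below is a minimal, illustrative Python check, not code from the report: the variable names are mine, and the rounded sector total is taken from the extract.

top_ten = {
    "Vodafone": 1_750_000, "IBM": 1_750_000, "QUALCOMM": 1_750_000, "Intel": 1_750_000,
    "Amazon": 2_750_000, "Huawei": 3_000_000, "Apple": 3_500_000,
    "Microsoft": 5_250_000, "Facebook": 5_550_000, "Google": 5_750_000,
}
sector_total = 97_000_000  # "over € 97 million annually" across all 612 lobbying organisations

top_ten_total = sum(top_ten.values())   # 32,800,000 – consistent with "more than € 32 million"
share = top_ten_total / sector_total    # about 0.34 – consistent with "almost a third"

print(f"Top ten spend: €{top_ten_total:,} ({share:.1%} of the sector total)")

On the extract's own numbers the ten largest spenders account for € 32.8 million, roughly 34 per cent of the € 97 million total.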

11 September 2023

Modding

'From Community Governance to Customer Service and Back Again: Re-Examining Pre-Web Models of Online Governance to Address Platforms’ Crisis of Legitimacy' by Ethan Zuckerman and Chand Rajendra-Nicolucci in (2023) Social Media and Society comments 

As online platforms grow, they find themselves increasingly trying to balance two competing priorities: individual rights and public health. This has coincided with the professionalization of platforms’ trust and safety operations—what we call the “customer service” model of online governance. As professional trust and safety teams attempt to balance individual rights and public health, platforms face a crisis of legitimacy, with decisions in the name of individual rights or public health scrutinized and criticized as corrupt, arbitrary, and irresponsible by stakeholders of all stripes. We review early accounts of online governance to consider whether the customer service model has obscured a promising earlier model where members of the affected community were significant, if not always primary, decision-makers. This community governance approach has deep roots in the academic computing community and has re-emerged in spaces like Reddit and special purpose social networks and in novel platform initiatives such as the Oversight Board and Community Notes. We argue that community governance could address persistent challenges of online governance, particularly online platforms’ crisis of legitimacy. In addition, we think community governance may offer valuable training in democratic participation for users. 

Since the earliest days of computing, people have used information technology to converse with one another. Four years before the internet, Noel Morris and Tom Van Vleck wrote both an electronic mail system and a real-time chat system for MIT’s Compatible Time-Sharing System (CTSS), allowing users who logged onto the single shared computer to leave messages for one another or send messages to another user’s terminal (Van Vleck, 2012). Within 3 years of the introduction of the internet, email became the primary use of a network initially established to let computer scientists run programs on remote machines (Sterling, 1993). France’s Minitel service, designed to give users access to an electronic telephone directory and the ability to make travel reservations online, quickly became dominated by chat services, particularly erotic chat (Tempest, 1989). People want to talk to one another and will find ways to do so as soon as they are technically capable of connecting to one another. Unfortunately, as soon as people are able to talk to one another, they are also able to harm each other. Spam has undermined the utility of email and largely destroyed Usenet, the dominant community platform of the academic internet in the 1980s and early 1990s. Harassment and hate speech have become facts of life for users of many online systems, particularly for women, people of color, and LGBTQIA+ people. People often behave differently online than they would offline (Suler, 2004) and the impetus for humans to harass each other via digital tools is at least as strong as the impulse to connect. 

The emergence of trust and safety as a professional discipline reflects the centrality of issues like content moderation, spam and fraud prevention, and efforts to combat child sexual abuse material (CSAM) to the operation of platforms that enable user-generated content and conversation. As Tarleton Gillespie (2018) notes in Custodians of the Internet, “Platforms are not platforms without moderation.” Recent efforts to recognize trust and safety as a profession, with the establishment of the Trust & Safety Professional Association in 2020 and the emergence of a Journal of Online Trust and Safety in 2021, are overdue, as the work of policing online spaces traces back at least to the 1980s, if not earlier. 

One danger of losing the early history of online governance is a narrowing of possible futures, making it seem as if the contemporary model for governing online spaces, where professionals make decisions about what behavior is acceptable, with little input from members of the community, is the way it’s always been done. We refer to this model as the “customer service” model and contrast it to earlier models of online governance in which community members were significant, if not always primary, decision-makers about the online spaces they were a part of. This article examines three paradigms of online governance that preceded the contemporary customer service model and suggests that varying degrees of community governance may be a viable and socially beneficial option for many online spaces. 

This article is far from an exhaustive history of early online governance or of the emergence of the customer service model, though both histories are needed. While there has been excellent work calling attention to the complexities of trust and safety (Gillespie, 2018; Gray & Suri, 2019), it has focused primarily on the “web 2.0” social media platforms that emerged in the mid-2000s—the shift toward the customer service model begins in the late 1980s and is cemented in place by the mid-1990s. This is also an opinionated and personal history, as one of the authors (Zuckerman) built the early content moderation department for Tripod.com, one of the web’s first user-generated content sites, from 1995 to 1999.

03 September 2023

IP Infringements

'The localization of IP infringements in the online environment: From Web 2.0 to Web 3.0 and the Metaverse' (WIPO Study, 2023) by Eleonora Rosati comments 

Over time, technological advancements have resulted in novel ways both to exploit content and to infringe rights – including intellectual property rights (IPRs) – vesting in them. Legislative instruments have consistently clarified that pre-existing rights continue to apply to new media, i.e., means to disseminate intangible assets, including in digital and online contexts. In terms of rights enforcement, however, the progressive dematerialization of content and dissemination modalities has given rise to challenges, including when it comes to determining where an alleged IPR infringement has been committed. 

The importance of such an exercise cannot be overstated: it is inter alia key to determining (i) whether the right at issue (e.g., a registered IPR) is enforceable at the outset, (ii) which law applies to the dispute at hand, as well as – in accordance with certain jurisdictional criteria – (iii) which courts are competent to adjudicate it. For example, determining that the relevant infringement has been committed in country A serves in turn to determine: (i) if the right at issue is enforceable at all, given that IPRs are territorial in nature. So, if the IPR in question is a national trademark, the infringement needs to be localized in the territory of the country where the right is registered; (ii) whether, e.g., country A’s law is applicable to the dispute at hand; and (iii) if, e.g., the courts in country A have jurisdiction to adjudicate the resulting dispute. 

This said, questions of applicable law and jurisdiction should not be conflated. Answering the former serves to ensure that a court does not have to apply more than one law, but rather only focus on the initial act of infringement to identify the law applicable to the proceedings. Vice versa, such a need to ensure that only one law is applicable does not exist in the context of jurisdiction rules, which frequently provide for more than one forum. 

The localization exercise described above has proved to be particularly challenging when the infringing activity is committed in a digital or online context. For infringements occurring in Web 2.0 situations, courts around the world have nevertheless progressively developed various approaches to localize the infringing activity, by considering the place where (a) the defendant initiated the infringing conduct (causal event criterion), (b) the infringing content may be accessed (accessibility criterion), and (c) the infringing conduct is targeted (targeting criterion). While none of these criteria is devoid of shortcomings, targeting has progressively gained traction in several jurisdictions around the world. Proof of targeting depends on a variety of factors, including language, currency, possibility of ordering products or services, relevant top-level domain, customer service, availability of an app in a national app store, etc. Overall, what is required to establish targeting is a substantial connection with a given territory. 

Another development is currently underway: it is the transition from the already interactive dimension of Web 2.0 to the even better integrated and more immersive reality of Web 3.0 (if not already Web 4.0!). It is expected that such a transition will be made possible by the rise of augmented reality, blockchain, cryptocurrencies, artificial intelligence, and non-fungible tokens for digital assets. In this sense, the progressive evolution of the metaverse will be pivotal. Even though the concept of metaverse has existed for over thirty years, it has recently been revamped. Thanks to the advent of the new technologies just mentioned, it is hoped that the “new” metaverse will be characterized by four main features: interoperability across networked platforms; immersive, three-dimensional user experience; real-time network access; and the spanning of the physical and virtual worlds. In all this, different metaverses have been developed already, which fall into two main categories: centralized and decentralized. The distinction is drawn based on whether the metaverse at issue is owned and ruled by a single entity, e.g., a company, or whether it is instead characterized by a dispersed network and decentralized ownership structure, e.g., a decentralized autonomous organization. 

While, as stated, it appears reasonable to consider the treatment of Web 2.0 situations as reasonably settled, the transition from Web 2.0 to Web 3.0 has the potential to pose new challenges to the interpretation and application of the criteria discussed above. The present study is concerned precisely with the legal treatment of such a transition. Specifically, this study seeks to answer the following questions: Can the same criteria and notions developed in relation to other dissemination media find application in the context of IPR infringements carried out through and within the metaverses? Does the distinction between centralized and decentralized metaverses have substantial implications insofar as the localization of IPR infringements is concerned? 

The IPRs considered are copyright, trademarks and designs. The analysis is limited to infringements committed outside of contractual relations and adopts an international and comparative perspective, without focusing on any specific jurisdiction. While examples from different legal systems are provided and reviewed as appropriate, by choosing such an approach it is hoped that a lens is offered through which the main questions at the heart of the present study may be answered in terms that are as broad and helpful as possible to different legal systems. Also of relevance to the question of enforceability of IPRs online and on the metaverse is the consideration of the subjects against whom claims may be brought and their legal basis: in this sense, the alleged IPR infringement that requires localizing may not only trigger direct/primary liability but also the liability of subjects other than the direct infringer, including information society service providers whose services are used to infringe. 

The study is structured as follows. Sections 1 and 2 detail the background to the present analysis, as well as its relevant objectives and approach. Section 3 addresses conflicts of laws issues. It reviews the relevant framework for the localization of IPR infringements in cross-border situations, having regard to international and regional instruments, as well as selected national experiences. This section further draws a distinction between unregistered and registered IPRs. Section 4 focuses specifically on digital and online situations and reviews academic and judicial discourse on localization approaches for the purpose of determining applicable law and, where relevant, jurisdiction. A discussion of the criteria based on causal event, targeting and accessibility – including their shortcomings – is also undertaken. Section 5 subsequently considers different types of subjects against whom infringement claims may be advanced, available remedies, and the type of resulting liability. Section 6 is specifically concerned with the different kinds of metaverse and determines whether the findings of the preceding sections may find satisfactory application in relation to this new medium, at least in principle. 

Insofar as the main questions presented above are concerned, the one asking whether the same criteria and notions developed in relation to other media may find application in the context of IPR infringements carried out through and within the metaverses is answered in the affirmative. It is further submitted that the distinction between centralized and decentralized metaverses – while of substantial relevance to the determination of enforcement options – may not have significant implications insofar as the localization of IPR infringements is concerned. 

Overall, this study offers as a main conclusion (Section 7) that, as things currently stand, the existing legal framework – as interpreted by courts in several jurisdictions in relation to Web 2.0 scenarios – appears to offer sufficiently robust guidance for the localization of IPR infringements, including those committed through the metaverse(s). All this is nevertheless accompanied by the caveat that substantial challenges might arise in terms of retrieving evidence that would serve to establish a sufficiently strong connecting factor with a given territory, for the purpose of both determining applicable law and jurisdiction. Furthermore, the diversity of remedies and enforcement options currently available across different jurisdictions begs the question whether the time has come for undertaking a more extensive harmonization of both aspects at the international and/or regional levels.

02 September 2023

Age Verification

The national Government response to the 'Roadmap for Age Verification' developed by the eSafety Commissioner (eSafety) states 

The Roadmap acquits a key recommendation in the February 2020 House of Representatives Standing Committee on Social Policy and Legal Affairs (the Committee) report, Protecting the Age of Innocence (the report), which recommended that the Australian Government direct and adequately resource the eSafety Commissioner to expeditiously develop and publish a roadmap for the implementation of a regime of mandatory age verification for online pornographic material. The Government response to the report, released in June 2021, supported the recommendation and noted that the Roadmap would be based on ‘detailed research as to if and how a mandatory age verification mechanism or similar could practically be achieved in Australia’. 

The Roadmap makes a number of recommendations for Government, reflecting the multifaceted response needed to address the harms associated with Australian children accessing pornography. 

This Government response addresses these recommendations, sets out the Government’s response to this issue more broadly and outlines where work is already underway. This includes work being undertaken by eSafety under the Online Safety Act 2021, noting that since the Roadmap was first recommended in February 2020, the Australian Government has delivered major regulatory reform to our online safety framework with the passage of the Online Safety Bill on 23 July 2021 with bipartisan support, and the commencement of the Online Safety Act on 23 January 2022. The Online Safety Act sets out a world-leading framework comprising complaints-based schemes to respond to individual pieces of content, mechanisms to require increased transparency around industry’s efforts to support user safety, and mandatory and enforceable industry codes to establish a baseline for what the digital industry needs to do to address restricted and seriously harmful content and activity, including online pornography.   

The Roadmap highlights concerning evidence about children’s widespread access to online pornography 

Pornography is legal in Australia and is regulated under the Online Safety Act. Research shows that most Australian adults have accessed online pornography, with a 2020 survey by the CSIRO finding that 60 per cent of adults had viewed pornography. 

However, pornography is harmful to children, who are not equipped to understand its contents and context, and they should be protected from exposure to it online. Concerningly, a 2017 survey by the Australian Institute of Family Studies found that 44 per cent of children aged 9 to 16 had been exposed to sexual images within the previous month. 

The Roadmap highlights findings from eSafety’s research with 16-18-year-olds, revealing that of those who had seen online pornography (75% of participants), almost half had first encountered it when they were 13, 14, or 15 years old. Places where they encountered this content varied from pornography websites (70%), social media feeds (35%), ads on social media (28%), social media messages (22%), group chats (17%), and social media private group/pages (17%). The Roadmap acknowledges that pornography is readily available through websites hosted offshore and also through a wide range of digital platforms accessed by children. 

The Roadmap finds an association between mainstream pornography and attitudes and behaviours which can contribute to gender-based violence. It identifies further potential harms including connections between online pornography and harmful sexual behaviours, and risky or unsafe sexual behaviours. 

The Roadmap finds age assurance technologies are immature, and present privacy, security, implementation and enforcement risks 

‘Age verification’ describes measures which could determine a person’s age to a high level of accuracy, such as by using official government identity documents. However, the Roadmap examines the use of broader ‘age assurance’ technologies which include measures that perform ‘age estimation’ functions. The Roadmap notes action already underway by industry to introduce and improve age assurance and finds that the market for age assurance products is immature, but developing. 

It is clear from the Roadmap that at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness and implementation issues. 

For age assurance to be effective, it must: • work reliably without circumvention; • be comprehensively implemented, including where pornography is hosted outside of Australia’s jurisdiction; and • balance privacy and security, without introducing risks to the personal information of adults who choose to access legal pornography. 

Age assurance technologies cannot yet meet all these requirements. While industry is taking steps to further develop these technologies, the Roadmap finds that the age assurance market is, at this time, immature. 

The Roadmap makes clear that a decision to mandate age assurance is not ready to be taken. 

Without the technology to support mandatory age verification being available in the near term, the Government will require industry to do more and will hold them to account. The Australian Government has always made clear that industry holds primary responsibility for the safety of Australian users on their services. It is unacceptable for services used by children to lack appropriate safeguards to keep them safe. While many platforms are taking active steps to protect children, including through the adoption of age assurance mechanisms, more can and should be done. The Government is committed to ensuring industry delivers on its responsibility of keeping Australians, particularly children, safe on their platforms. 

Government will require new industry codes to protect children 

The effective implementation of the Online Safety Act is a priority of the Albanese Government, including the creation of new and strengthened industry codes to keep Australians safe online. The industry codes outline steps the online industry must take to limit access or exposure to, and distribution and storage of certain types of harmful online content. The eSafety Commissioner can move to an enforceable industry standard if the codes developed by industry do not provide appropriate community safeguards. 

The codes are being developed in two phases, the first phase addressing ‘class 1’ content, which is content that would likely be refused classification in Australia and includes terrorism and child sexual exploitation material. The second phase of the industry codes will address ‘class 2’ content, which is content that is legal but not appropriate for children, such as pornography. 

The codes and standards can apply to eight key sections of the online industry, which are set out in the Online Safety Act: 
• social media services (e.g. Facebook, Instagram and TikTok); 
• relevant electronic services (e.g. services used for messaging, email, video communications, and online gaming services, including Gmail and WhatsApp); 
• designated internet services (e.g. websites and end-user online storage and sharing services including Dropbox and Google Drive); 
• internet search engine services (e.g. Google Search and Microsoft Bing); 
• app distribution services used to download apps (e.g. Apple iOS and Google Play stores); 
• hosting services (e.g. Amazon Web Services and NetDC); 
• internet carriage services (e.g. Telstra, iiNet, Optus, TPG Telecom and Aussie Broadband); and 
• manufacturers and suppliers of any equipment that connects to the internet, and those who maintain and install it (e.g. of modems, smart televisions, phones, tablets, smart home devices, e-readers etc). 

Phase 1 

Work on the first phase of codes commenced in early 2022, and on 11 April 2022 eSafety issued notices formally requesting the development of industry codes to address class 1 material. On 1 June 2023, the eSafety Commissioner agreed to register five of the eight codes that were drafted by industry. The eSafety Commissioner assessed these codes and found that they provide appropriate community safeguards in relation to creating and maintaining a safe online environment for end-users, empowering people to manage access and exposure to class 1 material and strengthen transparency of and accountability for class 1 material. 

The steps that industry must take under these codes include, for example: 
• requirement for providers under the Social Media Services Code, including Meta, TikTok and Twitter, to remove child sexual exploitation material and pro-terror material within 24 hours of it being identified and take enforcement action against those distributing such material, including terminating accounts and preventing the creation of further accounts; and 
• requirement for providers under the Internet Carriage Service Providers Code, including Telstra, iiNet and Optus, to ensure Australian end-users are advised on how to limit access to class 1 material by providing easily accessible information on filtering products, including through the Family Friendly Filter program, at or close to the time of sale. 

These registered codes will become enforceable by eSafety when they come into effect on 16 December 2023. 

The eSafety Commissioner requested that industry revise the code for Search Engine Services, to ensure it accounts for recent developments in the adoption of generative AI, and made the decision not to register the Relevant Electronic Services Code and Designated Internet Services Code. The eSafety Commissioner found that these two codes failed to provide appropriate community safeguards in relation to matters that are of substantial relevance to the community. For these sections of industry, eSafety will now move to develop mandatory and enforceable industry standards. The registered codes, including all of the steps industry are now required to take, are available at eSafety’s website: www.esafety.gov.au/industry/codes/register-online-industry-codes-standards. 

Phase 2 

The next phase of the industry codes process will address ‘class 2’ content, which is content that is legal, but not appropriate for children, such as pornography. 

In terms of the content of the code – which will be subject to a code development process – Section 138(3) of the Online Safety Act 2021 outlines examples of matters that may be dealt with by industry codes and industry standards, and includes: 
• procedures directed towards the achievement of the objective of ensuring that online accounts are not provided to children without the consent of a parent or responsible adult; 
• procedures directed towards the achievement of the objective of ensuring that customers have the option of subscribing to a filtered internet carriage service; 
• giving end-users information about the availability, use and appropriate application of online content filtering software; 
• providing end-users with access to technological solutions to help them limit access to class 1 material and class 2 material; 
• providing end-users with advice on how to limit access to class 1 material and class 2 material; 
• action to be taken to assist in the development and implementation of online content filtering technologies; and 
• giving parents and responsible adults information about how to supervise and control children’s access to material. 

In light of the importance of this work, the Minister for Communications has written to the eSafety Commissioner asking that work on the second tranche of codes commence as soon as practicable, following the completion of the first tranche of codes. The Government notes the Roadmap recommends a pilot of age assurance technologies. Given the anticipated scope of the class 2 industry codes, this process will inform any future Government decisions related to a pilot of age assurance technologies. The Government will await the outcomes of the class 2 industry codes process before deciding on a potential trial of age assurance technologies. 

Government will lift industry transparency 

The Government also notes that the Online Safety Act 2021 sets out Basic Online Safety Expectations (BOSE) for the digital industry and empowers the eSafety Commissioner to require industry to report on what it is doing to address these expectations. A core expectation, set out in section 46(1)(d) of the Online Safety Act 2021, is that providers ‘…will take reasonable steps to ensure that technological and other measures are in effect to prevent access by children to class 2 material provided on the service’. The Online Safety (Basic Online Safety Expectations) Determination 2022 also provides examples of ‘reasonable steps’ that industry could take to meet this expectation, which includes ‘implementing age assurance mechanisms.’ 

The Commissioner is able to require online services to report on how they are meeting the BOSE. Noting the independence of the eSafety Commissioner’s regulatory decision-making processes, the Government would welcome the further use of these powers and the transparency that they bring to industry efforts to improve safety for Australians, and to measure the effectiveness of industry codes. 

Government will ensure regulatory frameworks remain fit-for-purpose 

The Government has committed to bring forward the independent statutory review of the Online Safety Act, which will be completed in this term of government. With the online environment constantly changing, an early review will ensure Australia’s legislative framework remains responsive to online harms and that the eSafety Commissioner can continue to keep Australians safe from harm. The review of the Privacy Act 1988 (Privacy Act Review) also considered children’s particular vulnerability to online harms, and the Privacy Act Review Report made several proposals to increase privacy protections for children online. The Government is developing the response to the Report, which will set out the pathway for reforms. 

The Privacy Act Review Report proposes enshrining a principle that recognises the best interests of the child and recommends the introduction of a Children’s Online Privacy Code modelled on the United Kingdom’s Age Appropriate Design Code. It is recommended that a Children’s Online Privacy Code apply to online services that are likely to be accessed by children. The requirements of the code would assist entities by clarifying the principles-based requirements of the Privacy Act in more prescriptive terms and provide guidance on how the best interests of the child should be upheld in the design of online services. For example, assessing a child’s capacity to consent, limiting certain collections, uses and disclosures of children’s personal information, default privacy settings, enabling children to exercise privacy rights, and balancing parental controls with a child’s right to autonomy and privacy. 

The requirements of the Code could also address whether entities need to take reasonable steps to establish an individual’s age with a level of certainty that is appropriate to the risks, for example by implementing age assurance mechanisms. 

More support and resources for families 

While the Government and our online safety regulator will continue working with industry on this challenge, tools are already available to prevent children accessing pornography online. 

The Government supports the eSafety Commissioner’s work in developing practical advice for parents, carers, educators and the community about safety technologies. These products include online resources such as fact sheets, advice and referral information, and regular interactive webinars. These resources are freely available through the eSafety Commissioner’s website at: www.eSafety.gov.au. The Roadmap proposes the establishment of an Online Safety Tech Centre to support parents, carers and others to understand and apply safety technologies that work best for them. The Government has sought further advice from the eSafety Commissioner about this proposal to inform further consideration. 

The Roadmap also recommends that the Government: • fund eSafety to develop new, evidence-based resources about online pornography for educators, parents and children; and • develop industry guidance products and further work to identify barriers to the uptake of safety technologies such as internet filters and parental controls. The Government supports these recommendations. In the 2023-24 Budget the Government provided eSafety with an additional $132.1 million over four years to improve online safety, increasing base funding from $10.3 million to $42.5 million per year. This ongoing and indexed funding provides Australia’s online safety regulator with funding certainty, allowing long term operational planning, more resourcing for its regulatory processes, and to increase education and outreach. 

The eSafety Commissioner works closely with Communications Alliance – an industry body representing the communications sector – to provide the Family Friendly Filter program. Under this program, internet filtering products undergo rigorous independent testing for effectiveness, ease of use, configurability and availability of support prior to certification as a Family Friendly Filter. Filter providers must also agree to update their products as required by eSafety, for example where eSafety determines, following a complaint, that a specified site is prohibited under Australian law.