27 September 2024

Devices

In Secretary, Department of Health v Medtronic Australasia Pty Ltd [2024] FCA 1096 the Federal Court of Australia has ordered Medtronic Australasia Pty Ltd (Medtronic) to pay $22 million in penalties for unlawfully supplying 16,267 units of the Infuse Bone Graft Kit to 109 hospitals between 1 September 2015 and 31 January 2020. 

 The Court's judgment comes after the Therapeutic Goods Administration (TGA) commenced proceedings against Medtronic in August 2021. The Court also ordered that Medtronic pay $1 million as a contribution to the TGA’s legal costs. 

 The penalty is the largest ever imposed for contraventions of the Therapeutic Goods Act 1989 (Cth). 

Australian law generally requires that therapeutic goods be entered in the Australian Register of Therapeutic Goods (ARTG) before they can be lawfully supplied in Australia. Although Medtronic's Infuse Bone Graft Kit was entered in the ARTG for supply with a separately packaged spinal cage - the LT Cage - that entry did not allow the Kit to be supplied without the LT Cage. Medtronic nonetheless unlawfully supplied the Kit without the LT Cage.

The judgment notes

The parties have advanced different views as to whether the Kit was or was not a “medical device” within the meaning of the Act. Solely for the purposes of these proceedings, in order to enable the Court to resolve the matter on an agreed basis, the parties agree that the Court may proceed on the basis that the Kit was, during the Relevant Period, a “therapeutic good” (medicine). The parties agree that this does not reflect any broader agreed or admitted position as to whether the Kit is, or was during the Relevant Period, a medicine or medical device, nor any broader agreement or admitted position about the characterisation of the Kit that binds either party or otherwise affects the correct approach to any similar issue that might arise outside these proceedings.

Although, during the Relevant Period, there was significant clinical demand for the Kit by itself (including for use together with another cage), there was no real clinical demand for the Kit and Cage together. 

As the Kit alone was not entered in the ARTG, its supply without the Cage was a contravention of s 19D of the Act, which prohibits, relevantly, the supply in Australia of therapeutic goods for use in humans, unless the goods are registered goods or listed goods, or are otherwise subject to a relevant exemption, approval or authority. 

For the purposes of s 19D, there was no relevant exemption, approval or authority in force in the Relevant Period in respect of the Kit.

Further

While the Act prohibited the supply of the Kit without the Cage by Medtronic, whether or not the Kit was supplied with or without the Cage, the Act did not prohibit a health practitioner from using the Kit (without the Cage) in surgical procedures. The Secretary does not allege that health practitioners acted unlawfully by using the Kit in surgical procedures.

In contextualising the scale of the penalty the judgment states 

Medtronic continues to supply a substantial number of medical devices in the Australian market and is currently the sponsor of over 1,900 kinds of medical devices included in the ARTG, of which over 500 are Class III medical devices. 

Medtronic’s gross revenue each financial year during the period from FY21 to FY23 (noting Medtronic's financial years run 1 April – 31 May) was as follows: (a) Financial Year 2021: $1,041,563,000 (b) Financial Year 2022: $1,007,178,000 (c) Financial Year 2023: $1,065,471,000

Medtronic incurs substantial costs in deriving its gross revenue. After accounting for expenses, Medtronic’s profit (after income tax expense attributable to Medtronic) in the period from FY21 to FY23 was as follows: (a) Financial Year 2021: $55,298,000 (b) Financial Year 2022: $24,685,000 (c) Financial Year 2023: $49,111,000

Profit derived from the contraventions 

Medtronic generated a total of $77,187,176 in gross revenue from the 16,267 contravening supplies of the Kit. 

The gross revenue generated by the contravening supplies in each financial year during the Relevant Period was as follows: (a) Financial Year 2016: $7,406,000 (b) Financial Year 2017: $16,756,800 (c) Financial Year 2018: $17,090,272 (d) Financial Year 2019: $19,187,884 (e) Financial Year 2020: $16,746,220 

After accounting for costs (including the costs of goods sold, operating product costs and intercompany transfer pricing), the net revenue from the contravening supplies of the Kit is estimated to be $8,982,474 (excluding GST).
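As a back-of-the-envelope check, the annual figures quoted above reconcile exactly with the stated total of $77,187,176; the per-unit average below is my own calculation rather than a figure from the judgment:

# Annual gross revenue from the contravening supplies, as quoted from the judgment (FY2016-FY2020)
annual_gross = {
    "FY2016": 7_406_000,
    "FY2017": 16_756_800,
    "FY2018": 17_090_272,
    "FY2019": 19_187_884,
    "FY2020": 16_746_220,
}
total = sum(annual_gross.values())
print(f"${total:,}")                       # $77,187,176 - matches the total stated in the judgment
print(f"${total / 16_267:,.2f} per unit")  # roughly $4,745 per contravening supply (my own calculation)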

Rights

'Governmental influence over rights consciousness: public perceptions of the COVID-19 lockdown' by Simon Halliday, Andrew Jones, Jed Meers and Joe Tomlinson in (2024) Journal of Law and Society comments

 Legal consciousness has long been a major focus of enquiry within socio-legal studies – sufficiently so, indeed, that it may be difficult to frame it as a coherent field of enquiry. Examination of how the scholarship has developed over time reveals various underlying theoretical convictions and eclectic methodological approaches. Nonetheless, it is fair to say that a decent amount of the research has been critically informed. The puzzle of state law's hegemonic force – why people continue to turn to state law despite its failure to live up to its ideals – has concerned many critical scholars. Relatedly, there have been examinations of how some seek to resist law's power, albeit in ways that may ultimately sustain it. 

Some of the work on rights consciousness more specifically has similarly been critical in its orientation. Here, important insights have been offered about the ways in which law, envisaged as a solution to society's justice problems, falls short of achieving its potential. Legal rights may not fulfil their promise if, for example, the assertion of rights consciousness reinforces a sense of victimhood or requires the overcoming of significant obstacles, or if the implementation of rights policies in various non-legal settings alters the meaning of the rights in law. 

The starting point for these critical studies of rights consciousness tends to be people's sense of grievance. As Amy Blackstone et al. have put it, ‘[h]ow do individuals respond when they feel their rights have been violated? Do those who perceive a wrong simply tell the wrong-doer, do they tell others, or do they ignore it?’ Senses of grievance might concern experiences of offensive public speech, instances of abuse and harm, or a sense of discrimination in relation to ethnicity, gender, disability, or body shape. 

Critical scholarship in this vein has produced a large and valuable body of research. However, our suggestion in this article is that there is a dimension of the critical approach to rights consciousness that is largely missing from the field. Because the critical enquiry generally starts with people's sense of grievance, there tends to be an elision of a sense of injustice and rights consciousness, and that elision carries a risk: that we overlook situations where there is not necessarily the coincidence of rights consciousness and a sense of injustice. 

To put it in terms of William Felstiner et al.’s framework for analysing the antecedent stages of disputing, much of the critical work on rights consciousness focuses on the ‘claiming’ stage of the ‘naming, blaming, claiming’ sequence; it demonstrates the difficulty of claiming, notwithstanding someone's sense that a particular individual or organization is responsible for the unacceptable infringement of their rights. Our argument is that we might usefully shift our attention to the prior ‘blaming’ stage of the sequence: ‘the transformation of a perceived injurious experience into a grievance’, as Felstiner et al. put it.  The question of whether people do, in fact, regard a breach of rights as an unacceptable experience for which another party should be held responsible ought to be included as part of a broader critical enquiry around rights consciousness. To do so, however, we must keep rights consciousness conceptually distinct from a sense of grievance. 

There is good reason to hypothesize that a sense of rights infringement will not always coincide with a sense of grievance. Within political science, for example, empirical research suggests that, in certain situations, the public are willing to trade off civil liberties for security. Likewise, within legal theory, fundamental rights are only rarely seen as absolute; legal doctrine is premised on the idea of human rights frequently being qualified or limited in various ways. Legal consciousness research can build on these insights and broaden its perspective on rights consciousness by interrogating rather than presuming the relationship between perceptions of rights violations and a sense of injustice. Importantly, this would open up a new space for critical enquiry around rights consciousness.

This article proceeds as follows. In the next section, we briefly set out the nature of our research project in which we explored rights consciousness, examining whether and in what ways the United Kingdom (UK) public felt the COVID-19 pandemic lockdown to be a violation of their basic rights. We present our quantitative data on the relationship between people's rights consciousness and their sense of grievance around lockdown, showing that, while most people felt that lockdown was a violation of basic rights, they did not feel aggrieved about it. The section that follows interrogates the project's qualitative data to consider people's reasoning processes around rights violations and their acceptability. We then explore the determinants of rights consciousness during the pandemic, analysing our national survey data. Finally, the article discusses the significance of our findings for the critical study of rights consciousness.

PharmaCrime

The US Department of Justice has announced its 'First Criminal Drug Distribution Prosecutions Related to a Digital Health Company That Distributed Controlled Substances Via Telemedicine'. 

Ruthia He, the founder and CEO of California-based digital health company Done Global Inc., and clinical president David Brody were arrested in connection with their alleged participation in a scheme to distribute Adderall over the internet, conspire to commit health care fraud in connection with the submission of false and fraudulent claims for reimbursement for Adderall and other stimulants, and obstruct justice.

 The DOJ states 

 “As alleged, these defendants exploited the COVID-19 pandemic to develop and carry out a $100 million scheme to defraud taxpayers and provide easy access to Adderall and other stimulants for no legitimate medical purpose,” said Attorney General Merrick B. Garland. “Those seeking to profit from addiction by illegally distributing controlled substances over the internet should know that they cannot hide their crimes and that the Justice Department will hold them accountable.” ... 
 
“As alleged in the indictment, the defendants provided easy access to Adderall and other stimulants by exploiting telemedicine and spending millions on deceptive advertisements on social media. They generated over $100 million in revenue by arranging for the prescription of over 40 million pills,” said Principal Deputy Assistant Attorney General Nicole M. Argentieri, head of the Justice Department’s Criminal Division. “These charges are the Justice Department’s first criminal drug distribution prosecutions related to telemedicine prescribing through a digital health company. As these charges make clear, corporate executives who put profit over the health and safety of patients—including by using technological innovation—will be held to account.” 
 
According to court documents, He and Brody allegedly conspired with others to provide easy access to Adderall and other stimulants in exchange for payment of a monthly subscription fee. The indictment alleges that the conspiracy’s purpose was for the defendants to unlawfully enrich themselves by, among other things, increasing monthly subscription revenue and thus increasing the value of the company. Done allegedly arranged for the prescription of over 40 million pills of Adderall and other stimulants, and obtained over $100 million in revenue.
 
“The internet is a place of remarkable innovation, allowing its users to make innumerable types of transactions with greater ease. Such transactions, however, must always be legal,” said Deputy Chief of the Criminal Division Matthew Yelovich for the Northern District of California. “The indictment alleges that He and Brody used an internet-based infrastructure to illegally distribute drug sales and to conspire to commit health care fraud. This office will always prosecute health care fraud and illegal drug distribution on the internet as vigorously as we do traditional frauds and illegal drug distribution.” 
 
He and Brody allegedly obtained subscribers by targeting drug seekers and spending tens of millions of dollars on deceptive advertisements on social media networks. They also allegedly intentionally structured the Done platform to facilitate access to Adderall and other stimulants, including by limiting the information available to Done prescribers, instructing Done prescribers to prescribe Adderall and other stimulants even if the Done member did not qualify, and mandating that initial encounters would be under 30 minutes. To maximize profits, He allegedly put in place an “auto-refill” function that allowed Done subscribers to elect to have a message requesting a refill be auto-generated every month. He wrote that Done sought to “use the comp structure to dis-encourage follow-up” medical care by refusing to pay Done prescribers for any medical visits, telemedicine consultation, or time spent caring for patients after an initial consultation, and instead paying solely based on the number of patients who received prescriptions.
 
“The defendants in this case operated Done Global Inc., an online telehealth website that prescribed Adderall and other highly addictive medications to patients who bought a monthly subscription. The defendants allegedly preyed on Americans and put profits over patients by exploiting telemedicine rules that facilitated access to medications during the unprecedented COVID-19 public health emergency,” said DEA Administrator Anne Milgram. “Instead of properly addressing medical needs, the defendants allegedly made millions of dollars by pushing addictive medications. In many cases, Done Global prescribed ADHD medications when they were not medically necessary. In 2022 the FDA issued a notice of shortages in prescription stimulants, including Adderall. Any diversion of Adderall and other prescription stimulant pills to persons who have no medical need only exacerbates this shortage and hurts any American with a legitimate medical need for these drugs. The DEA will continue to hold accountable anyone, including company executives, that uses telehealth platforms to put profit above patient safety.” ... 
 
He and Brody allegedly persisted in the conspiracy even after being made aware that material was posted on online social networks about how to use Done to obtain easy access to Adderall and other stimulants, and that Done members had overdosed and died. They also allegedly concealed and disguised the conspiracy by making fraudulent representations to media outlets to forestall government investigations and action and induce third parties to continue doing business with Done. ... 
 
“Instead of prioritizing the health of their customers, He and Brody’s telemedicine company allegedly prioritized profits—more than $100 million worth—by fraudulently prescribing medications like Adderall and other stimulants,” said Chief Guy Ficco of IRS Criminal Investigation. “This led customers to addiction, abuse, and overdoses, which the company tried to conceal by making false representations to the media in order to deter oversight by government agencies.” 
 
He, Brody, and others also conspired to defraud pharmacies and Medicare, Medicaid, and the commercial insurers to cause the pharmacies to dispense Adderall and other stimulants to Done members in violation of their corresponding responsibility; Medicare, Medicaid, and the commercial insurers to pay for the cost of these drugs; and Done members to continue to pay subscription fees to Done. He and others allegedly made false and fraudulent representations about Done’s prescription policies and practices to induce the pharmacies to fill Done’s prescriptions. As a result, Medicare, Medicaid, and the commercial insurers paid in excess of approximately $14 million. 
 
The indictment also alleges that He and Brody conspired to obstruct justice after a grand jury subpoena was issued to another telehealth company and in anticipation of a subpoena being issued to Done, including by deleting documents and communications, using encrypted messaging platforms instead of company email, and ultimately failing to produce documents in response to a subpoena issued to Done by a federal grand jury. 
 
If convicted, He and Brody each face a maximum penalty of 20 years in prison on the conspiracy to distribute controlled substances and distribution of controlled substances counts.

26 September 2024

Junk

The FTC announces that it is taking action against DoNotPay, a US company that claimed to offer an AI service that was “the world’s first robot lawyer,” but unsurprisingly failed to live up to "its lofty claims that the service could substitute for the expertise of a human lawyer".

 According to the FTC’s complaint, DoNotPay promised that its service would allow consumers to “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time,” and that the company would “replace the $200-billion-dollar legal industry with artificial intelligence.” DoNotPay, however, could not deliver on these promises. The complaint alleges that the company did not conduct testing to determine whether its AI chatbot’s output was equal to the level of a human lawyer, and that the company itself did not hire or retain any attorneys. 

The complaint also alleges that DoNotPay offered a service that would check a small business website for hundreds of federal and state law violations based solely on the consumer’s email address. This feature purportedly would detect legal violations that, if unaddressed, would potentially cost a small business $125,000 in legal fees, but according to the complaint, this service was also not effective. 

DoNotPay has agreed to a proposed Commission order settling the charges against it. The settlement would require it to pay $193,000 and to provide a notice to consumers who subscribed to the service between 2021 and 2023, warning them about the limitations of law-related features on the service. The proposed order also will prohibit the company from making claims about its ability to substitute for any professional service without evidence to back it up.

The FTC  states

The DoNotPay Service is an online subscription service targeted to U.S. consumers seeking assistance with a range of commercial issues and legal issues involving American civil law. Respondent used the emergence of new technology like artificial intelligence (AI) as a marketing tool, positioning the DoNotPay Service as a cutting-edge solution for producing legal documents. Respondent described the Service as “the world’s first robot lawyer” and an “AI lawyer” capable of performing legal services such as drafting “ironclad” demand letters, contracts, complaints for small claims court, challenging speeding tickets, and appealing parking tickets. Through a chatbot, subscribers would submit prompts or queries to an AI “robot lawyer” that purportedly operated like a human lawyer, including by applying the relevant laws to subscribers’ particular legal and factual situations; relying on legal expertise and knowledge to avoid potential complications, such as statutes of limitations, compensation limits, and jurisdiction, when generating legal demand letters or initiating cases in small claims court; and detecting legal violations on subscribers’ business websites and providing advice about how to fix them. In fact, the DoNotPay Service was not designed to operate like a human lawyer in these respects. 

The DoNotPay Service 

5. Respondent has offered a General Membership subscription for the DoNotPay Service since 2019. Although the cost of a General Membership has fluctuated since at least 2020, at times relevant to this Complaint, Respondent charged subscribers $36 every two months. 

6. Respondent has offered a Small Business Protection Plan since 2021. Although the cost of the Small Business Protection Plan has fluctuated, at times relevant to this Complaint, Respondent charged subscribers $49.99 per month. 

7. The DoNotPay Service has offered features designed to assist subscribers with consumer issues, including obtaining college fee waivers, creating passport photos, changing mailing addresses, claiming rebates, deleting accounts, donating plasma for cash, and finding discounts. At times relevant to this Complaint, the Service also offered features designed to assist subscribers with legal issues, including but not necessarily limited to, breach of contract demand letters, defamation cease-and-desist letters, divorce settlement agreements, restraining orders, insurance claims, releases of liability, revocable living trusts, lawsuits in small claims court, challenging speeding tickets, and appealing parking tickets. ... 

DoNotPay did not test whether the Service’s law-related features operated like a human lawyer. DoNotPay has developed the Service based on technologies that included a natural language processing model for recognizing statistical relationships between words, chatbot software for conversing with users, and an Application Programming Interface (“API”) with OpenAI’s ChatGPT. None of the Service’s technologies has been trained on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns. DoNotPay employees have not tested the quality and accuracy of the legal documents and advice generated by most of the Service’s law-related features. DoNotPay has not employed attorneys and has not retained attorneys, let alone attorneys with the relevant legal expertise, to test the quality and accuracy of the Service’s law-related features.

21. DoNotPay subscribers who used the law-related features have complained that the Service did not ask them to submit information relevant to their breach of contract claims, failed to consider important legal issues, and generated legal documents that were not fit for use. 

Website Legal Diagnostics Feature 

22. DoNotPay has promoted a website diagnostic feature of the Service that purportedly would check a consumer’s small business website for hundreds of federal and state law violations based solely on the consumer’s email address. Compl. Exh. K. This feature purportedly would detect legal violations that, if unaddressed, would potentially cost a consumer $125,000 in legal fees. 

DoNotPay’s website diagnostic tool did not, in fact, analyze a consumer’s small business website for hundreds of federal and state law violations based solely on an email address.

BadChat

The Office of the Victorian Information Commissioner report on OVIC's Investigation into the use of ChatGPT by a Child Protection worker comments 

In December 2023, the Department of Families, Fairness and Housing (DFFH) reported a privacy incident to the Office of the Victorian Information Commissioner (OVIC), explaining that a Child Protection worker (CPW1) had used ChatGPT when drafting a Protection Application Report (PA Report). The report had been submitted to the Children’s Court for a case concerning a young child whose parents had been charged in relation to sexual offences.

PA reports are essential in protecting vulnerable children who require court ordered protective intervention to ensure their safety, needs and rights. These reports contain Child Protection workers’ assessment of the risks and needs of the child, and of the parents’ capacity to provide for the child’s safety and development. 

Despite its popularity, there are a range of privacy risks associated with the use of generative artificial intelligence (GenAI) tools such as ChatGPT. Most relevant in the present circumstances are risks related to inaccurate personal information and unauthorised disclosure of personal information. After conducting preliminary inquiries with DFFH, the Privacy and Data Protection Deputy Commissioner commenced an investigation under section 8C(2)(e) of the Privacy and Data Protection (PDP) Act with a view to deciding whether to issue a compliance notice to DFFH under section 78 of that Act. OVIC may issue a compliance notice where it determines that: a. an organisation has contravened one or more of the Information Privacy Principles (IPPs); b. the contravention is serious, repeated or flagrant; and c. the organisation should be required to take specified actions within a specified timeframe to ensure compliance with the IPPs. 

Findings 

OVIC’s investigation confirmed DFFH’s initial findings – that CPW1 used ChatGPT in drafting the PA Report and input personal information in doing so. 

There were a range of indicators of ChatGPT usage throughout the report, relating to both the analysis and the language used in the report. These included use of language not commensurate with employee training and Child Protection guidelines, as well as inappropriate sentence structure. More significantly, parts of the report included personal information that was not accurate. Of particular concern, the report described a child’s doll – which was reported to Child Protection as having been used by the child’s father for sexual purposes – as a notable strength of the parents’ efforts to support the child’s development needs with “age-appropriate toys”. 

The use of ChatGPT therefore had the effect of downplaying the severity of the actual or potential harm to the child, with the potential to impact decisions about the child’s care. Fortunately, the deficiencies in the report did not ultimately change the decision making of either Child Protection or the Court in relation to the child. 

By entering personal and sensitive information about the mother, father, carer, and child into ChatGPT, CPW1 also disclosed this information to OpenAI (the company which operates ChatGPT). This unauthorised disclosure released the information from the control of DFFH with OpenAI being able to determine any further uses or disclosures of it. 

While the focus of the investigation was on the PA Report incident, OVIC also considered other potential uses of ChatGPT by CPW1 and their broader team, as well as examining the general usage of ChatGPT across DFFH. This revealed that:

• A DFFH internal review into all child protection cases handled by CPW1’s broader work unit over a one-year period identified 100 cases with indicators that ChatGPT may have been used to draft child protection related documents.

• Within the period of July to December 2023, nearly 900 employees across DFFH had accessed the ChatGPT website, representing almost 13 per cent of its workforce. 

Contravention of the IPPs 

While the PA Report incident may have involved the contravention of multiple IPPs, OVIC’s investigation specifically considered DFFH’s management of the risks associated with the use of ChatGPT through the lens of two IPPs:

• IPP 3.1 – which requires organisations to take reasonable steps to make sure that the personal information it collects, uses or discloses is accurate, complete and up to date. 

• IPP 4.1 – which requires organisations to take reasonable steps to protect the personal information it holds from misuse and loss and from unauthorised access, modification or disclosure. 

DFFH submitted to OVIC’s investigation that it had a range of controls in place at the time of the PA Report incident in the form of existing policies, procedures, and training materials (such as its Acceptable Use of Technology Policy and eLearning modules on privacy, security and human rights). However, OVIC found that these controls were far from sufficient to mitigate the privacy risks associated with the use of ChatGPT in child protection matters. It could not be expected that staff would gain an understanding of how to appropriately use novel GenAI tools like ChatGPT from these general guidance materials. 

There was no evidence that, by the time of the PA Report incident, DFFH had made any other attempts to educate or train staff about how GenAI tools work, and the privacy risks associated with them. Additionally, there were no departmental rules in place about when and how these tools should or should not be used. Nor were there any technical controls to restrict access to tools like ChatGPT. Essentially, DFFH had no controls targeted at addressing specific privacy risks associated with ChatGPT and GenAI tools more generally. The Deputy Commissioner therefore found that DFFH contravened both IPP 3.1 and IPP 4.1 and determined that the contraventions were “serious” for the purposes of section 78(1)(b)(i) of the PDP Act. 

Issuing of a compliance notice 

The decision on whether to issue a compliance notice required OVIC to look at the present circumstances and consider whether DFFH currently has reasonable controls in place to prevent similar breaches of IPP 3.1 and IPP 4.1. 

Since the PA Report incident, DFFH has released specific Generative Artificial Intelligence Guidance to “help employees understand the risks, limitations and opportunities of using GenAI tools such as ChatGPT”. It has also promoted this guidance through awareness raising activities. 

While the content of this guidance is broadly fit for purpose, it must be noted that DFFH has almost no visibility on how GenAI tools are being used by staff. Despite the extent of use of GenAI tools across DFFH, it has no way of ascertaining whether personal information is being entered into these tools and how GenAI-generated content is being applied. 

In these circumstances, the controls that DFFH has in place are insufficient to mitigate the risks that using GenAI tools will result in inaccurate personal information or in the unauthorised disclosure of personal information. This is particularly the case in child protection matters, where the risks of harm from using GenAI tools are too great to be managed by policy and guidance alone. Given this, OVIC considers that a major gap in DFFH’s controls is the use of technical solutions to manage employee access to GenAI tools. Specifically, the Deputy Commissioner considers that ChatGPT and similar GenAI tools should be prohibited from being used by Child Protection employees. 

OVIC therefore issued a compliance notice requiring that DFFH must take the following specified actions: 

1. Issue a direction to Child Protection staff setting out that they are not to use any web-based or external Application Programming Interface (API)-based GenAI text tools (such as ChatGPT) as part of their official duties. This direction must be issued by 24 September 2024.  

2. Implement and maintain Internet Protocol blocking and/or Domain Name Server blocking to prevent Child Protection staff from using the following web-based or external API-based GenAI text tools: ChatGPT; ChatSonic; Claude; Copy.AI; Meta AI; Grammarly; HuggingChat; Jasper; NeuroFlash; Poe; ScribeHow; QuillBot; Wordtune; Gemini; and Copilot. The list does not incorporate GenAI tools that are included as features within commonly used search engines.  

3. Implement and maintain a program to regularly scan for web-based or external API-based GenAI text tools which emerge that are similar to those specified in Action 2 – to enable the effective blocking of access for Child Protection staff. This action must be implemented by 5 November 2024 and maintained until 5 November 2026. 

4. Implement and maintain controls to prevent Child Protection staff from using Microsoft365 Copilot. This action must be implemented by 5 November 2024 and maintained until 5 November 2026.

5. Provide notification to OVIC upon the implementation of each of Specified Actions 1 – 4 explaining the steps taken to implement the respective Specified Actions. 

6. Provide a report to OVIC on its monitoring of the efficacy of Specified Actions listed 1 – 4 on 3 March 2025; 3 September 2025; 3 March 2026; and 3 September 2026. 

DFFH response to the investigation 

OVIC welcomes DFFH’s response to this report’s findings and conclusions, as shown at Annexure B. In summary, DFFH accepts the finding that there was a breach of IPPs 3.1 and 4.1 and commits to addressing the actions specified in the Compliance Notice within the required timeframes. However, in its response DFFH contends that the report “did not find that any staff had used GenAI to generate content for sensitive work matters”. In fact, the report presents the opposite – the Deputy Commissioner found on the balance of probabilities that CPW1 used ChatGPT to generate content which was used in a very sensitive work matter – the drafting of the PA Report which was submitted to the Children’s Court for a child protection case.
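As a side note on the technical controls required by the compliance notice: Specified Action 2 contemplates Domain Name Server blocking of the listed GenAI tools. A minimal sketch of that kind of domain-based check, written in Python with hostnames that are my own assumptions rather than anything specified in the notice, might look like this:

# Illustrative only: the hostnames below are assumed examples, not domains named in the notice.
BLOCKED_DOMAINS = {
    "chat.openai.com",        # ChatGPT (assumed hostname)
    "claude.ai",              # Claude (assumed hostname)
    "gemini.google.com",      # Gemini (assumed hostname)
    "copilot.microsoft.com",  # Copilot (assumed hostname)
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname, or any parent domain of it, is on the blocklist."""
    parts = hostname.lower().rstrip(".").split(".")
    # Check the hostname itself and each parent domain, e.g. api.claude.ai -> claude.ai -> ai
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

for host in ["chat.openai.com", "api.claude.ai", "www.example.com"]:
    print(host, "blocked" if is_blocked(host) else "allowed")

In practice a rule of this kind would sit in DNS filtering or proxy infrastructure rather than in application code; the sketch is only meant to show the shape of a domain blocklist check.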

AI and pharma

'AI Renaissance: Pharmaceuticals and Diagnostic Medicine' by Ty J Feeney and Michael S Sinha in The George Washington Journal of Law and Technology (forthcoming 2025) comments 

The explosive growth of Artificial Intelligence (AI) in the modern era has led to significant advancements in the world of medicine. In drug discovery, AI technology is used to classify proteins as drug targets or non-targets for specific diseases, more accurately interpret and describe pharmacology in a quantitative fashion, and predict protein structures based on only a protein sequence for input. AI methods are used in drug development to generate predictive models for drug screening purposes, refine and modify candidate structures of drugs to optimize compounds, and predict a drug’s physiochemical properties, bioactivity, and toxicity. In the medical diagnostics space, the advancement of AI technology in colonoscopy, percutaneous coronary intervention (PCI), acute stroke and intracranial hemorrhage (ICH), vascular surgery, and ophthalmology may all offer increased efficacy as compared to traditional patient-care techniques.

As use of AI in pharmaceutical development processes and diagnostic medicine increases, the rapidly growing technology still has substantial barriers to overcome. The combination of AI with other novel technologies (e.g., nanotechnology) is anticipated to provide solutions to problems in drug development, model selection, drug screening, and even in clinical trials. Advancements in AI-enabled imaging analysis will be increasingly used in the fields of radiology and pathology, bringing with it increased efficiency as compared to traditional patient-care techniques. Challenges in data representation, data labeling, small sample sizes, data privacy, ethical concerns, and interpretation of models present barriers for AI developers and interested clinicians to overcome when further developing AI technology in the pharmaceutical and medical diagnostics industries. 

As governments consider the implications of an AI-enabled world, it will be crucial for the United States government to develop comprehensive legislation and regulations for the increasingly widespread technology. As concern from American citizens, corporate leaders, and government officials became palpable, President Joseph R. Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence stands out as a substantial starting point for anticipated legislation. Debate as to how AI should be regulated points to a logical conclusion: Congress must create a new entity within the federal government with the sole purpose of regulating AI technology, developers, and sellers. This newly created entity must consider appropriate regulatory schemes, the need for interaction between all interested agencies and industries, and proper enforcement techniques such as licensure, monitoring, and legal penalties. 

25 September 2024

Workplace Privacy

Victoria's Independent Broad-based Anti-corruption Commission (IBAC), in its special report on Operation Turton, identified 'a problematic workplace culture at the then Metropolitan Fire Brigade (MFB) that led to repeated instances where employees misused sensitive information to advance personal and industrial interests'.

 IBAC states 

Operation Turton investigated allegations of unauthorised access and disclosure of information by employees of the MFB. The investigation was prompted by allegations that an MFB network administrator had accessed the email accounts of MFB executives without authorisation. The investigation commenced in January 2019 and concluded in June 2021. The fire services are an essential part of Victoria’s emergency response and management sector and, with its firefighters, perform a crucial role in keeping the community safe. To do this, these agencies must operate efficiently, effectively, and free from undue influence. Over the past two decades, issues within Victorian fire services have been documented in several reports, inquiries, and investigations. These issues have included instances of misconduct as well as more systemic cultural and workplace issues identified across the former Metropolitan Fire Brigade (MFB), the Country Fire Authority (CFA), their boards of management and the respective workforces. 

In June 2018, MFB notified the Independent Broad-based Anti-corruption Commission (IBAC) of allegations that an MFB Network Administrator, Stephan Trakas, had accessed the email accounts of MFB executives without authority. IBAC conducted preliminary inquiries and in January 2019, determined to progress this matter to an investigation, Operation Turton, under section 60(1)(b) of the Independent Broad-based Anti-corruption Commission Act 2011 (Vic) (IBAC Act). 

IBAC concluded Operation Turton in June 2021. Operation Turton investigated allegations of unauthorised access and disclosure of information by some employees of the then MFB. IBAC found deficiencies in information and data security practices and processes and instances of individual employees who were motivated by personal and industrial interests. 

IBAC identified five separate incidents where MFB information was accessed or disclosed without authorisation, with three incidents involving MFB employees from the Information and Communications Services business area (ICS). The impact of the conduct varied but included breaches of privacy, risks to the integrity of investigations and impeding the efficient operation of MFB. 

It appears these incidents were largely driven by a desire to further the interests of the Victorian Branch of the United Firefighters Union (UFU) or its Secretary, Peter Marshall. It was clear these incidents were facilitated by a workplace culture where employees did not trust management and did not believe them to be acting in the best interests of the organisation or its employees. 

In relation to these specific incidents, IBAC heard evidence that some employees were sharing MFB information directly with the Union without authority or the awareness of MFB management. One factor in the unauthorised disclosures to the Union was some employees’ belief that eventually the Union would be able to access this information through legitimate means. 

Employees have the right to be unionised and have access to union representation, and unions have rights to lawfully enter workplaces and to organise and represent employees. However, IBAC found that a particular clause in the industrial agreement between MFB and the UFU, often referred to as ‘consult and agree’, gave the UFU a significant level of influence over the operations of MFB. The clause impaired MFB’s governance and ability to operate effectively and efficiently, giving rise to a misconduct and corruption vulnerability within the organisation.

The incidents of unauthorised information disclosure and the broader industrial environment suggest a culture where MFB could not operate effectively and independently of the Union. ... 

Operation Turton highlights how a problematic culture within MFB, information security vulnerabilities and an industrial environment that impaired MFB’s ability to address these issues contributed to an environment where information misuse appeared commonplace. 

Over the years IBAC has routinely highlighted corruption risks associated with unauthorised information access and disclosure. 

Operation Turton highlights how misuse of information can enable further misconduct and can be used to advance personal and industrial interests. The investigation emphasises the importance of a positive information security culture, where governance, information security, personnel security, information communications technology security and physical security are appropriately designed to protect against information misuse. On 1 July 2020, MFB employees and approximately 1400 career firefighters from the CFA were merged into a new agency, Fire Rescue Victoria (FRV). In addition to its employees, MFB’s systems, policies and procedures were transitioned into FRV, creating a risk that the deficiencies identified by IBAC through Operation Turton would continue. 

While FRV provided an opportunity for a fresh start, it employs the same workforce as MFB, albeit with an altered executive and oversight structure. Therefore, the risks identified in Operation Turton continue. Accordingly, IBAC is making recommendations (detailed in section 6.2) to FRV to address long-standing and systemic corruption risks to improve workplace culture and information security. It is hoped the management of FRV will continue to work with its workforce to strengthen its ICT systems and processes and to address the structural and cultural issues identified in Operation Turton. ... 

Section 159 recommendations

IBAC makes the following recommendations (pursuant to section 159(1) of the IBAC Act):

Recommendation 1 

Fire Rescue Victoria develops clear policies and procedures regarding the matters that may be the subject of consultation with employees and their representatives at the Consultation Committee, and in what circumstances Fire Rescue Victoria information may be disclosed to employees and their representatives to inform that consultation. 

Recommendation 2 

Fire Rescue Victoria addresses the information and communication technology security vulnerabilities and risks identified in Operation Turton by: (a) actioning the consolidated findings of the audit and reviews conducted in this area since 2018 (b) engaging an appropriately qualified independent person to review information security infrastructure, policy and procedures to identify any remaining deficiencies against the Victorian Protective Data Security Standards and Framework or any other issues (c) consulting with the Office of the Victorian Information Commissioner on the adequacy of its information security in line with the Privacy and Data Protection Act 2014 (Vic), including how it is addressing any shortfalls identified in the review recommended above. To support and inform this consultation, FRV must provide the Office of the Victorian Information Commissioner with the full final report of the independent person referred to in Recommendation 2(b). 

Recommendation 3 

Fire Rescue Victoria reviews and strengthens its policies and procedures for employees on how to appropriately share information with their unions in line with the enterprise bargaining agreements, the Privacy and Data Protection Act 2014 (Vic) and the Victorian public sector Code of Conduct. Alongside these policies being appropriately enforced, they should also clearly state that non-compliance could lead to disciplinary action being taken, termination of employment or constitute a criminal offence.

Recommendation 4 

Fire Rescue Victoria conducts a review of its internal complaint processes, including an anonymous survey of employees on these processes and employees’ willingness to report improper conduct, and implements any recommendations arising from that review to ensure: (a) Fire Rescue Victoria employees understand the importance of reporting suspected corrupt or improper conduct and how they can report such matters (b) Fire Rescue Victoria employees understand how they will be supported and protected if they make a report. 

IBAC requests that Fire Rescue Victoria provide a progress report on the action taken in response to Recommendations 1 to 4 in six months and a full report on its outcomes within 12 months.

Regulatory Capture

'How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance' by Kevin Wei, Carson Ezell, Nick Gabrieli and Chinmay Deshpande comments 

Industry actors in the United States have gained extensive influence in conversations about the regulation of general-purpose artificial intelligence (AI) systems. This article examines the ways in which industry influence in AI policy can result in policy outcomes that are detrimental to the public interest, i.e., scenarios of “regulatory capture.” First, we provide a framework for understanding regulatory capture. Then, we report the results from 17 expert interviews identifying what policy outcomes could constitute capture in AI policy and how industry actors (e.g., AI companies, trade associations) currently influence AI policy. We conclude with suggestions for how capture might be mitigated or prevented. 

In accordance with prior work, we define “regulatory capture” as situations in which:

1. A policy outcome contravenes the public interest. These policy outcomes are characterized by regulatory regimes that prioritize private over public welfare and that could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems. Potential outcomes can include changes to policy, enforcement of policy, or governance structures that develop or enforce policy. 

2. Industry actors exert influence on policymakers through particular mechanisms to achieve that policy outcome. We identify 15 mechanisms through which industry actors can influence policy. These mechanisms include advocacy, revolving door (employees shuttling between industry and government), agenda-setting, cultural capture, and other mechanisms as defined in Table 0. Policy outcomes that arise absent industry influence—even those which may benefit industry—do not reflect capture.

To contextualize these outcomes and mechanisms to AI policy, we interview 17 AI policy experts across academia, government, and civil society. We seek to identify possible outcomes of capture in AI policy as well as the ways that AI industry actors are currently exerting influence to achieve those outcomes. 

With respect to potential captured outcomes in AI policy, experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals above others. 

Experts most commonly identified that AI industry actors use the following mechanisms to exert policy influence:

• Agenda-setting (15 of 17 interviews): Interviewees expressed that industry actors advance anti-regulation narratives and are able to steer policy conversations toward or away from particular problems posed by AI. These actors, including AI companies, are also able to set default standards, measurement metrics, and regulatory approaches that fail to reflect public interest goals. 

• Advocacy (13): Interviewees were concerned with AI companies’ and trade associations’ advocacy activities targeted at legislators. 

• Academic capture (10): Interviewees identified ways that industry actors can direct research agendas or promote particular researchers, which could in turn influence policymakers. 

• Information management (9): Interviewees indicated that industry actors have large information asymmetries over government actors and are able to shape policy narratives by strategically controlling or releasing specific types of information. 

To conclude, we explore potential measures to mitigate capture. Systemic changes are needed to protect the AI governance ecosystem from undue industry influence—building technical capacity within governments and civil society (e.g., promoting access requirements, providing funding independent of industry, and creating public AI infrastructure) could be a first step towards building resilience to capture. Procedural and institutional safeguards may also be effective against many different types of capture; examples include building regulatory capacity in government, empowering watchdogs, conducting independent review of regulatory rules, and forming advisory boards or public advocates. Other mitigation measures that are specific to different types of industry influence are outlined in Table 0.

Although additional research is needed to identify more concrete solutions to regulatory capture, we hope that this article provides a starting point and common framework for productive discussions about industry influence in AI policy.

The mechanisms are summarised as

1. Advocacy 

2. Procedural obstruction 

3. Donations, gifts, and bribes 

4. Private threats 

5. Revolving door 

6. Agenda-setting 

7. Information management 

8. Information overload 

9. Group identity 

10. Relationship networks 

11. Status 

12. Academic capture 

13. Private regulator capture 

14. Public relations 

15. Media capture

24 September 2024

Pricing and Privacy

The US Federal Trade Commission has launched action against the three largest prescription drug benefit managers (PBMs) - Caremark Rx, Express Scripts (ESI), and OptumRx - and their affiliated group purchasing organizations (GPOs) for engaging in anticompetitive and unfair rebating practices that have artificially inflated the list price of insulin drugs, impaired patients’ access to lower list price products, and shifted the cost of high insulin list prices to vulnerable patients.

The FTC states 

The FTC’s administrative complaint alleges that CVS Health’s Caremark, Cigna’s ESI, and United Health Group’s Optum, and their respective GPOs—Zinc Health Services, Ascent Health Services, and Emisar Pharma Services—have abused their economic power by rigging pharmaceutical supply chain competition in their favor, forcing patients to pay more for life-saving medication. According to the complaint, these PBMs, known as the Big Three, together administer about 80% of all prescriptions in the United States. 
 
The FTC alleges that the three PBMs created a perverse drug rebate system that prioritizes high rebates from drug manufacturers, leading to artificially inflated insulin list prices. The complaint charges that even when lower list price insulins became available that could have been more affordable for vulnerable patients, the PBMs systemically excluded them in favor of high list price, highly rebated insulin products. These strategies have allowed the PBMs and GPOs to line their pockets while certain patients are forced to pay higher out-of-pocket costs for insulin medication, the FTC’s complaint alleges. 
 
“Millions of Americans with diabetes need insulin to survive, yet for many of these vulnerable patients, their insulin drug costs have skyrocketed over the past decade thanks in part to powerful PBMs and their greed,” said Rahul Rao, Deputy Director of the FTC’s Bureau of Competition. “Caremark, ESI, and Optum—as medication gatekeepers—have extracted millions of dollars off the backs of patients who need life-saving medications. The FTC’s administrative action seeks to put an end to the Big Three PBMs’ exploitative conduct and marks an important step in fixing a broken system—a fix that could ripple beyond the insulin market and restore healthy competition to drive down drug prices for consumers.” 
 
Insulin medications used to be affordable. In 1999, the average list price of Humalog—a brand-name insulin medication manufactured by Eli Lilly—was only $21. However, the complaint alleges that the PBMs’ chase-the-rebate strategy has led to skyrocketing list prices of insulin medications. By 2017, the list price of Humalog soared to more than $274—a staggering increase of over 1,200%. While PBM respondents collected billions in rebates and associated fees according to the complaint, by 2019 one out of every four insulin patients was unable to afford their medication. The FTC’s Bureau of Competition makes clear in a statement issued today that the PBMs are not the only potentially culpable actors – the Bureau also remains deeply troubled by the role drug manufacturers like Eli Lilly, Novo Nordisk, and Sanofi play in driving up list prices of life-saving medications like insulin. Indeed, all drug manufacturers should be on notice that their participation in the type of conduct challenged here raises serious concerns, and that the Bureau of Competition may recommend suing drug manufacturers in any future enforcement actions. ... 
 
The PBMs’ financial incentives are tied to a drug’s list price, also known as the wholesale acquisition cost. PBMs generate a portion of their revenue through drug rebates and fees, which are based on a percentage of a drug’s list price. PBMs, through their GPOs, negotiate rebate and fee rates with drug manufacturers. As the complaint alleges, insulin products with higher list prices generate higher rebates and fees for the PBMs and GPOs, even though the PBMs and GPOs do not provide drug manufacturers with any additional services in exchange. 
 
The complaint further alleges that PBMs keep hundreds of millions of dollars in rebates and fees each year and use rebates to attract clients. PBMs’ clients are payers, such as employers, labor unions, and health insurers. Payers contract with PBMs for pharmacy benefit management services, including creating and administering drug formularies—lists of prescription drugs covered by a health plan. ... 
 
Insulin list prices started rising in 2012 with the PBMs’ creation of exclusionary drug formularies, the FTC’s complaint alleges. Before 2012, formularies used to be more open, covering many drugs. According to the complaint, that changed when the PBMs, leveraging their size, began threatening to exclude certain drugs from the formulary to extract higher rebates from drug manufacturers in exchange for favorable formulary placement. Securing formulary coverage was critical for drug manufacturers to access patients with commercial health insurance, the FTC alleges. Competition usually leads to lower prices as sellers try to win business. But in the upside-down insulin market, manufacturers—driven by the Big Three PBMs’ hunger for rebates—increased list prices to provide the larger rebates and fees necessary to compete for formulary access, the FTC’s complaint alleges. According to the complaint, one Novo Nordisk Vice President said that PBMs were “addicted to rebates.” While PBMs’ rebate pressures continued, insulin list prices soared. For example, the list price of Novolog U-100, an insulin medication manufactured by Novo Nordisk, more than doubled from $122.59 in 2012 to $289.36 in 2018. 
 
The complaint alleges that even when low list price insulins became available, the PBMs systematically excluded them in favor of identical high list price, highly rebated versions. As described in the complaint, one PBM Vice President acknowledged that this strategy allowed the Big Three to continue to “drink down the tasty … rebates” on high list price, highly rebated insulins. 
 
The PBMs Caused the Burden of High Insulin List Prices to Shift to Vulnerable Patients, the FTC Alleges 
 
According to the complaint, as insulin list prices escalated, the PBMs collected rebates that, in principle, should have significantly reduced the cost of insulin drugs for patients at the pharmacy counter. Certain vulnerable patients, such as patients with deductibles and coinsurance, often must pay the unrebated higher list price and do not benefit from rebates at the point of sale. Indeed, they may pay more out-of-pocket for their insulin drugs than the entire net cost of the drug to the commercial payer. Caremark, ESI, and Optum knew that escalating insulin list prices and exclusion of low list price insulins from formularies hurt vulnerable patients—yet continued to pursue and incentivize strategies that shifted the burden of high list prices to patients, the FTC’s complaint alleges. 
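
(An illustrative aside, not part of the FTC's release: the cost shift alleged here can be shown with a simple worked example using hypothetical figures. A patient whose deductible or coinsurance is calculated on the unrebated list price can pay more at the counter than the drug costs the payer after rebates:)

    # Hypothetical figures only, to illustrate the cost shift the FTC describes.
    list_price = 300.00      # list price charged at the pharmacy counter
    rebate_share = 0.70      # assumed share of the list price returned as rebates
    coinsurance_rate = 0.40  # assumed patient coinsurance, applied to the list price

    net_cost_to_payer = list_price * (1 - rebate_share)     # about 90.00 after rebates
    patient_out_of_pocket = list_price * coinsurance_rate   # about 120.00 on the unrebated list price

    # The patient's out-of-pocket cost exceeds the payer's entire net cost for the drug.
    print(patient_out_of_pocket > net_cost_to_payer)  # True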
 
Caremark, ESI, and Optum and their respective GPOs engaged in unfair methods of competition and unfair acts or practices under Section 5 of the FTC Act by incentivizing manufacturers to inflate insulin list prices, restricting patients’ access to more affordable insulins on drug formularies, and shifting the cost of high list price insulins to vulnerable patient populations, the FTC’s complaint alleges.

Earlier this month the FTC announced 'Refunds to Consumers Deceived by Genetic Testing Firm 1Health.io Over Data Deletion and Security Practices'. 

 The Federal Trade Commission is sending refunds to more than 2,400 consumers related to a settlement with 1Health.io, formerly known as Vitagene, over allegations the genetic testing company left sensitive genetic and health data unsecured, deceived consumers about their ability to get their data deleted, and unfairly changed its privacy policy retroactively. 

The FTC’s June 2023 complaint alleged that 1Health.io’s security failures put consumers’ sensitive data at risk, contrary to the company’s promise to exceed industry-standard security practices. The complaint also alleged that the company promised consumers they could delete their personal information at any time when, in fact, the company's failure to maintain a data inventory meant that the company could not always honor that promise. 

The complaint further alleged that, in 2020, the company unfairly changed its privacy policy by expanding the types of third parties with whom it could share health and genetic data that consumers had already provided the company, without notifying consumers or obtaining their consent. 

The FTC is sending payments totaling more than $49,500 to 2,432 consumers. Most consumers will get a check in the mail. Recipients should cash their checks within 90 days, as indicated on the check. Eligible consumers who did not have an address on file will receive a PayPal payment, which should be redeemed within 30 days. ... 

The Commission’s interactive dashboards for refund data provide a state-by-state breakdown of refunds in FTC cases. In 2023, FTC actions led to $324 million in refunds to consumers across the country.

AI regulation

'Competing Legal Futures – “Commodification Bets” All the Way From Personal Data to AI' by Marco Giraudo, Eduard Fosch-Villaronga and Gianclaudio Malgieri in (2024) German Law Journal comments 

Waves of technological innovations based on AI fuel narratives of unprecedented economic and social progress in every imaginable field: from healthcare to retail, from the stock market to marketing and advertising, from farming to sports. The novelty of AI tools makes their impact on society unknown or difficult to know, as it is often masked under a veil of extreme short-term efficiency bearing on the stability of their legal foundations. At the same time, forms of folie normative (or regulatory madness) and the adoption of countless resolutions, ethical guidelines, standards, and laws—often contradictory and unclear—further blur visibility over the legal robustness of the foundations of AI-powered business. It is unsurprising, therefore, that AI tools are marketed in a context of legal uncertainty, oftentimes through the use of innovative contractual solutions to secure legal entitlements to perform such new activities and attract further investments in these newly emerging markets. 

Over time, such technological waves are often accompanied by legal shockwaves arising from formal and substantive conflicts between claimed legal entitlements to market new products and counterclaims to protect the prevailing legal order. Many times, the exaggerated expectations of the legal sustainability of innovative solutions that accompany the commercial release of new products are contradicted by widespread recognition of adverse effects in terms of fundamental rights and democratic order, as well as other constitutionally protected interests. As a result, some AI products are already being rejected outright, either by the intervention of law enforcement agencies or by judicial pronouncements retroactively declaring them unlawful. For example, facial recognition technologies or AI-based virtual friendship services have been banned in some jurisdictions by the courts and likely confirmed by legislative action. 

In economic terms, many investors may soon face significant losses as a result of the outright prohibition of markets that they had imagined to be “legal” and whose legal foundations eventually “disappear” because they are found to be incompatible with some fundamental rights and democratic public order. Yet, many investors and policymakers seem unconcerned about the potential economic and political consequences of increasing uncertainty about the stability of the legal foundations of AI-based products, as if the legal rules announced by courts or enforcement decisions by specialized agencies in the EU or the FTC in the U.S. do not affect the economic value of new activities and services. Such disregard for the legal dynamics and increasing inalienability of the entitlements being traded is in striking contrast to what would be expected of investors faced with such profound fluctuations in legal uncertainty within an industry. 

All the more so when we consider the lack of learning from similar legal-economic patterns that have been playing out for more than a decade in the context of the first phase of information capitalism, whose legal foundations are currently in a stalemate. For a long time now, economic actors have been undeterred by faltering legal claims to commodify personal data, the “new currency of the Internet.” Even there, despite widespread warnings about the legal dangers of “everydayness as a commercialization strategy,” we have witnessed a sustained flow of investment into an industry whose core resource has long been suspected of not qualifying as a “commodity.” Today, the entire industry is hanging by a thread, riding on what appears to be a cluster of “legal bubbles,” as legal support for the commodification of personal data diminishes and surveillance practices become increasingly costly, if not prohibited. And yet, with no apparent rational explanation, the very same private actors whose behavioral commodification bets are in jeopardy are the ones raising the stakes with AI gambles to even higher levels of magnitude, with an inadequate response from rule makers and enforcers. If history is any guide, the recent anomalous economic inaction in the face of legal dynamism portends legal instability ‘on steroids’ for the AI-based industry. 

In this Article, we provide a legal-economic canvas of these institutional co-evolutionary dynamics shattering the legal foundations of the digital markets, to make sense of contested commodification bets all the way from personal data to AI. From the experience of the early phase of information capitalism and current flare-ups of legal uncertainty in the face of AI-powered services, we articulate theoretical insights concerning anomalous coordination between legal, technological, and economic dynamics, as amplified by overreliance on the co-regulatory strategies proposed by current regulatory actions (for example, the European AI Act proposal) as a solution to the “pacing problem” of when to regulate technology. In this spirit, we elaborate on the growing literature on the legal fragility of the market for personal data and AI-based services and connect it with reflections on economic anomalies in the ‘market for legal rules’, which is increasingly supported by judicial evidence and litigation. 

We argue that the co-regulatory model exacerbates legal instability in emerging markets also because it is subject to moral hazard dynamics, all too favored by an over-reliance on the goodwill of private actors as well as on the enforcement priorities of Member States’ DPAs. Stalling strategies and opportunistic litigation can flourish within such a model to the advantage of economic agents’ commodification bets, thus prolonging what we call the “extended legal present,” in which a plurality of possible legal futures compete with each other, and which is fraught with economic and political uncertainty. 

In particular, different legal futures also map onto complementary economic futures, as they affect the legal existence and the cost structure of markets for these new “exemplary goods.” During such a period, economic agents have to “bet” on one legal future to ground their business models, with no guarantees of legal success. We call this phenomenon competing legal futures, which can fuel dangerous legal bubbles if not properly identified and addressed, due to the overlooked instability of legal entitlements at the core of innovative business models. At the same time, legal dissonances and tensions between innovative legal practices and the prevailing institutional order may lead to geopolitical tensions or internal constitutional crisis. 

Against this background, we articulate a common intellectual framework for thinking in the face of the current legal-economic waves of uncertainty affecting AI and other digital innovations, likely to be amplified by political negotiations and compromises leading to the final text of the so-called Artificial Intelligence Act (AIA). The AIA is poised to be the first regulatory framework that addresses the impacts that AI has on society, laying down rules for the developers building these systems and rights for users. This proposed piece of legislation relies heavily on private standardization activities, also known as co-regulation. The presence of co-evolutionary legal, economic and technological dynamics at the frontier of innovation requires a comparative and interdisciplinary approach, and so we propose a series of “neologisms” as conceptual experiments for the holistic study of complex social objects. Although the Article mainly refers to European events as a case study, their theoretical and practical implications are by no means limited to the EU legal-economic area, owing to the well-known “Brussels effect”, nor are the theoretical insights we draw from them. The Article continues as follows. The first part briefly recounts how the foundational commodification bets on which the digital economy has been built have been partly rejected by the EU judiciary and DPAs, pointing to the poor adaptation of economic agents’ legal practices despite emerging legal fault lines at the core of the European digital markets. It then outlines the sense of déjà vu in the making of the legal foundations for AI, which are currently being shaken by flares of legal uncertainty as the commercial release of AI-based products unfolds. The second part sketches out a theoretical framework of the functioning of the “market for legal rules” to navigate uncertain legal futures. It also emphasizes the presence of anomalies and distortions typically associated with market exchanges at large, as they are exacerbated by the institutional functioning of the co-regulatory model. It concludes by calling for a course correction in the over-reliance on co-regulation and proposes a number of strategies for the resilient governance of legal innovation in the face of legal uncertainty.

Datafication

'Monetising Digital Data in Higher Education: Analysing the Strategies and Struggles of EdTech Startups' by Janja Komljenovic, Kean Birch and Sam Sellar in (2024) Postdigital Science and Education comments  

Digital data are perceived to be valuable in contemporary economies and societies. Since the 2011 World Economic Forum described personal data as a ‘new asset class’ that underpins the development of new products and services (World Economic Forum 2011), policymakers, economic and social actors, and scholars have sought to understand how data create both commercial and social value. For example, digital markets and data have become so important for our economies that in 2022–2023, the European Union introduced the Digital Markets Act to bring order to the digital economy, the Digital Services Act to harmonise rules for online intermediary services and create a safe online environment, and the European Data Act to facilitate the use and exchange of digital data for economic and social benefit. 

However, digital data are neither inherently valuable nor do they exist ‘out there’ waiting to be collected and exploited. Instead, data and data products are constructs of political-economic and socio-technical arrangements, which also create conditions for data monetisation (Birch 2023). We are particularly interested in user data, i.e. digital data that are logged and collected as an outcome of an individual engaging with a digital platform. User data include, but are not limited to, personal data. Scholars have analysed how user data are imagined to be made valuable in various sectors, such as in healthcare via behavioural nudging (Prainsack 2020), in insurance via personalisation (McFall et al. 2020), or in the application of big data to food and agriculture (Bronson and Knezevic 2016). The literature also highlights the risks and adverse effects of datafication, including surveillance (Zuboff 2019) and various forms of population control and exploitation (Sadowski 2020). In each case, for digital user data to be made useful and valuable, data must be collected, analysed, and processed to produce various digital products and outputs, such as algorithms, analytics (e.g. scores, metrics), automated decisions, or dashboards (Mayer-Schönberger and Cukier 2013). 

As the datafication of our economies and societies has expanded in general, so too has it impacted higher education (HE). Datafication refers to the ‘quantification of human life through digital information, very often for economic value’, with important social consequences (Mejias and Couldry 2019: 1). In education, datafication consists of data collection from all processes in educational institutions at all scales and levels, impacting stakeholder practices (Jarke and Breiter 2019). In HE, policymakers attempt to improve university quality, efficiency, and impact via datafication at the sectoral and institutional levels. For example, the UK Higher Education Statistics Agency (HESA) established a Data Futures programme as an infrastructure for datafying the sector and collecting and collating data from universities (Williamson 2018), with an alpha phase launched in 2021–2022. Moreover, Jisc, a HE sectoral agency providing network and IT services, supports universities with various initiatives, such as the Data Maturity Framework launched in 2024, which universities can use as a template to improve data capabilities and datafy their institutions. Digital data, then, is one of the foundational elements of postdigital education because digital technologies that staff and students use every day are increasingly data-based (Jandrić et al. 2024; Jandrić and Knox 2022). 

User data in HE are not only valuable for universities and policymakers but also for the EdTech industry. Scholars aligning themselves with the field of critical data and platform studies in education (Decuypere et al. 2021) have already conducted excellent research into various aspects of data practices related to the economic value, such as EdTech’s commercial interest not always sitting well with user privacy (Hillman 2022) and the work needed to produce and manage school data (Selwyn 2020). Specifically in HE, emerging work has found that EdTech companies turn user data into assets they control (Hansen and Komljenovic 2023). EdTech incumbents such as Pearson have evolved into data organisations with intensive mobilisation of data analytics for impacting HE processes and governance (Williamson 2016, 2020). Research has also identified tensions and unintended consequences in relation to data work at universities (Selwyn et al. 2018), pedagogic, cultural, and social effects (Williamson et al. 2020), and the need for universities to pay greater attention to privacy issues and data standards in procurement processes (Ali et al. 2024). Thus, research in this field highlights (1) the relations between EdTech companies and universities as pivotal and (2) the dynamics of the EdTech industry as being highly relevant for the sector. 

Data in HE are understood to be valuable in terms of their use, which is mostly the ambition of universities, and in economic terms, which is mostly the concern of the EdTech industry (Komljenovic et al. 2024a, b). In this article, we contribute to the literature by examining strategies employed by EdTech startups to make digital and personal data valuable in HE and the struggles that these startups confront. In other words, we examine the economic dimension of postdigital HE, which is co-constitutive of the socio-material assemblages of digital products and services (Knox 2019; Lupton 2018). Understanding how digital data can be made economically valuable is important because the monetisation of user data is consequential for university practices and the nature of postdigital HE, and because governments and organisations see digital data as the premise of contemporary economies in which HE is embedded. Moreover, we specifically focus on EdTech startups because of the promised transformation and disruption that they seek to achieve in HE (Decuypere et al. 2024; Ramiel 2020). As a result, we can reasonably expect these companies to be leaders of datafication processes. 

In what follows, we first elaborate on our conceptual and empirical approach. We then move to discuss the economic construction of data value by EdTech startups and the challenges they confront, before concluding with some reflections on the impact that data monetisation has in HE.