05 February 2025

AI Infrastructure

'The Steep Cost of Capture' by Meredith Whittaker in (2021) 28(6) interactions 50–55 comments 

In considering how to tackle this onslaught of industrial AI, we must first recognize that the “advances” in AI celebrated over the past decade were not due to fundamental scientific breakthroughs in AI techniques. They were and are primarily the product of significantly concentrated data and compute resources that reside in the hands of a few large tech corporations. Modern AI is fundamentally dependent on corporate resources and business practices, and our increasing reliance on such AI cedes inordinate power over our lives and institutions to a handful of tech firms. It also gives these firms significant influence over both the direction of AI development and the academic institutions wishing to research it. 

This means that tech firms are startlingly well positioned to shape what we do — and do not — know about AI and the business behind it, at the same time that their AI products are working to shape our lives and institutions. 

Examining the history of the U.S. military’s influence over scientific research during the Cold War, we see parallels to the tech industry’s current influence over AI. This history also offers alarming examples of the way in which U.S. military dominance worked to shape academic knowledge production, and to punish those who dissented. 

Today, the tech industry is facing mounting regulatory pressure, and is increasing its efforts to create tech-positive narratives and to silence and sideline critics in much the same way the U.S. military and its allies did in the past. Taken as a whole, we see that the tech industry’s dominance in AI research and knowledge production puts critical researchers and advocates, within and beyond academia, in a treacherous position. This threatens to deprive frontline communities, policymakers, and the public of vital knowledge about the costs and consequences of AI and the industry responsible for it, right at the time that this work is most needed. 

'Why ‘open’ AI systems are actually closed, and why this matters' by David Gray Widder, Meredith Whittaker and Sarah Myers West in (2024) 635 Nature 827–833 comments 

This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.