'Inoculating Law Schools Against Bad Metrics' by Kimberlee G Weatherall and Rebecca Giblin in K Bowrey (ed), Feminist Perspectives on Law, Law Schools and Law Reform: Intellectual Property & Beyond (forthcoming) comments
Law schools and legal scholars are not immune to the expanding use of quantitative metrics to assess the research of universities and of the scholars working within them. Metrics include grant and research income, the number of articles published in journals on ranked lists, and citations (by scholars, and perhaps courts). The use of metrics also threatens to expand to other kinds of desired activity, with various measures proposed to capture the impact of research beyond scholarly circles, and even more amorphous qualities such as leadership and mentoring.
Many working legal scholars are (understandably) unaware of the full range of ways in which metrics are calculated, and of how they are used in universities and in research policy. What is more, despite a large and growing research policy literature, and perhaps an instinct that metrics are an inherently flawed means of recognising research 'performance', few researchers are aware of the full scope of the known and proven weaknesses and biases in research metrics.
In this contribution to a forthcoming book, we describe the use of metrics in universities and in research and higher education policy (with a focus on Australia). We review the literature on the many flaws and biases inherent in the metrics used, with a focus on legal scholarship.
Most importantly, we also want to promote a conversation about what it might look like for academic researchers working in law faculties or on legal issues to assess research contributions in ways that promote the shared values of the legal academy. Our focus is on two areas of research assessment: research impact, and the bucket of concepts variously described as mentorship, supervision, and/or leadership. We reframe the questions that researchers and assessors should ask: not “what impact has this research had?” but “what have you done about your discovery?”; not “what is your evidence of research leadership?” but “what have you done to enable the research and careers of others?”. We also present concrete suggestions for how working legal scholars and faculties can shift the focus of research assessment towards the values of the legal academy. The chapter incorporates some of our thinking on developing meaningful legal research careers - something that will hopefully be of interest to any working legal scholar.
The authors note
In the discipline of law and legal studies in Australia, we have seen the dangers of inappropriate reliance on metrics up close, most notably via the disastrous 2010 ERA journal list. Expert law academics had universally agreed that journal rankings weren’t suitable for assessing legal research, given the discipline’s size, idiosyncratic journal publishing structures, high degree of specialisation, and the reality that, as a similar exercise in the UK had established, legal research of the highest quality can be found in a wide range of journals. Despite law’s courteous and steadfast resistance, however, when the Australian Research Council (ARC) released its draft journal ranking in mid-2008, law journals were included. The list was almost ludicrously inappropriate, featuring just two non-US journals in the top 198 (despite the fact that few US journals would even consider publishing research on Australian law), and with the ARC apparently unaware that most of those US journals were not subject to peer review. But still, it had to be taken seriously. As Bowrey explains, ‘from this time on there was no longer any real discussion of rejecting rankings but rather only discussion of how to ‘improve’ a bad situation.’ The eventual list of law journals was amended, but it remained highly unsatisfactory. While some inequities were fixed during the revision process, fresh ones were introduced thanks to the fierce lobbying and rent-seeking that took place once it became clear that rankings would be used to assess legal research and law schools.
While publishing in top-ranked journals was supposed to be a reasonable proxy for publishing the best research, it ended up rewarding all kinds of other characteristics instead. The highest-ranked Australian law journals were ultimately mostly generalist university reviews, which disproportionately publish men over women and public law over any other specialty (indeed, one of the most prestigious journals, the Federal Law Review, has announced that it will only publish public law (federal or state), abandoning a long-standing position of publishing the best work on areas within the federal jurisdiction). Research that was interdisciplinary, international, or covered one of the (many) snubbed sub-disciplines could be incapable of finding a home in a top-ranked journal, even if it was important, urgent, and of excellent quality. The reality is that editors of the leading law school journals often feel unqualified to review or publish interdisciplinary work in particular, or work from smaller sub-disciplines they judge ‘not of general interest’. And the use of the flawed list wasn’t limited to the research assessment exercise: as had been predicted, universities rapidly co-opted those rankings to inform all manner of other processes, including promotions, recruitment and internal grant allocations. Part of the list’s distorting effect was to change where researchers submitted their research - and even the topics on which they worked. For example, one empirical study of industrial relations academics found they were changing the focus of their work to bring it within the purview of higher-ranked journals, making it less Australia-focused and less critical.