Showing posts with label Bibliometrics. Show all posts

26 June 2022

Bibliometrics

'Making a Spectacle of Oneself in the Academy Using the H-Index: From Becoming an Artificial Person to Laughing at Absurdities' by Andrew C. Sparkes in (2021) 27(8-9) Qualitative Inquiry states 

This article offers autoethnographic insights into the consequences of making a spectacle of oneself in the audit culture of the academy. Spectacle 1 explores my experiences of using the h-index as part of an annual salary review and how this made me feel like an artificial person. Spectacle 2 shows how, at a conference, I used laughter to expose some absurdities of the h-index and felt better for doing so. Stories that tell different truths about ourselves in combination with the corporeality of laughter, I suggest, can assist us to re-attune ourselves and resist the process of becoming artificial persons. 

 Sparkes comments 

A great deal has changed since I first gained university employment in the mid-1980s, not all of it for the better. In recent years, various scholars have documented the deep affective, somatic and spiritual crises that many academics are suffering from, crises that threaten to overwhelm them as they are propelled toward burnout or something worse (Burrows, 2012; Gill, 2010; Moriarty, 2019; Pelias, 2004; Sparkes, 2007). The reasons why these various crises have come about since the early 1990s have been well documented and critiqued (Collini, 2017; Davies & Bansel, 2010; Davies et al., 2006; Smyth, 2017; Spooner, 2018; Spooner & McNinch, 2018; Tourish, 2019; Zawadzki & Jensen, 2020). This literature consistently places the blame for the deleterious changes in universities, and their subsequent impact on academic work and life, at the feet of the neoliberal project and the changes brought about by the processes of neoliberalization. One such process has been the development of an audit culture. This involves the quantification and evaluation of academic work along with an increasing dependence on these quantitative measures to define and assess academic productivity and efficiency as well as the reputation of individuals, disciplines and institutions. 

According to Shore (2008), the term “audit culture” is of recent origin and was coined by sociologists and anthropologists to describe not so much a type of society, place, or people as a condition shaped by the use of “modern techniques and principles of financial audit, but in contexts far removed from the world of financial accountancy” (p. 279). For Schwandt (2015), however, the audit culture is not just about techniques but involves a “system of values and goals that becomes inscribed in social practices thereby influencing the self-understanding of a practice and its role in society” (p. 9). 

In his detailed analysis of the rise of the audit culture in the academy, Spooner (2018) points out that while “productivity” pressures in academic settings are not new, what is new under the contemporary audit culture “is the sheer magnitude, depth, and ubiquity of audit culture’s implementation and pervasiveness” (p. 901). This has led to a situation in which academics are depersonalized, quantified, and constrained in their scholarship “via a suffocating array of metrics and technologies of governance” (p. 895). For Spooner, this situation has been made possible by the global confluence of market, managerialism, and measurement forces that are inextricably linked, interdependent, and mutually reinforcing in ways that lead to what he calls a “triple M” crisis that impacts in multiple ways on the daily lives of academics. In support of this, Bottrell and Manathunga (2019a) suggest that the production of an ever-expanding audit culture is fueled by managerial regimes that are obsessed with “academic performance, productivity and their measurement and surveillance through numerous forms of accountability” (p. 6). 

Significantly, academics are now estimated to be one of the most surveilled groups in history and can be ranked on more than one hundred different scales and indices that measure their value (Erickson et al., 2020). Many, according to Tourish (2019) and Warren (2017), feel under constant surveillance and experience their working lives as a series of administrative moments involving facts and figures to be collected and submitted for various assessments and audits either pending, happening, or being autopsied. Faced with such increased scrutiny, the figure of the “neurotic academic” described by Loveday (2018) has become emblematic of the contradictions facing the contemporary academy, suffused as it is with anxiety. This anxiety, for many, is generated by their entanglement in what Burrows (2012) calls “metric assemblages” that have taken on a life of their own to become autonomous actors in the academy (e.g., the Research Excellence Framework in the UK). Such assemblages, according to Han (2020), are an essential feature under neoliberalism of the historical shift from ritual to “dataism”:

The human being now has to comply with data . . . Enormous volumes of data displace the human being from its central position as producer of knowledge, and the human being itself is reduced to a data set, a variable that can be calculated and manipulated . . . Transparency, the imperative of dataism, is the source of the compulsion to transform everything into data and information, that is, to make it visible. (Han, 2020, pp. 81–82)

Dataistic regimes of performativity are made possible by dramatic increases in the amount of highly portable but depersonalized raw data available to university managers. This leads to complex social environments and the people within them becoming “machine readable” and reduced to a score that then allows academics to be metrically positioned within a hierarchy of status according to apparently “objective” managerial determinations of individual success and value to institutional prestige. In this process, as Valero et al. (2019) argue, there is no need to ask key questions about the situational or contextual conditions such as history, experience, material resources, teaching loads, and access to resources which people have available for performing their measurable activities in the first place to gain their score. Yet, these scores can now be used to make important decisions about their lives and careers. 

Of course, as Cheek (2018) acknowledges, accountability is a perfectly reasonable expectation of academics and universities in relation to their use of public monies. For her, however, it is the way that research products, research production, and demonstrating accountability are thought about that is the issue. This is particularly so when researchers become positioned “as data collectors, and research is reduced to data for meeting the criteria of these audit exercises” (p. 327). In support of this view, Tourish (2019) notes that we have now reached the stage where the systems designed to ensure accountability have overwhelmed what we are trying to do because we are always trying to feed the insatiable hunger of the beast of measurement. Feeding this beast is certainly wasteful of time, but this is just one of many negative effects brought about by the audit culture that include the withering away of collegial life along with the norms of academic community, such as, the will to critique and the promotion of critical thought (Collini, 2017; Davies, 2005; Lincoln, 2018; Smyth, 2017; Spooner, 2018; Westheimer, 2018). 

Significantly, Tourish (2019) points out that “Auditing is not a neutral gaze trained on what we do. It transforms the subject of its inquiry” (p. 35) and materializes new ways of thinking, doing, and being in the academy. These new ways, as various scholars have suggested, transform people into auditable entities and produce specific sorts of worker subjects that require individual practitioners to organize themselves as a response to targets, indicators, and evaluations and to focus their energies on “what counts.” (Ball, 2003; Davies & Bansel, 2010; Shore & Wright, 2015; Smyth, 2017). In addition, as newly responsibilized, flexible, entrepreneurial, and competitive individuals operating within the research marketplace, they are also expected to set aside personal beliefs and commitments and live an existence of calculation as part of becoming what Gill (2018) calls a quantified self in the neoliberal academy. 

In terms of transforming oneself into “what counts” within the audit culture, academics are increasingly incited to make spectacles of themselves. This incitement to make a spectacle of oneself is, according to Cheek and Persson (2020), driven by the neoliberal market-based principles that have permeated higher education and created new imperatives that include the need to be visible and tell various audiences about our research and ourselves to gain and retain a competitive edge over others. This imperative, as Cheek and Øby (2019) point out, is particularly strong in relation to altmetrics, which involve new forms of researcher and researcher-related metrics that make the digital, online self both visible and calculable. Not surprisingly, they note that entrepreneurs and experts along with tailor-made courses in research-related institutions have now emerged to guide and help academics make the “right” kind of spectacle of themselves online so as to be “relevant” in the research marketplace. 

For some, making a spectacle of oneself in the academy provides an opportunity to be successful in the research marketplace while for others, even if successful, the psychosocial costs of doing so are high as it can lead to inner conflicts and feelings of inauthenticity and low self-worth (Gill, 2018; Moriarty, 2019; Ruth, 2008; Sparkes, 2007; Warren, 2017). This raises questions about how academics interpret and respond to this incitement to make a spectacle of themselves and how they feel emotionally about doing so in the process. Autoethnographic work that draws on personal experience to purposefully comment on and critique cultural practices, embrace vulnerability with a purpose, and create a reciprocal relationship with the audience in the hope of compelling a response would seem to be one way to address such questions (Adams & Herrmann, 2020; Holman Jones et al., 2013; Sparkes, 2020). This said, some consideration first needs to be given as to whether or not we need another autoethnography from inside the university and for what purpose.
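For readers who have not yet been invited to reduce themselves to one, the h-index Sparkes writes about is arithmetically trivial: it is the largest h such that an author has at least h papers cited at least h times each. A minimal sketch of that calculation (the function name and the citation figures below are illustrative only, not drawn from Sparkes's article):

```python
def h_index(citations):
    """Return the largest h such that at least h of the given
    papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# A career of five papers cited 10, 8, 5, 4 and 3 times yields h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

That a working life can be compressed into this single integer, stripped of field, teaching load, co-authorship and circumstance, is of course precisely the absurdity the article laughs at.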

14 December 2021

ARC and ERA

Acting Minister Robert's letter to the ARC will not surprise many observers of the Government's anxieties about cultural marxism, a supposed tsunami of books on Jane Austen and academics busy working in areas that do not generate patents. 

(Confession: I teach patent law, presumably of some utility, but am not a patent holder and despite my love for the very astute Jane am more likely to refer to John Austin and John Langshaw Austin in tutorials. No cultural marxism here!) 

The letter includes the following -

the Australian Government has outlined a number of policy directions that rely on the research undertaken by universities, with the support of the ARC. I am writing to outline my expectations of you, as Chief Executive Officer (CEO) of the ARC, across a number of key areas relevant to those directions. 

The Government values the important role played by university research in the creation of new knowledge, new social and economic opportunities for our citizens, and a platform for our engagement in the intellectual and practical challenges facing the world. Successive Australian governments have made a sustained and significant investment in high-quality research within our university sector, which has contributed to the international success and recognition of our research sector. 

To increase research and end-user engagement, harness the benefits of publicly funded research and drive economic growth and recovery over coming years, we must take action now to strengthen the translation pipeline for Australian research. This includes encouraging greater collaboration with industry to stimulate more research and development (R&D) activity across our economy. The ARC and its programs are central to these goals. 

Accordingly, this Letter of Expectation identifies four key areas which I ask you to prioritise for immediate implementation, so that reforms can be in place before the end of 2022. 

These areas are:
- supporting national priorities
- strengthening the National Interest Test (NIT)
- fast-tracking implementation of recommendations from the review of the Excellence in Research for Australia (ERA) and the Engagement and Impact (EI) assessments
- enhanced organisational governance. 

These reforms will support the Government's policy ambition to drive impact from our public investment in university research and to develop a stronger system of national R&D. 

Supporting national priorities 

International experience demonstrates that a focus on national priorities and a vision for the future potential of our nation are critical building blocks in harnessing the benefits of publicly funded research. To this end, I ask the ARC to ensure its research funding schemes align clearly and tangibly to those areas of Government priority for economic development. In the balance of research allocations recommended by the ARC, I ask that future recommendations put to me under s.52 of the Australian Research Council Act 2001 (ARC Act) are to support approval under the Linkage Program of a minimum of 40 per cent by value of all grant funding decisions for the period of the determination. 

The National Manufacturing Priorities (NMP), in particular, reflect a considered process of assessing the potential for future economic development in Australia to drive the initiatives and targets that should be supported across all relevant Government priorities. The NMP should take primary focus in the prioritisation of research investments, with no less than 70 per cent of the recommended Linkage Program grants to be aligned with those priorities in future rounds. I note that other Government policies may also support prioritisation of the remaining Linkage Program allocations in specific cases, such as the Low Emissions Technology Statement, the National Agricultural Innovation Priorities and the Defence Science and Technology Strategy 2030. 

By clearly specifying priorities from the outset of the grant consideration process, researchers and their sponsoring universities can ensure that the best research capabilities and the highest quality applications can be focused on the key areas of national need. It is my belief this approach will strengthen the quality of Australia's research endeavour. 

The Government remains committed to the development of new knowledge through investigator-led research funding opportunities, the so-called 'blue-sky' research in which Australia's universities excel. As such, I am not seeking an alignment of the Discovery grants program with specific national priorities. 

However, I ask that you develop clear guidance for researchers so that they use simple and easy-to-understand language to identify in their applications the potential gains and practical outcomes from their proposed research and its likely contribution to the national interest. 

This information should support the capacity to report on the proportion of applications for grants and successful projects relevant to the NMPs and other national priorities. 

Strengthening the NIT 

I ask that the assessment process and recommendations for funding made to me demonstrate a clear public interest from the significant public investment in university research. 

This statement of public value will make the research more accessible to potential end-users and build support for continuing that investment. 

In consultation with the research and end-user community, as well as the Department of Education, Skills and Employment, I ask the ARC to prioritise the extension and enhancement of the NIT to increase its transparency in the ARC grants process. As part of this work, I ask the ARC to bring forward a proposal to enhance and expand the role of the industry and other end-user experts in assessing the NIT of high-quality projects, prior to recommendation to me as the responsible Minister. 

Given the importance of inter-disciplinary research in addressing the science, technology, humanities, and societal issues that underpin the national priorities described above, I ask that you also consider the need for reviewers in the College of Experts from a broader range of backgrounds and ensure reviewers are supported by appropriate training to assess these types of grant applications. 

To support these outcomes, I ask that you review the operation of the College of Experts and brief me on options for expanding the pool of people who participate in the College to include experts from backgrounds beyond universities, in particular those from industry and other end-user groups. This may require redesigning the grants assessment process to manage the demand on such people in a manner consistent with the roles against which they provide their expertise, such as through the assessment of impact or research questions that cut across discipline areas. 

Fast-tracking implementation of ERA review 

I have noted the outcomes of the review of the ERA and the EI assessments (the Review), and the proposed refinements to these arrangements set out in the Review recommendations. I ask that the ARC expedites implementation of further work stemming from that Review, including fast-tracking development of more efficient and robust assessments of the quality and impact of Australian research. 

My expectation is that this should include clear measures to identify industry engagement and the translation of research to impact. I note that the Review identified many measures related to industry and end-user engagement as having a declining relevance to the assessment of quality in ERA. Accordingly, I ask the ARC and the Department to jointly develop robust quantitative metrics that are more explicitly focused on the impact of research for the next EI assessment in 2024. The end result will be a set of metrics that recognises outcomes like patents, IP and commercial agreements and will have less emphasis on case studies to measure research impact. 

To be effective, the performance information generated by the ERA and EI assessments must drive the quest for excellence. In advance of the next ERA, I ask the ARC to convene the expert working group proposed by the Review to develop a revised ERA rating scale. 

The new scale should embed an approach that sets the 'world standard' benchmark against those nations and universities that are at the forefront of research. The new scale should provide a comparator that will set a rising standard over time. While acknowledging the complexity of the assessment process, I expect that the results will be underpinned by a benchmarking structure that is clear in its ambition and provides granular and meaningful reporting of the level of achievement across different universities. 

Enhanced organisational governance 

To support my reform ambition, I ask that you provide advice on the re-establishment of a designated committee under the ARC Act to support you in your role as CEO. The creation of such a committee is in line with the governance arrangements of similar bodies such as the National Health and Medical Research Council. The committee should build on the expertise of the existing Advisory Committee by bringing additional external and end-user perspectives to the governance of the Council and its programs in order to reflect the Government's current priorities. I envisage a proactive agenda for the committee supporting you to align the ARC strategic agenda with Government priorities, improve governance and drive innovation in the development of high-quality research funding programs and research impact assessment. 

In recognition of the independence of ARC, the CEO will continue to provide recommendations and advice directly to me. To assist the CEO to undertake this role, I propose the committee, operating with an independent Chair, should support and provide recommendations to the CEO on the strategic agenda of the organisation. The committee should have a broad membership with substantial industry, research end-user and governance representation, with Terms of Reference (ToRs) that support ongoing reform of ARC operations and a focus on driving impact from publicly funded research in Australia's universities. I ask that you provide me with advice before the end of the year on the committee's establishment, including membership and ToRs, for my approval. 

In order to provide a clear and strong message to stakeholders with an interest in university research I ask that the committee, as one of its first actions, work with the ARC to develop a three-year ARC Strategy. This Strategy will set out the ARC's alignment with the Government's priorities and set out a forward-looking agenda. The agenda will demonstrate the ways in which the ARC will develop and drive Australia's research agenda, in line with the best practice of comparable and high performing international research agencies, over the next three to five years.

And just in case the ARC hasn't got the message 

More broadly, I look forward to a regular and ongoing dialogue with you to keep me and my office informed of important issues relating to the work, health and culture of the organisation, including through regular meetings and prior notice of significant announcements and events. The ARC and the Department should work closely together to scope and develop activities that respond to this Statement of Expectation.

28 January 2021

Metrics

'Inoculating Law Schools Against Bad Metrics' by Kimberlee G Weatherall and Rebecca Giblin in K Bowrey (ed), Feminist Perspectives on Law, Law Schools and Law Reform: Intellectual Property & Beyond (forthcoming) comments 

Law schools and legal scholars are not immune to the expanding use of quantitative metrics to assess the research of universities and of scholars working within universities. Metrics include grant and research income, the number of articles produced in journals on ranked lists, and citations (by scholars, and perhaps courts). The use of metrics also threatens to expand to measure other kinds of desired activity, with various metrics suggested to measure the impact of research beyond scholarly circles, and even more amorphous qualities such as leadership and mentoring. 

Many working legal scholars are (understandably) unaware of the full range of ways in which metrics are calculated, and how they are used in universities and in research policy. What is more, despite a large and growing research policy literature and perhaps an instinct that metrics are inherently flawed as a means to recognise research 'performance', few researchers are aware of the full scope of known and proven weaknesses and biases in research metrics. 

In this contribution to a forthcoming book, we describe the use of metrics in university and research and higher education policy (with a focus on Australia). We review the literature on the many flaws and biases inherent in metrics used, with a focus on legal scholarship. 

Most importantly, we also want to promote a conversation about what it might look like for academic researchers working in law faculties or on legal issues to assess research contributions that promote the shared values of the legal academy. Our focus is on two areas of research assessment: research impact and the bucket of concepts variously described as mentorship, supervision, and/or leadership. We reframe the questions that researchers and assessors should ask: not, “what impact has this research had”, but “what have you done about your discovery?”; not “what is your evidence of research leadership”, but, “what have you done to enable the research and careers of others?”. We also present concrete suggestions for how working legal scholars and faculties can shift the focus of research assessment towards the values of the legal academy. The chapter incorporates some of our thinking on developing meaningful legal research careers - something that will hopefully be of interest to any working legal scholar.

The authors note

In the discipline of law and legal studies in Australia, we have seen the dangers of inappropriate reliance on metrics up close, most notably via the disastrous 2010 ERA journal list. Expert law academics had universally agreed that journal rankings weren’t suitable for assessing legal research, given the discipline’s size, idiosyncratic journal publishing structures, high degree of specialisation, and the reality that, as a similar exercise in the UK had established, legal research of the highest quality can be found in a wide range of journals. Despite law’s courteous and steadfast resistance, however, when the Australian Research Council (ARC) released its draft journal ranking in mid 2008, law journals were included. The list was almost ludicrously inappropriate, featuring just two non-US journals in the top 198 (despite the fact that few US journals would even consider publishing research on Australian law) and with the ARC apparently unaware that most of them were not subject to peer review. But still, it had to be taken seriously. As Bowrey explains, ‘from this time on there was no longer any real discussion of rejecting rankings but rather only discussion of how to ‘improve’ a bad situation.’ The eventual list of law journals was amended, but it remained highly unsatisfactory. While some inequities were fixed during the revision process, fresh ones were introduced thanks to the fierce lobbying and rent-seeking that took place once it became clear that rankings would be used to assess legal research and law schools. 

While publishing in top ranked journals was supposed to be a reasonable proxy for publishing the best research, it ended up rewarding all kinds of other characteristics instead. The highest ranked Australian law journals were ultimately mostly generalist university reviews, which disproportionately publish men over women and public law over any other specialty (indeed, one of the most prestigious journals, the Federal Law Review, has announced that it will only publish public law (federal or state), abandoning a long-standing position of publishing the best work on areas within the federal jurisdiction). Research that was interdisciplinary, international, or covered one of the (many) snubbed sub-disciplines could be incapable of finding a home in a top ranked journal, even if it was important, urgent, and of excellent quality. The reality is that editors of the leading law school journals often feel unqualified to review or publish interdisciplinary work in particular, or work from smaller sub-disciplines they judge ‘not of general interest’. And the use of the flawed list wasn’t limited to the research assessment exercise: as had been predicted, universities rapidly co-opted those rankings to inform all manner of other processes, including promotions, recruitment and internal grant allocations. Part of the list’s distorting effect was to change where researchers submitted their research - and even the topics on which they worked. For example, one empirical study of industrial relations academics found they were changing the focus of their work in order to bring it within the purview of those higher ranked journals by making it less Australia-focused and less critical.

17 August 2020

Retractions

'How do academia and society react to erroneous or deceitful claims? The case of retracted articles’ recognition' by Hajar Sotudeh, Nilofar Barahmand, Zahra Yousefi and Maryam Yaghtin in (2020) Journal of Information Science comments 

Researchers give credit to peer-reviewed, and thus, credible publications through citations. Despite a rigorous reviewing process, certain articles undergo retraction due to disclosure of their ethical or scientific deficiencies. It is, therefore, important to understand how society and academia react to the erroneous or deceitful claims and purge the science of their unreliable results. Applying a matched-pairs research design, this study examined a sample of medicine-related retracted and non-retracted articles matched by their content similarity. The regression analysis revealed similarities in obsolescence trends of the retracted and non-retracted groups. The Generalized Estimating Equations showed that citations are affected by the retraction status, life after retraction, life cycle and the journals’ previous reputation, with the former two being the strongest in positively predicting the citations. The retracted papers obtain fewer citations either before or after retraction, implying academia’s watchful reaction to the low-quality papers even before official announcement of their fallibility. They exhibit an equal or higher social recognition level regarding Tweets and Blog Mentions, while a lower status regarding Mendeley Readership. This could signify social users’ sensibility regarding scientific quality since they probably publicise the retraction and warn against the retracted items in their tweets or blogs, while avoiding recording them in their Mendeley profiles. Further scrutiny is required to gain insight into the sensibility, if any, about scientific quality. The study’s originality relies on matching the retracted and non-retracted papers with their topics and neutralising variations in their citation potentials. It is also the first study comparing the groups’ social impacts.

The authors conclude 

Although scholars have to pursue the most valid and reliable research design to achieve the most truthful knowledge, they may intentionally or unintentionally report fallible or deceptive knowledge. The misconducts or mistakes can be positioned on a severity continuum from innocuous (e.g. gift authorship, duplicated publication and salami effect) to crucial (e.g. data making and fabrication) [92]. Recently, the scientific community has been experiencing an increase in scientific fraud which is believed to have roots in the competitive atmosphere of science characterised by ‘publish or perish pressure’ [93], researchers’ ambitions and financial needs, scientific hubris [92,94], pressure to publish in ‘high impact’ journals [58], lack of research funds and the proliferation of predatory OA journals [95]. Fraud and deceit in medicine are tightly related and directly detrimental to human well-being and health and are, therefore, considered as an ‘evolving type of crime’ [96]. Consequently, any mistakes, either intentional or inadvertent, ought to be cancelled out as soon as detected. Retraction is a mechanism expected to offset the negative consequences of scientific misconduct and mistakes. This gives rise to the question of how the mechanism has been successful in achieving its ultimate goals. According to research findings, withdrawn papers continue to receive citations [94,97]. However, no studies were found to compare the retracted papers with their non-retracted peers dealing with the same topics. To re-investigate the phenomenon, the present study used a matched-pairs research design to compare the obsolescence trends and citation counts of the retracted and non-retracted papers. 
The results of the regression analyses showed that the retracted and non-retracted groups of articles show similarities in their obsolescence trend, in that both reach their peak points in the same ages (third year of publication) and adhere to an exponential model in their annual trends after the peaks (Figure 1). The former group achieves its peak at a considerably lower point, although it shows a rather positive increasing trend in their Citation Geo-Mean before retraction. However, the group starts to get obsolete after retraction with an even more accelerated pace compared with that of its non-retracted rival group. 

The results of the GEE analyses revealed that the traditional citation counts of the papers in the sample are affected by their retraction status, life cycle, life after retraction and average JIF, which is in line with existing knowledge [2,14,98]. The positive effect of citation age relative to the retraction date indicates that the later a paper is retracted, the higher its citation count; this is not a strong factor, however. Instead, according to the B coefficients of the model, the 'non-retracted' and 'after retraction' categories are the strongest factors positively predicting citation quantity. The pairwise comparison of the factors revealed that non-retracted papers receive more citations than their retracted peers, both before and after retraction. Although both groups experience an increase in citations in the second phase of their lives (i.e. after retraction), the retracted papers remain lower in citation quantity even in this phase.

Overall, given the obsolescence trend that continues at a faster pace after retraction, one may conclude that the scientific community reduces its recognition of a paper after the public announcement that its claims are fallible. In other words, the retraction mechanism is relatively successful in preventing the spread of fallible information. However, the fact that citations to retracted papers are more numerous after retraction than before reveals that the retraction mechanism does not completely eradicate, but merely attenuates, the negative consequences of erroneous and worthless outputs. The situation appears unchanged from that reported in 1990 by Pfeifer and Snodgrass [60], who observed that citations to retracted papers were reduced, but not 'effectively purged'.

The low quality of papers cannot be completely hidden from the sharp eyes of judicious scholars. That retracted papers received relatively fewer citations before retraction than their non-retracted peers, whether in the same period or in the after-retraction phase, suggests that some potential citations were already being withheld. Why, then, does withdrawal fail to lead the scientific community to eradicate citation completely? The answer may lie in citations that are to some degree inevitable or shallow, arising from factors such as coincidence and negligence. On the one hand, some citing papers may be accepted or published at the same time their cited articles are announced to be retracted; the citation is thus inevitably released before, or at the time of, the authors' awareness of the official retraction announcement. On the other hand, ongoing citation of retracted papers can be attributed to a kind of superficial impact. Because scholars are not necessarily scrupulous and conscientious in their citation habits, they may choose and cite easily accessible items; rely on the most visible and shortest representation of an item (e.g. abstracts, second-hand citations and snippets) without digging into its details; cite tactically (e.g. perfunctory citations given out of politeness, policy or piety); or cite merely to provide background and introductory information. Clearly, such citations cannot signify a profound impact. Moreover, the free (or low-cost), widespread online availability of a wide range of materials puts content of varying validity and authority on the same level of accessibility, and hence on the same level of credibility in the minds of Internet users [99]. This may produce a kind of passive impact characterised by users' loss of control (or of the need for control) over their information-seeking behaviour.
This is reflected in users' lack of critical evaluation knowledge and skills [99], their unwillingness to undertake extensive efforts to verify the credibility of online information, and their rare and occasional use of information quality criteria [100,101].

According to the findings related to the social metrics, non-retraction positively predicts social impact as measured by Mendeley Readership, while it negatively predicts impact as measured by Blog Mentions or Tweets, though the effect is not significant for the latter. This is in line with previous findings confirming a high correlation between papers' quality and their Mendeley Readership [26,40,41] and a low correlation with Tweet counts [34]. Retracted papers have significantly lower Mendeley Readership than their non-retracted peers. The positive association of retraction with Tweets and Blog Mentions, and its negative association with Mendeley Readership, may seem paradoxical at first glance. The paradox resolves, however, once the differences between the social networks in nature and function are taken into consideration. Mendeley is a reference manager: users probably added the retracted articles before retraction and then never revisited their records to delete them. It would therefore be interesting to investigate further how retracted articles are added to Mendeley libraries before and after retraction. Moreover, as Mendeley is an online scholarly social network devoted to scientific research and reference management [102], it is more scientific in nature, and it is unsurprising that its users prove more prudent when confronting poor-quality papers. Twitter and Blogs, by contrast, are relatively more public and popular in nature [103], with the potential to attract lay audiences [104]. Consequently, there is a contamination risk: retracted papers may be disseminated among the public by non-expert users with low information literacy and evaluation skills.
On the other hand, social users may employ the microblogging and sharing facilities of networks such as Twitter or Blogs to broadcast, discuss and perhaps warn about a new retraction. The Retraction Watch blog [48], devoted to discussion of retracted papers, is an obvious instance. As a result, a social post linking to a retracted paper and announcing its retraction can gain momentum, go viral and produce a high social impact for the retracted article. From this angle, the increase in tweets about retracted articles is not harmful but constructive, in the sense that it helps readers distinguish valid from invalid papers. This raises the question of how social networking functions with respect to fake and fallible scientific claims: does it leverage their diffusion, or does it promote public watchfulness? Is it possible that social users publicise a retraction and warn against the retracted article in their Tweets or blogs while avoiding recording it in their Mendeley profiles? The varied and mixed motivations of social mentioning require further study to shed light on the real societal impact of retracted papers.

On this basis, the results of the present study argue for enhancing information and media literacy, especially training in credibility assessment [99]. They also highlight the need for a more watchful and reliable reviewing system to detect and weed out poor-quality manuscripts before publication, and for a highly visible and transparent system of alerting and raising awareness about retracted items, as also proposed by Korpela [63].

The present research has some limitations. Given the relatively small size of the sample, the results are not generalisable and should be interpreted with caution. Moreover, retraction reasons, which are not taken into consideration here, differ in gravity: fraud, for example, jeopardises scientific authenticity and ethics more seriously than accidental mistakes or authorship conflicts, so citations to retracted articles are not all of the same importance. It is therefore necessary to repeat the research taking withdrawal reasons into account and comparing the impacts of retracted papers categorised by the gravity of their retraction reasons. Furthermore, the retracted articles proved equal or higher in societal impact in terms of Tweets and Blog Mentions compared with their non-retracted counterparts, while having lower Mendeley Readership. This calls for detailed opinion mining to elucidate the motivations of social users in mentioning them.

20 July 2019

Journal Rankings and Workload

'Ranking Legal Publications: The Israeli Inter-University Committee Report' by Michael Birnhack, Ronen Perry and Doron Teichman comments
 The Report offers a global ranking of academic legal publications, covering more than 900 outlets, and using a four-tier categorization. The ranking is based on a combined quantitative and qualitative methodology. The Report was composed in the context of the Israeli academic system, but the methodology and the results are not jurisdiction-specific.  
Evaluating academic publications is a never-ending challenge. Such evaluation is an integral part of internal hiring, promotion, and tenure procedures, and of external funding decisions and institutional rankings. The proper way to evaluate academic publications has been the subject of fierce debate. The traditional method for academic evaluation is specific review of each publication, assessing its originality, rigor, and significance. This method, known as "peer-review", is often difficult to perform and might be subjective and biased. These concerns have generated an increased interest in the use of quantitative indicators in research evaluation. However, notwithstanding its scientific allure, the use of quantitative measures to assess research has been heavily criticized by the academic community for losing sight of the intrinsic value of academic work, for ignoring distinct "citation communities" in various fields, and for creating perverse incentives that could actually undermine scientific innovation and reward mediocre work. 
Evaluation of legal scholarship faces particular challenges due to the absence of comprehensive, universally endorsed, quantitative rankings of law journals and the fundamental bifurcation into peer-reviewed and student-edited journals. The challenges are further complicated in non-English speaking jurisdictions, such as Israel, where scholars publish in both their local language and English, in domestic and foreign journals. 
The current Report aims to respond to some of the criticism of quantitative measures and to address the distinctive characteristics of the academic legal domain by proposing a ranking of legal publications that integrates various qualitative and quantitative measures. The Report is the product of more than two years of work, initiated by the Deans of four Israeli law schools operating within public universities. It presents the background to the Committee's work, outlines general principles, and explains the scope of the current project and the ranking methodology. The Appendix contains the proposed ranking.

'Research workloads in Australian universities' by John Kenny and Andrew Fluck in (2018) 60(2) Australian Universities' Review 25 comments 

This article provides insight into the nature of research workload allocation for Australian academics. It explores the distinction between research performance and research workload allocation. Research performance can be judged at an institutional level, a work group level or an individual level. The process by which an institution's research performance is judged is not necessarily suitable at the level of the individual academic. The research performance of individual academics is based on their 'research output', in the form of publications, grants or supervision of research students, but historically, little attention has been paid to the 'input', or the time required to achieve these outputs. To determine the real costs of research, and to examine academic working conditions, this paper argues that a clear distinction must be made: 'output' concerns research performance, whereas 'input' concerns research workload allocation. What is needed, therefore, is a suite of reasonable time allocations which can be associated with research activities, as is the case for teaching-related activities. The paper analyses data from an online survey, circulated to academics across Australia in 2016, in which staff estimated the typical time spent on a wide range of research-related tasks. The findings from the 2059 respondents show staff strongly support a transparent and holistic approach to workload planning which acknowledges the full range of activities they undertake. Analysis of the times associated with the research tasks led to the development of a table of suggested time estimates, based on the median values, for many common research activities.

05 January 2017

Regulation Metrics

'Measuring the Temperature and Diversity of the U.S. Regulatory Ecosystem' by Michael James Bommarito II and Daniel Martin Katz comments
Over the last 23 years, the U.S. Securities and Exchange Commission has required over 34,000 companies to file over 165,000 annual reports. These reports, the so-called “Form 10-Ks,” contain a characterization of a company’s financial performance and its risks, including the regulatory environment in which a company operates. In this paper, we analyze over 4.5 million references to U.S. Federal Acts and Agencies contained within these reports to build a mean-field measurement of temperature and diversity in this regulatory ecosystem. While individuals across the political, economic, and academic world frequently refer to trends in this regulatory ecosystem, there has been far less attention paid to supporting such claims with large-scale, longitudinal data. 
In this paper, we document an increase in the regulatory energy per filing, i.e., a warming “temperature.” We also find that the diversity of the regulatory ecosystem has been increasing over the past two decades, as measured by the dimensionality of the regulatory space and distance between the “regulatory bitstrings” of companies. This measurement framework and its ongoing application contribute an important step towards improving academic and policy discussions around legal complexity and the regulation of large-scale human techno-social systems.
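The "distance between the 'regulatory bitstrings' of companies" lends itself to a small illustration. The sketch below is not the authors' code: the company names, the five-Act vector and the choice of mean pairwise Hamming distance as the diversity summary are all invented for demonstration.

```python
from itertools import combinations

# Each "regulatory bitstring" marks which Federal Acts a company's 10-K
# references (one bit per Act); all names and values here are invented.
bitstrings = {
    "CompanyA": [1, 0, 1, 1, 0],
    "CompanyB": [1, 1, 0, 1, 0],
    "CompanyC": [0, 1, 1, 0, 1],
}

def hamming(u, v):
    """Number of Acts referenced by exactly one of the two companies."""
    return sum(a != b for a, b in zip(u, v))

def mean_pairwise_distance(strings):
    """Ecosystem 'diversity': average Hamming distance over all company pairs."""
    ds = [hamming(u, v) for u, v in combinations(strings.values(), 2)]
    return sum(ds) / len(ds)

print(mean_pairwise_distance(bitstrings))
```

Tracking such a mean distance year by year would show whether filers' regulatory exposures are diverging, one plausible reading of the paper's claim of rising diversity.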
'The Economic Burden of Prescription Opioid Overdose, Abuse, and Dependence in the United States, 2013' by Curtis S Florence, Chao Zhou, Feijung Luo and Likang Xu in (2016) 54(10) Medical Care comments
It is important to understand the magnitude and distribution of the economic burden of prescription opioid overdose, abuse, and dependence to inform clinical practice, research, and other decision makers. Decision makers choosing approaches to address this epidemic need cost information to evaluate the cost effectiveness of their choices.
The authors sought to estimate the economic burden of prescription opioid overdose, abuse, and dependence from a societal perspective. They conclude that the total economic burden of fatal overdose and of abuse and dependence of prescription opioids is around US$78.5 billion, with over one third due to increased health care and substance abuse treatment costs (US$28.9 billion) and approximately one quarter of the cost borne by the public sector in health care, substance abuse treatment, and criminal justice costs.

20 May 2016

Australian Legal Bibliometrics

The incisive and important 'A Report into Methodologies Underpinning Australian Law Journal Rankings. Prepared for the Council of Australian Law Deans (CALD)' (UNSW Law Research Paper No. 2016-30) by Kathy Bowrey comments
Law schools face significant institutional pressure to adopt journal ranking lists that are used to inform comparative assessment of the Faculty, School and individual researcher performance.
CALD has commissioned a written report that:
1. critically evaluates the methodology of up to eight Law journal lists or rankings agreed to by the parties; 
2. makes recommendations about the suitability of the lists to act as a proxy for academic research quality, including suggesting revisions or modifications to methodology and reference to how to maintain the currency of any proposed list, as appropriate; 
3. comments on the utility of the lists in view of the suggested purposes for which they may be used.
This report is in four parts.
Part One provides a brief overview of bibliometric databases and indices currently in use in the higher education sector to assess research productivity, quality and influence. New non-citation based alternative metrics for the Humanities and perception studies are also discussed. There is also discussion of the Washington & Lee journal ranking list, which underpinned the original CALD/ERA lists. An updated version informed the Deakin List.
This part also contains comments on difficulties in applying existing bibliometrics to the output of Australian legal researchers.
Part Two provides analysis of the following Australian law journal ranking methodologies and lists: CALD list (2009); ERA 2010; Australian Business Deans Council Journal Quality List 2013; Deakin University Law Journal Rankings; University Of Tasmania Law Journal Rankings. This part also includes tables that allow for review of the performance of particular law journals across the various ranking lists provided.
Part Three addresses new developments in research assessment and current critical literature on the use and misuse of metrics.
Part Four provides recommendations to guide future discussion of the use of metrics to assess legal research.

25 October 2015

Legal Publication Bibliometrics

'Fashions and Methodology' by Reza Dibadj in Rethinking Legal Scholarship: A Transatlantic Interchange (Forthcoming) [PDF] comments
I attempt in this chapter to build on prior empirical work where I compared who and what was being published in top law reviews in three different jurisdictions: the United States, Britain, and France.
Part I begins by discussing the key empirical findings of a research project that analyzed a sample of legal publications in the United States, Britain, and France. As discussed, the work proceeded in two phases: first, identifying what a “top” journal and an “elite” law school might be in each jurisdiction; second, analyzing each article according to author characteristics, legal method employed, and subject matter. Part II then draws implications from this preliminary work, attempting to relate the empirical results to the academic legal culture in each jurisdiction. Put simply, can one try to find meaning in these results?
After having surveyed what is being published in “top” law journals across three different jurisdictions, as well as trying to explore links between these results and legal culture, Part III tries to draw some implications. At least two important points emerge. First, as legal academics we need to pay more attention to quality and how to measure it. Yet existing quality metrics — journal rankings, peer review, bibliometric citations, and the like — are by themselves at best incomplete and at worst misleading. As such, I argue that quality cannot be understood without the threshold concept of methodology. Entering the dangerous territory of linking methodology with quality becomes all but inevitable if we hope to begin improving the state of legal research. Ironically, what is deeply missing in this literature is a focus on methodology. While it becomes extraordinarily difficult, if not impossible, to generalize across jurisdictions, there remains a central question: what may Americans learn from Europeans when it comes to legal research, and vice versa? Methodology can begin to provide a framework to address this question.
An Australian perspective is provided in 'Time and chance and the prevailing orthodoxy in legal academia happeneth to them all - a study of the top law journals of Australia and New Zealand' by James Allan and Anthony Senanayake in (2012) 33 Adelaide Law Review 519.