26 April 2019


'Bullshitters. Who Are They and What Do We Know about Their Lives?' (IZA Institute of Labor Economics, 2019) by John Jerrim, Phil Parker and Nikki Shure comments:
‘Bullshitters’ are individuals who claim knowledge or expertise in an area where they actually have little experience or skill. Despite this being a well-known and widespread social phenomenon, relatively few large-scale empirical studies have been conducted into this issue. This paper attempts to fill this gap in the literature by examining teenagers’ propensity to claim expertise in three mathematics constructs that do not really exist. Using Programme for International Student Assessment (PISA) data from nine Anglophone countries and over 40,000 young people, we find substantial differences in young people’s tendency to bullshit across countries, genders and socio-economic groups. Bullshitters are also found to exhibit high levels of overconfidence and believe they work hard, persevere at tasks, and are popular amongst their peers. Together this provides important new insight into who bullshitters are and the type of survey responses that they provide. 
The authors state:
In his seminal essay-turned-book On Bullshit, Frankfurt (2005) defines and discusses the seemingly omnipresent cultural phenomenon of bullshit. He begins by stating that “One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share” (Frankfurt, 2005: 1). His book spent weeks on the New York Times’ bestsellers list in 2005 and has recently been cited in the post-truth age to better understand Donald Trump (e.g. Jeffries, 2017; Heer, 2018; Yglesias, 2018). 
Other philosophers have since expanded on his work, most notably G. A. Cohen in his essay “Deeper into Bullshit” (Cohen 2002), but there has been limited large scale empirical research into this issue. We fill this important gap in the literature by providing new cross-national evidence on who is more likely to bullshit and how these individuals view their abilities and social status. This is an important first step in better understanding the seemingly ubiquitous phenomenon of bullshit. 
We make use of an influential cross-national education survey administered every three years by the Organisation for Economic Cooperation and Development (OECD), namely the Programme for International Student Assessment (PISA). This data is commonly used by the OECD and education researchers to benchmark education systems or the performance of specific subgroups of pupils (e.g. Anderson et al., 2007; Jerrim and Choi, 2014; Parker et al., 2018), but has never been used to compare participants across countries in terms of their proclivity to bullshit. This paper fills this important gap in the literature. Previous academic work on bullshit has been limited and mostly theoretical. Black (1983) edited a collection of essays on “humbug”, the predecessor of bullshit, which he defines as “deceptive misrepresentation, short of lying, especially by pretentious word or deed, of somebody's own thoughts, feelings or attitudes” (Black, 1983: 23). Frankfurt (2005) is the first theoretical treatment of the concept of “bullshit” and he situates it in terms of previous philosophical traditions. A crucial aspect of bullshitting in Frankfurt’s work is the fact that bullshitters have no concern for the truth, which is different than a purposeful lie (Frankfurt, 2005: 54). Cohen responds to Frankfurt’s essay and focuses on a slightly different definition of bullshit where “the character of the process that produces bullshit is immaterial” (Cohen, 2002: 2). 
Petrocelli (2018) is one of the few studies to explore bullshitting empirically. He looks at the “antecedents of bullshit”, namely: topic knowledge, the obligation to provide an opinion hypothesis (i.e. individuals are more likely to bullshit when they feel social pressure to provide a response) and the “ease of passing bullshit hypothesis” (i.e. people are more willing to bullshit when they believe they will get away with it). He finds that participants are more likely to bullshit when there is pressure to provide an opinion, irrespective of their actual level of knowledge. Petrocelli also concludes that individuals are more likely to bullshit when they believe they can get away with it, and less likely to bullshit when they know they will be held accountable for the responses they provide (Petrocelli, 2018). His work uses smaller sample sizes than our work (N ≈ 500) and does not answer the question of who bullshitters are and how they view their abilities or social standing. 
Pennycook et al. (2015) is the only other empirical study focused on bullshit. They present experiment participants with “pseudo-profound bullshit” - vacuous statements constructed out of buzzwords - to ascertain when they can differentiate bullshit from meaningful statements and create a Bullshit Receptivity (BSR) scale. Their results point to the idea that some people may be more receptive towards pseudo-profound bullshit, especially if they have a more intuitive cognitive style or believe in the supernatural (Pennycook et al., 2015). Their study focuses on ability to detect bullshit and the mechanisms behind why some people cannot detect bullshit, rather than proclivity to bullshit, which is the focus of this paper. 
In psychology, there has been a related literature on overconfidence and overclaiming. Moore and Healy (2008) provide a thorough overview of existing studies on overconfidence and distinguish between “overestimation”, “overplacement”, and “overprecision” as three distinct types of overconfidence. Overestimation occurs when individuals rate their ability as higher than it is actually observed to be, overplacement occurs when individuals rate themselves relatively higher than their actual position in a distribution, and overprecision occurs when individuals assign narrow confidence intervals to an incorrect answer, indicating overconfidence in their ability to answer questions correctly (Moore and Healy, 2008). The type of questions we use to construct our bullshit scale are closely related to overestimation and overprecision since the individuals need to not only identify whether or not they are familiar with a mathematical concept, but also assess their degree of familiarity. 
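The three overconfidence types can be made concrete with a toy numeric illustration; every number below is invented for the example, not taken from the paper.

```python
# Toy illustration of the three overconfidence types (all numbers invented).

# Overestimation: rating one's ability above what is actually observed.
actual_score = 6        # questions answered correctly out of 10
estimated_score = 8     # the respondent's own estimate
overestimation = estimated_score - actual_score

# Overplacement: rating oneself above one's true position in a distribution.
actual_percentile = 40      # true standing in the group
claimed_percentile = 75     # self-placed standing
overplacement = claimed_percentile - actual_percentile

# Overprecision: a stated confidence interval too narrow to contain the
# true value, indicating excessive certainty in one's answers.
interval = (7.0, 7.5)       # respondent's 90% confidence interval
true_value = 6.2
overprecise = not (interval[0] <= true_value <= interval[1])

print(overestimation, overplacement, overprecise)  # → 2 35 True
```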
Similar to how we define bullshit, overclaiming occurs when individuals assert that they have knowledge of a concept that does not exist. In one of the first studies on overclaiming, Phillips and Clancy (1972) create an index of overclaiming based on how often individuals report consuming a series of new books, television programmes, and movies, all of which were not real products. They use this index to explore the role of social desirability in survey responses. Stanovich and Cunningham (1992) also construct a scale of overclaiming using foils, fake concepts mixed into a list of real concepts, and signal-detection logic for authors and magazines to examine author familiarity. In both of these studies, however, the focus is not on the actual overclaiming index. Randall and Fernandes (1991) also construct an overclaiming index, but use it as a control variable in their analysis of self-reported ethical conduct. 
Paulhus, Harms, Bruce, and Lysy (2003) focus more directly on overclaiming. They construct an overclaiming index using a set of items, of which one-fifth are non-existent, and employ a signal-detection formula to measure overclaiming and actual knowledge. They find that overclaiming is an operationalisation of self-enhancement and that narcissists are more likely to overclaim than non-narcissists (Paulhus et al., 2003). Atir, Rosenzweig, and Dunning (2015) find that people who perceive their expertise in various domains favourably are more likely to overclaim. Pennycook and Rand (2018) find that overclaimers perceive fake news to be more accurate. Similar to Atir et al. (2015), we find that young people who score higher on our bullshit index also have high levels of confidence in their mathematics self-efficacy and problem-solving skills. 
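The signal-detection approach to overclaiming can be sketched as follows. The item names and responses here are invented, and the simple hit-rate versus false-alarm-rate comparison is a stand-in for the exact formula used by Paulhus et al. (2003), not a reproduction of it.

```python
# Hedged sketch of a signal-detection style overclaiming index:
# genuine concepts mixed with non-existent foils, scored on whether the
# respondent claims familiarity. Items and answers are invented.

real_items = ["polygon", "vectors", "probability", "linear equation"]
foils = ["proper numbers", "subjunctive scaling", "declarative fractions"]

claimed = {  # 1 = respondent claims familiarity with the concept
    "polygon": 1, "vectors": 1, "probability": 1, "linear equation": 0,
    "proper numbers": 1, "subjunctive scaling": 1, "declarative fractions": 0,
}

hit_rate = sum(claimed[i] for i in real_items) / len(real_items)
false_alarm_rate = sum(claimed[f] for f in foils) / len(foils)

# In this framing, the false-alarm rate on foils is the overclaiming
# (bias) index, while hits minus false alarms proxies actual knowledge.
overclaiming = false_alarm_rate
knowledge = hit_rate - false_alarm_rate
print(round(overclaiming, 2), round(knowledge, 2))
```

A respondent who claims familiarity with many foils scores high on overclaiming regardless of how many genuine concepts they also recognise.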
We contribute to the existing literature on the related issues of bullshitting, overconfidence and overclaiming in three important ways. First, we use a large sample of 40,550 young people from nine Anglophone countries to examine bullshit, which enables us to dig deeper into the differences between subgroups (e.g. boys versus girls, advantaged vs. disadvantaged young people). Second, we provide the first internationally comparable evidence on bullshitting. We use confirmatory factor analysis to construct our scale and test for three hierarchical levels of measurement invariance (configural, metric and scalar). This allows us to compare average scores on our bullshit scale across countries in a robust and meaningful way. Finally, we also examine the relationship between bullshitting and various other psychological traits, including overconfidence, self-perceptions of popularity amongst peers and their reported levels of perseverance. Unlike many previous studies, we are able to investigate differences between bullshitters and non-bullshitters conditional upon a range of potential confounding characteristics (including a high-quality measure of educational achievement) providing stronger evidence that bullshitting really is independently related to these important psychological traits. 
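The core of the scale-building step can be illustrated with a deliberately simplified stand-in: standardising each respondent's mean claimed familiarity with the non-existent constructs. The authors actually use confirmatory factor analysis with configural, metric and scalar invariance testing; this z-score version, with invented responses, only conveys the underlying idea.

```python
# Simplified stand-in for the paper's bullshit scale (NOT the authors'
# CFA-based method): standardise each respondent's mean self-reported
# familiarity (1-5) with three non-existent constructs.
# All responses below are invented for illustration.
from statistics import mean, pstdev

responses = [      # rows: respondents; columns: three foil items, rated 1-5
    [1, 1, 2],
    [3, 2, 4],
    [5, 5, 4],
    [2, 1, 1],
]

person_means = [mean(r) for r in responses]
grand_mean = mean(person_means)
spread = pstdev(person_means)

# Higher scores = stronger claims of expertise in concepts that do not exist.
bullshit_scores = [(m - grand_mean) / spread for m in person_means]
```

With a latent-variable model in place of the raw mean, invariance testing then checks that the scale behaves equivalently across countries before group averages are compared.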
Our findings support the view that young men are, on average, bigger bullshitters than young women, and that socio-economically advantaged teenagers are more likely to be bullshitters than their disadvantaged peers. There is also important cross-national variation, with young people in North America more likely to make exaggerated claims about their knowledge and abilities than those from Europe. Finally, we illustrate how bullshitters display overconfidence in their skills, and are more likely to report that they work hard when challenged and are popular at school than other young people. 
The paper now proceeds as follows. Section 2 provides an overview of the Programme for International Student Assessment (PISA) 2012 data and our empirical methodology. This is accompanied by Appendix A, where we discuss how we test for measurement invariance of the latent bullshit scale across groups. Results are then presented in section 3, with discussion and conclusions following in section 4.