'Bankrolling Bigotry: An Overview of the Online Funding Strategies of American Hate Groups', a report by the Global Disinformation Index and the Institute for Strategic Dialogue, comments:
Hatred is surging across the United States. Figures released by the Southern Poverty Law Center (SPLC) suggest that the number of hate groups rose steadily between 2014 and 2018, including a 55% growth in the number of white nationalist groups active between 2017 and 2019. In 2018 the FBI announced that hate crimes were at their highest volume in 16 years, and recent analysis from the Center for Strategic and International Studies identifies white supremacists as the most significant terror threat facing the US. This matches global trends where white supremacist terrorism has spiked by 320%, in part buoyed by a broad morass of hate against communities including Jews, Muslims, immigrants, people of colour, people with disabilities and the lesbian, gay, bisexual, transgender, queer and others (LGBTQ+) community.
This hatred has come to the streets over the course of 2020. As civil rights protesters have demonstrated across the country, groups such as the Proud Boys have incited hatred and participated in violent clashes. Such hatred threatens the safety, security and wellbeing of minority communities, and societal harmony writ large.
There is a clear need for greater efforts to be made to tackle hate groups. While these groups remain free to mobilise they can target minority communities with hatred and violence, as well as proselytise and recruit new members. The struggle against these actors plays out in many ways. Civil society groups produce counter-messaging, which undermines the propaganda of hate; specialist practitioners work to de-radicalise individuals involved in extremist movements; and activists and academics build evidence bases and advocate for changes from social media platforms to improve and enforce their policies against hate-mongers.
Another area where there has been successful activism over recent years is in limiting the ability of hate groups to raise funds. Advocacy groups like SumOfUs have helped wage campaigns that put pressure on companies whose products are used to facilitate the funding of hate. A number of individuals involved in promoting hatred have been banned from platforms such as PayPal, limiting their ability to make money or raise donations.
The extent to which hate groups use different platforms to raise funds is currently not widely understood, however, so efforts to limit this activity are not always effective. To improve our ability to check the scale and nature of online funding by hate groups, the Institute for Strategic Dialogue (ISD) and Global Disinformation Index (GDI) have analysed the digital footprints of 73 US-based hate groups, identified through existing studies conducted by the SPLC and Anti-Defamation League (ADL), with additional coding and vetting by ISD and GDI analysts. These groups were then assessed for the extent to which they used 54 funding mechanisms. The research aimed to map out the online infrastructure behind hate groups’ financing and fundraising in order to support efforts to defund and therefore disempower hate movements in the US.
Through this research, we found that hate groups use popular platforms such as PayPal, Facebook Fundraisers and Stripe, even though these platforms often have explicit policies supposedly preventing their use to facilitate hate or violence. Through this work we have improved our understanding of how different types of groups raise money using a broad spectrum of online platforms and services. This work has informed a series of recommendations which, if enacted, could diminish the ability of those who seek to spread hatred to fund their activities.
The key findings are:
• We analysed the digital footprints of 73 US-based groups involved in promoting hatred against individuals on the basis of their gender, sexuality, race, religion or nationality. We checked for their use of 54 online fundraising mechanisms, which included 47 platforms, 5 different cryptocurrencies and the presence of membership or consulting services, ultimately finding 191 instances of hate groups using online fundraising services to support their activity.
• The platform most commonly used by the hate groups studied was Charity Navigator, an organisation that assesses charities in the US and ranks them against a set of criteria; currently it is used by 29 groups. The second most commonly used platform was PayPal, currently used by 21 of the groups we analysed, followed by Facebook Fundraisers, currently used by 19 groups. Charity Navigator and Facebook Fundraisers are both powered by Network for Good, a fundraising software company that allows any non-profit with a profile on the non-profit information service Guidestar to use its service to raise funds.
• A number of the hate groups analysed in this report have non-profit status in the US: 32 of the 73 (44%) hate groups have either 501(c)(3) or 501(c)(4) tax status in the US. This potentially helps legitimise hate groups and provides them with avenues through which to raise money.
• More than one-third (38%) of the platforms analysed do not have a policy which explicitly prohibits hate groups from using their services. A majority of the platforms included in the investigation (29 of the 47, or 62%) had policies designed to push back against or ban hateful activity in some way.
• Hate groups used 24 of those 29 (83%) platforms with policies against hate speech, showing a failure to implement and enforce these policies.
• Different types of hate groups prioritised different funding mechanisms. When identifying hate groups for analysis we subcategorised them according to their ideology. Through this we found that white supremacist organisations were least likely to use funding mechanisms such as onsite donation forms, crowdfunding mechanisms or onsite retail, instead preferring to use cryptocurrency donations. This potentially reflects proactive policy enforcement by funding platforms, suggesting that policy enforcement can become an effective tool limiting the activities of hate groups online. It may also be a result of the preferred mobilisation strategies of these groups, which prioritise decentralised organisation and the incitement of violence. Conversely, anti-LGBTQ+ groups, which are in some instances well-established organisations that operate under the banner of legitimate religious groups, had the most diverse funding strategies.
The recommendations are:
• Platforms should adopt policies which limit their use by hate groups: We found that 38% of the platforms studied did not have any policies in place prohibiting their use by hate groups. Furthermore, some platforms only had limited policies in place prohibiting violent organisations, but ignored their use by non-violent hate groups. The mass proliferation of hatred against minority communities helps inspire violence and fuels community polarisation and societal destabilisation. We recommend that platforms that facilitate organisational fundraising adopt comprehensive policies banning their use by groups that promote hatred against and discrimination of individuals on the basis of their identity, including gender, sexuality, race, religion, disability or nationality.
• Where platforms do have policies to prevent the abuse of services by hate groups, they should be more proactive and comprehensive in their enforcement: Hate groups used 83% of the platforms we identified that had policies in place around hatred. It is essential that organisations are more proactive in the enforcement of their terms of service so that they live up to the values which they publicly express, and limit their abuse by hate groups. This might include greater resource allocation to safety and policy teams dealing with such issues on the platforms, or proactive outreach to experts who can provide support in identifying and analysing the activity of hate groups on the platforms.
• Industry bodies such as the Electronic Transactions Association or the Merchant Acquirers’ Committee should take on a leadership role in developing standard-setting guidelines about hate and extremism in order to encourage the broad adoption of policies to limit online fundraising tools for hate groups: Industry-standard guidelines should be drawn up to help guide a more cohesive and uniform response to the misuse of financial technology by extremists at a policy level.
• Congressional debate on whether such groups should qualify for non-profit tax status: 44% of the hate groups in our study are registered non-profit organisations in the US. An Internal Revenue Service (IRS) designation may act as a sort of kitemark, making platforms and payment providers wary of acting against a group. Through our research we found evidence that being registered as non-profits helped the groups studied raise funds. Following the outcome of the 2013 “IRS targeting scandal”, which found that the IRS had used inappropriate and politically motivated criteria to identify tax-exempt applications, it is believed that the debate over the non-profit status of groups that discriminate on the basis of immutable characteristics should fall to Congress.