Hate the Player and the Game? How Hate Speech Spreads in Online Gaming Communities

Since its inception, social media has been a vital tool for democracy, serving as an indispensable platform for people to exercise their rights to speech, expression, and assembly. However, with the recent surge in hate crimes like the 2017 Charlottesville attack, policymakers and stakeholders are seeking to prevent the exploitation of such platforms by hate groups.

In 2017, shortly after the Charlottesville attack that led to the murder of Heather Heyer, Unicorn Riot leaked conversations that white supremacists and neo-Nazis had held on Discord. The leak prompted Discord, one of the largest social gaming platforms, to crack down on far-right activity. But Discord is only one of many social gaming platforms where hate speech is spreading.

To investigate how hate groups target video gaming communities, CPR Science and Technology writer Samuel Israel spoke with Keegan Hankes, the interim research director at the Southern Poverty Law Center’s (SPLC) Intelligence Project and a regular contributor to Hatewatch. Hankes has worked for the SPLC since 2013 tracking neo-Confederates, white nationalists, and the alt-right. He is also involved in the SPLC’s data-driven investigations, particularly in identifying, collecting, and analyzing disparate intelligence streams related to domestic far-right extremism. He joined the SPLC after earning an interdisciplinary humanities degree from the University of Chicago’s New Collegiate Division.

The Southern Poverty Law Center, based in Alabama with offices in Florida, Georgia, Louisiana, Mississippi, and Washington, D.C., is a nonprofit civil rights organization dedicated to fighting hate and bigotry and seeking justice for the most vulnerable members of society.

The following interview has been condensed and edited for clarity.

CPR: Reporting on how hate groups inserted themselves into the gaming community began in 2017 after Charlottesville. From your understanding, how did extremists find their way into games and gaming platforms?

Hankes: A large percentage of the U.S. population plays video games in some form, whether on a mobile device, a computer, or a video game console. You have a wide demographic of the U.S. population playing games in some way.

However, one of the most prominent demographics in video games is white men, especially on online platforms. Because of this, extremists go onto gaming platforms and try to spread their messages through forums and conversation, believing they have a chance of finding like-minded people. These online platforms, like Discord, are where white supremacists can play a more active role; one example was white supremacists organizing and amplifying their voices by praising the shooter of the Christchurch massacre.

CPR: Do you have any recommendations for gaming companies on what they can do to prevent the spread of hate speech?

Hankes: When tech companies put very strong terms of service in place early, as a lot of the major ones now have and many younger ones are learning to do, they can prevent a lot of abuse from white supremacists. Additionally, enforcement is a major component of preventing the spread of hate speech. Twitter is a cautionary example here: it dragged its feet in effectively moderating white supremacists. When that happens, you can begin to see the problem metastasize.

My recommendation is to create and enforce terms of service early on, so that extremist groups and individuals are less likely to take root. It can also be pretty devastating for a white supremacist community when all of its infrastructure of posts and comments gets deleted from a platform, because when that happens the community may not feel motivated enough to rebuild.

CPR: You talked about Twitter. From my understanding, a lot of the platform’s policies seem to be enforced by users like the SPLC bringing these issues to the attention of the platform itself. Has this pattern been the same on video gaming platforms?

Hankes: In general, that’s a good observation of what is happening. In my experience, the platforms have had other people do the work of monitoring and reporting content to them. It’s much more convenient for the platform, and it costs less time and money. It also shifts the burden of making hard choices about where your tech organization stands when content can be flagged by other parties like nonprofits.

It gets more complicated when you talk about video games, however, for a few reasons. Video gaming platforms are inherently less public than platforms like Twitter and Facebook, where groups grow by being visible. If you’re trying to track hate speech, private groups like the ones on Discord are harder to find. That can be better than offering extremists a megaphone, which major tech companies sometimes do, but this kind of organization presents its own challenges.

CPR: Courts have repeatedly afforded hate speech constitutional protections, which limits what the government can do to mitigate its harms. What options does the government have to prevent the spread of extremist messaging?

Hankes: The discussion of what role the government should play in moderating or censoring hate speech has been going on for a long time, which is frustrating for those who are invested in the issue. I don’t know how familiar you are with Section 230 of the Communications Decency Act, but it basically provides a liability shield for content posted on a platform. For instance, if someone publishes racist hate speech on Facebook, I can’t sue Facebook for allowing that content to be shown, and Facebook is not obligated to remove it. There is definitely a debate playing out about the role of government on this issue, and I think that on a number of these fronts we are going to see government get involved in some way or another.
