Redefining Online Gaming Culture: Combating Toxicity and Harm with Restorative Justice

 "To function humanely, societies must establish effective social safeguards against the misuse of institutional power for exploitive and destructive purposes. It should be made difficult for people to remove humanity from their conduct." (Bandura, 1992, p. 116).


Introduction:

Massively Multiplayer Online Games (MMOs) are a highly popular contemporary form of entertainment, with 65% of US gamers playing daily. Online, players engage in a range of positive relationships and perform a wide variety of co-operative actions, such as competing, working in teams, trading, mentoring, and messaging and speaking in-game, building relationships with people from all around the world (Shen et al., 2020).

On the flipside, however, players also frequently experience hate and harassment online. In 2020, a representative study of US gamers found that 81% had experienced harassment, with 68% - about 45 million gamers - experiencing more serious forms of harm, such as physical or sexualised threats, stalking and ongoing harassment. Of these, 53% were targeted because of a protected characteristic - gender, race, religion, ability status or sexual orientation. Gabriela Salinas, analysing toxicity in multiplayer video games in Canada, found that 72% of respondents had witnessed toxic behaviour while playing that made them want to stop playing, felt it was ruining their gaming experience, and felt annoyed and disgusted by what they saw.


What is online toxicity? 

Undesirable behaviours often involve trolling, griefing (deliberately interrupting or blocking a fellow player's gameplay in order to frustrate them), cyberbullying, flaming (verbal abuse) and cheating.

These actions fracture gaming communities and reduce player enjoyment, leading 1 in 4 players to avoid particular games and platforms.

Beyond the screen, there is concern that online toxicity may also shape behaviour offline, particularly in younger generations, by altering perceptions of what constitutes normative communicative behaviour (Shen et al., 2020).

Why are MMOs so toxic? 

Researchers think that when people enter an online world they can feel deindividuated: anonymous and detached from everyday life. Ironically, it seems feeling "less like yourself" can make players more likely to conform to group norms, whether these are pro- or antisocial (Chesney et al., 2009). In matches between opposing teams, strangers with no shared history may play against teams drawn from long-standing guilds. How deindividuated players become often relates to the extent that playing in either kind of group decreases their awareness of their own or others' identity, and this differs across individuals. A lack of social cues or sense of social presence, in particular, can give rise to toxic and antisocial behaviour (Shen et al., 2020).

Deindividuation can open the door to selective moral disengagement from toxic behaviour. This idea comes from Albert Bandura, a Canadian-born American psychologist best known for his work on aggression, who suggests that typically we all have standards of “good” and “bad”, “right” and “wrong”. These standards guide us to act in some ways, and warn us of what might happen if we act against them so we can redirect our course. 

However, good people can, and do, disengage from moral self-rules for various reasons, and in various ways. Good people can and do get swept away in a moment and cause harm to others, even when they are well aware that what they are doing is hurtful and harmful. Hartman (2017) suggests that players, even in early adolescence, are well aware that they are playing against other social beings with minds of their own - and that game designers know this too. To overcome taboos against hurting others, designers embed cues that effectively frame violence enacted against seemingly social beings as "okay." This can support feelings of enjoyment and entertainment in violence, and moral disengagement from subjective discomfort such as guilt - encouraging more play and greater revenue.


    Bandura (1992) Selective Moral Disengagement  (Image created for this blog in canva.com)

Working in education with young teenagers,  I've noticed "banter" in class often sounds like banter in gaming chat - "why don't you just go shoot yourself", "I hope you fall in a trench and die", "Bro has serious skills issues", "STFU noob, why don't you just kill yourself already with your insanely ridiculous levels of dumbwitted incompetence". 

When young people are engaged in dialogue about this behaviour, they will often say it's just a normal part of gamer talk, distort the consequences, or diffuse and displace responsibility.

Beres et al. (2021) found that online gamers who normalise "banter" as "just part of the game" rather than a problem score highly on conduct reconstrual (reframing behaviour as harmless or okay) and on distorting consequences. These players are more likely to show toxic disinhibition online. 

But before we start pointing fingers at "problem gamers" with difficult personalities, it's worth noting that research on online trolling has found trolling interactions often include exclusionary language from both trolls AND teammates not identified as trolls, with very rapid transitions from "victim" to "perpetrator" and back again once trolling is identified. That is to say, when trolling begins in gaming, usually everyone is at it, or at least playing some role in maintaining it. 

So what can we do? Research suggests that having a strong moral identity, or sense of the standards of "right" and "wrong", can make it less likely that adolescents will disengage and troll or bully others online (Teng et al., 2017; Wang et al., 2017).

When we find ourselves in situations that might cause us to act in inhumane ways, we can influence ourselves to act differently. On an ongoing basis, we check whether we are acting in line with our own standards of “good” and “bad”, or “right” and “wrong”, and we choose actions based on the effects we think they will have on ourselves and others. We consider: will doing this make me see myself in a good or bad light? We judge whether what we experience matches our standards (or not).

Importantly, we can also choose to activate or disengage from moral self-rules depending on context and, where we decide to disengage, tell ourselves stories about how this is the "right" thing to do in order to soothe any discomfort that arises from acting against our moral self-rules (or values). 

Where this is difficult, we can also consider transformative and restorative approaches.

After all, as we have already seen in this blog series, restorative practices recognise that human beings are sometimes dirty, messy, stupid creatures. Built into this approach is an awareness that good people do bad things, and that while yes, there are sometimes "bad actors" who aim to hurt and cause harm just to watch the world burn, many people who engage in harmful behaviours do so without full awareness, and are eager to repair when they understand the harm they have caused.

These approaches also recognise that in many human conflicts there may be harmful behaviours enacted on "both sides". This, in effect, is often what makes conflict so hard to handle in shame-based, punitive systems, where resolving conflict is so often about answering "who's to blame?" far away from the mirror, pointing fingers at "the other guy".

Trans activist Kai Cheng Thom puts it succinctly:

One of the most painful aspects of conflict involving harm is that we are forced to confront seemingly contradictory truths that threaten our foundational beliefs about who we are. Are we good or bad people? Are we the harmdoer or the harmed? Are we deserving of dignity and forgiveness? Do we have to forgive everyone who has harmed us?

Could this be possible in online gaming communities? Research with the Overwatch community by Xiao, Jhaver and Salehi (2023) asked whether it is possible to address interpersonal harm in online gaming communities in this way. They found that resources and time presented a significant barrier to implementing these approaches, as did a lack of awareness about restorative justice and its intentions. Their findings showed that while Overwatch players act as a community in some regards, when issues arise there can be limited trust, whether between victims and offenders or in the moderators, who would require significant, resource-intensive training to run the process effectively. 
They also pointed to the potential for cultural differences to stymie efforts to approach conflict in this way.

To combat this, they suggest:

  1. A shift from rule-based explanations of moderation decisions to explanations that focus on the impact of actions, encouraging those who cause harm to reflect on the harm caused and opt to take accountability. 
  2. Including victims in the moderation process to ensure they feel heard and supported throughout.
  3. Saving time by allowing pre-conferencing (where moderators meet with each party separately, but not together), giving both parties an opportunity to think things through and reflect on harm for the benefit of the community, even if meeting together isn't appropriate.
  4. Ensuring any conferencing between people who have been harmed and people who have caused harm is safe, by addressing power imbalances directly and establishing clear agreements on how both parties will behave in the meeting.
  5. Supporting and empowering people who have been harmed by encouraging community members to gather to provide emotional support for anyone facing harm, and involving these user groups in changes to content moderation policies.
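For readers who build or moderate gaming platforms, the five suggestions above can be imagined as a case workflow. The following is a minimal, hypothetical sketch only: all class, stage and method names here are my own invention, not from Xiao, Jhaver and Salehi's paper or from any real moderation tool.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical sketch of a restorative moderation case flow.
# Stage and field names are illustrative assumptions, not a real API.

class Stage(Enum):
    REPORTED = auto()
    PRE_CONFERENCE = auto()   # suggestion 3: parties meet moderators separately
    CLOSED = auto()

@dataclass
class RestorativeCase:
    harmed_party: str
    responsible_party: str
    stage: Stage = Stage.REPORTED
    impact_statement: str = ""         # suggestion 1: centre impact, not rules
    accountability_accepted: bool = False
    agreements: list = field(default_factory=list)

    def pre_conference(self, impact_statement: str, accountability: bool):
        # Suggestion 2/3: the victim's account is recorded and the
        # responsible party reflects before any joint meeting.
        self.impact_statement = impact_statement
        self.accountability_accepted = accountability
        self.stage = Stage.PRE_CONFERENCE

    def can_hold_conference(self, safety_agreement_signed: bool) -> bool:
        # Suggestion 4: joint conferencing only with accepted accountability
        # and explicit behavioural agreements in place.
        return self.accountability_accepted and safety_agreement_signed

    def close(self, agreements: list):
        # Suggestion 5 would sit around this: community support and
        # involvement in policy changes, outside the scope of this sketch.
        self.agreements = agreements
        self.stage = Stage.CLOSED

case = RestorativeCase("PlayerA", "PlayerB")
case.pre_conference("The slurs in chat made me stop playing ranked.", True)
print(case.can_hold_conference(safety_agreement_signed=True))  # True
```

The point of the sketch is the ordering: harm and accountability are established separately before any joint meeting is even possible, which mirrors the paper's emphasis on pre-conferencing and safety agreements.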

Bandura, A. (2002). Selective moral disengagement in the exercise of moral agency. Journal of Moral Education, 31(2), 101-119.

Beres, N. A., Frommel, J., Reid, E., Mandryk, R. L., & Klarkowski, M. (2021, May). Don't you know that you're toxic: Normalization of toxicity in online gaming. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-15).

Shen, C., Sun, Q., Kim, T., Wolff, G., Ratan, R., & Williams, D. (2020). Viral vitriol: Predictors and contagion of online toxicity in World of Tanks. Computers in Human Behavior, 108, 106343.

Teng, Z., Nie, Q., Guo, C., & Liu, Y. (2017). Violent video game exposure and moral disengagement in early adolescence: The moderating effect of moral identity. Computers in Human Behavior, 77, 54-62.

Wang, X., Yang, L., Yang, J., Wang, P., & Lei, L. (2017). Trait anger and cyberbullying among young adults: A moderated mediation model of moral disengagement and moral identity. Computers in Human Behavior, 73, 519-526.

Xiao, S., Jhaver, S., & Salehi, N. (2023). Addressing Interpersonal Harm in Online Gaming Communities: The Opportunities and Challenges for a Restorative Justice Approach. ACM Transactions on Computer-Human Interaction.
