How One of Europe’s Largest Gaming Platforms Is Tackling Toxicity with Machine Learning

Jigsaw · Oct 23, 2019 · 6 min read


Online gaming sites are one of the fastest-growing sectors of social media (the industry generates upwards of $300 billion by some estimates), but with that explosive growth come issues of toxicity and online harassment. That’s exactly the problem FACEIT, the leading independent competitive gaming platform for online multiplayer PvP gamers, wanted to solve.

In addition to creating a positive and immersive gaming experience for its more than 15 million users, FACEIT wanted to incorporate innovative technology that would enhance the work of its human moderators and encourage new ways for community members to engage with one another free of harassment, without stripping the community of its personality.

Tackling Toxicity

Moderating gaming platforms typically depends on gamers reporting players who engage in toxic behavior, such as harassing or verbally attacking others through the in-game text or voice chat. Previously at FACEIT, user-reported incidents contributed to an index that increased or decreased depending on the context of each reported incident. If that index fell below a certain threshold, FACEIT’s system would trigger an appropriate response, such as a twelve-hour ban, a warning, or a similar consequence. Players who received a consequence could ask for their case to be reviewed by the moderator team to understand when and why the system was triggered.
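As a rough sketch of that earlier mechanism, the logic might look like the following Python. The index-and-threshold design is taken from the description above; the specific starting values, per-report adjustments, and cutoff are hypothetical placeholders, since the article does not publish them.

```python
# Illustrative sketch of the earlier report-driven index system.
# The specific numbers below are assumptions, not FACEIT's real values.

CONSEQUENCE_THRESHOLD = -10.0  # hypothetical cutoff


def apply_report(index: float, penalty: float) -> float:
    """Adjust a player's index for one reported incident.

    `penalty` can vary with the context of the report, so the index can
    move up or down as the article describes (a negative penalty restores
    standing if review finds the report unfounded).
    """
    return index - penalty


def consequence_for(index: float) -> str | None:
    """Trigger a response once the index falls below the threshold."""
    if index < CONSEQUENCE_THRESHOLD:
        return "twelve-hour ban or warning"  # the article's examples
    return None
```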

This process was time- and labor-intensive, which meant that feedback for toxic behavior wouldn’t reach an offending gamer until much later, sometimes as much as thirty-six hours after the incident. Adding to the challenge was how quickly the context and meaning of words evolve in the gaming community. Simply monitoring for predetermined keywords would leave the platform one step behind harassment and too inflexible to meet the needs of its users.

Courtesy of FACEIT

The delay between violations and feedback also decreased transparency in communication with gamers. The system made it difficult for moderators to pinpoint the exact offending behavior, leaving gamers confused as to why they were banned or flagged. With such a prominent presence in the community, FACEIT made it a goal to become a leader in addressing toxicity in online gaming. That effort led FACEIT to Perspective API, which provided the opportunity to make gamers more mindful of their behavior at scale, while still giving them access to the platform.

“At FACEIT, we’ve embarked on a mission to directly address toxicity and harassment in gaming. We know our community is always evolving, so we had to be just as creative and fast-moving in how we addressed in-game harassment,” said Maria Laura Scuri, director of business intelligence at FACEIT. “We began testing Perspective because it provided a baseline model of toxicity that we could tailor to fit our community.”

Testing the limits of machine learning

The FACEIT team began by testing Perspective with the toxicity threshold set very high, meaning messages with a score above about 0.7 are marked as toxic. Perspective works by giving each message a score (a number between 0.0 and 1.0) that indicates how confident the algorithm is that the comment is similar to toxic comments it has seen in the past. The team then incorporates two additional metrics, which consider the score of each toxic comment and the frequency of toxic comments from that user within the match. The combination of these metrics is then used to flag the incident and decide whether a warning or ban should be issued to the user automatically.
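To make that flow concrete, here is a minimal Python sketch. The endpoint, request body, and response shape come from Perspective’s public Comment Analyzer API, and the 0.7 cutoff is the threshold mentioned above; the frequency cutoff and the warn-versus-ban escalation rule are illustrative assumptions, since FACEIT has not published its exact formula.

```python
import requests

# Perspective's public Comment Analyzer endpoint.
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)
API_KEY = "YOUR_API_KEY"  # placeholder; issued via Google Cloud

TOXICITY_THRESHOLD = 0.7  # the "very high" threshold cited in the article
TOXIC_MESSAGES_FOR_BAN = 3  # hypothetical frequency cutoff, not from FACEIT


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY score (0.0 to 1.0) for one message."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def review_user_in_match(messages: list[str]) -> str:
    """Combine per-message scores with their frequency within one match."""
    toxic = [m for m in messages if toxicity_score(m) > TOXICITY_THRESHOLD]
    if not toxic:
        return "no action"
    # More frequent violations in a single match escalate the response.
    return "ban" if len(toxic) >= TOXIC_MESSAGES_FOR_BAN else "warning"
```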

Courtesy of FACEIT

Scuri emphasized speed and accuracy as the key reasons for turning to machine learning. “Making sure Perspective met our needs was a months-long process, as we tested how the model would react to specific gaming slang and insults. We tested many different toxicity levels: on one side we wanted to minimize false positives and avoid banning or flagging gamers unnecessarily, but on the other we wanted to be effective and applicable to a good percentage of our users, and to make sure the change could be perceived.”

Full transparency for gamers

Scuri also wanted to pull back the curtain on the moderation policy and show gamers that Perspective wasn’t the mechanism making decisions, but rather a tool to help the moderators. “Being upfront with gamers about their behavior meant we needed to update our policies as we rolled out Perspective. Today, if gamers want to dispute or ask questions about a violation, they can reach out to us and even get the full details of their message. Our team will answer questions and use Perspective and our models to pinpoint the exact messages that triggered the incident. This is one of the most important steps in the moderation process, because we hope gamers will start to identify the behaviors that cause bans or flags and be less inclined to repeat them in the future.”

The team also gathers weekly to review reports and make sure no false positives have slipped through. Now the FACEIT moderation team can accurately identify harassment on the platform and continue to set an example for how the gaming industry can address toxicity. With Perspective, FACEIT has a powerful tool that empowers its moderators and combines machine learning with the team’s community expertise.

Ongoing Model Training

Perspective has analyzed over 160 million messages since it was incorporated into FACEIT’s platform earlier this year. In total, there has been a 7.80% reduction in the share of total messages above the set toxicity threshold, and a 20.13% reduction in toxic messages overall. After just a month of using Perspective, there was a 16.47% decrease in the number of users sending at least one message above the toxicity threshold, which translates to thousands fewer users sending toxic messages on a daily basis. FACEIT is now catching the top 5% most toxic gamers on the platform, resulting in about 1,000 gamers warned daily.

Since adopting Perspective, FACEIT has continued to review the accuracy of flagged comments, and it designed the system to take the context of any given situation into account. Contrary to the popular belief that machine learning is a “set it and forget it” tool for moderation, FACEIT’s team still takes the time to meet in person and discuss user reports, improving the system through techniques such as identifying instances of false positives and false negatives.

Perspective is the first step in a bigger initiative the FACEIT team is working on to tackle toxicity from every angle. Toxicity in gaming can be expressed in many types of behavior, including intentionally ruining the game for other players (e.g., griefing, trolling, feeding), hate speech, verbal abuse, cheating, teamkilling, and more. Not all of these behaviors can be detected through text chat, so FACEIT plans to combine different data sources, such as video, voice chat, and in-game events, to better evaluate a player’s behavior in a match and address toxicity on multiple levels.

FACEIT has seen promising results in its initial trial and is focused on the long run, continuing to train and improve its system; you can read more about its updated policy here. Eventually, FACEIT hopes to issue warnings and bans in real time as violations occur, something that would be impossible to do manually, and to keep improving everyone’s gaming experience.

Jigsaw

Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.