
How Rec Room reduced toxicity in players’ voice chat by 70%

Presented by Modulate


The trust and safety team at social gaming platform Rec Room has seen tremendous success in reducing toxicity over the past 18 months. In this VB Spotlight, we’ll dive into the metrics, tools, and strategies they’ve used to make players happier, increase engagement, and change the game.

Watch now for free on demand!


Improving the player experience and keeping players safe should be a top priority for game developers. In this latest VB Spotlight, Mark Frumkin, Director of Account Management at Modulate, and Yasmin Hussain, Head of Trust and Safety at Rec Room, discussed how Rec Room’s Trust and Safety team protects players from toxicity, and how it works with ToxMod, a proactive voice chat moderation solution powered by machine learning.

Launched in 2016, Rec Room is a social gaming platform with over 100 million lifetime users. Players interact in real-time via text and voice chat on PC, mobile, VR headsets and console, using avatars to bring the experience to life.

“Rec Room was created to make room for millions of worlds and different spaces – not just what we create, but what our players can create,” said Hussain. “Trust and safety are a critical part of that.”

But real-world interactions and real-time voice chat inevitably lead to people behaving badly. How do you change the behavior of players who don’t follow community standards?

Over the last year of experimenting and iterating on this idea, Rec Room was able to reduce instances of toxic voice chat by about 70%, Hussain said, but that didn’t happen overnight.

Combating toxicity step by step

The first step was to expand continuous voice moderation to all public spaces, which helped keep the platform’s behavioral expectations consistent. The next step was to determine the most effective response when players step out of line. The team ran a wide range of tests, from varying mute and ban durations to two types of warnings – one very stern, the other offering positive encouragement about the kind of behavior the team wanted to see.

They found that a one-hour mute, applied as soon as a violation was identified, had a huge impact on reducing bad behavior. It was an immediate and very tangible reminder to players that toxicity would not be tolerated. This real-time feedback not only changed players’ behavior in the moment, but also kept them in the game, Hussain said.
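The article doesn’t describe how ToxMod’s detections are wired into Rec Room’s systems, but the pattern Hussain describes – a flagged violation triggering an immediate, time-boxed mute – can be sketched roughly as below. All names here (VoiceViolation, handle_violation, the in-memory mute store) are hypothetical illustrations, not Modulate’s or Rec Room’s actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical shape of a violation event emitted by a voice moderation
# pipeline; the real ToxMod payload is not documented in this article.
@dataclass
class VoiceViolation:
    player_id: str
    category: str        # e.g. "harassment" or "hate_speech"
    detected_at: datetime

MUTE_DURATION = timedelta(hours=1)  # the intervention Rec Room found most effective

def handle_violation(violation: VoiceViolation, mute_store: dict) -> None:
    """Apply an immediate one-hour voice mute so the feedback is tangible
    and arrives while the behavior is still fresh."""
    mute_store[violation.player_id] = violation.detected_at + MUTE_DURATION

def is_muted(player_id: str, mute_store: dict, now: datetime) -> bool:
    """Check whether a player is still inside their mute window."""
    expiry = mute_store.get(player_id)
    return expiry is not None and now < expiry
```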

While this wasn’t a complete cure for in-game toxicity, it did significantly curb the problem. When they dug deeper, they realized that a very small percentage of the player base was responsible for more than half of the violations. How could they directly target this specific group?

“There was a disproportionate association between this very small cohort of players and a very large number of violations, which gave us the impetus to run another experiment,” she said. “If we change the way we intervene – rather than muting you the first time, or giving you a warning and then muting you again and again while you don’t learn that lesson – maybe we can start to stack our interventions so that they reinforce each other. We’re seeing some great results doing that.”
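Rec Room’s exact escalation ladder isn’t spelled out in the article. As a rough sketch, stacked interventions of the kind Hussain describes can be modeled as a penalty schedule keyed on a player’s recent violation count; the rungs and durations below are illustrative assumptions only, not Rec Room’s actual policy.

```python
from datetime import timedelta

# Illustrative escalation ladder: each repeat violation within the tracking
# window maps to a stronger intervention. Thresholds and durations are assumed.
ESCALATION_LADDER = [
    ("warning", None),               # 1st violation: warning with positive framing
    ("mute", timedelta(hours=1)),    # 2nd: one-hour mute
    ("mute", timedelta(hours=24)),   # 3rd: day-long mute
    ("ban", timedelta(days=7)),      # 4th and beyond: temporary ban
]

def next_intervention(recent_violation_count: int) -> tuple:
    """Pick the intervention for a player's Nth recent violation,
    capping at the strongest rung of the ladder."""
    index = max(0, min(recent_violation_count - 1, len(ESCALATION_LADDER) - 1))
    return ESCALATION_LADDER[index]
```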

Designing and running trust and safety experiments

There are certain metrics to keep an eye on when refining a moderation strategy, Frumkin said. These include the profile and prevalence of toxicity: What are people saying? How often are they saying it? Who are the rule breakers, how many are there, and how often do they violate the code of conduct?
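One way to track the prevalence and concentration Frumkin mentions is to aggregate moderation logs per player. The sketch below uses made-up field names and computes what share of all violations comes from the top slice of repeat offenders – the kind of measurement that surfaced the small cohort responsible for more than half of Rec Room’s violations.

```python
from collections import Counter

def violation_concentration(violations: list[dict], top_fraction: float = 0.01) -> float:
    """Return the share of all violations produced by the most frequent
    `top_fraction` of offending players (e.g. 0.55 means the top 1% of
    offenders account for 55% of violations)."""
    counts = Counter(v["player_id"] for v in violations)
    if not counts:
        return 0.0
    ranked = sorted(counts.values(), reverse=True)
    cohort_size = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:cohort_size]) / sum(ranked)
```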

At the beginning, you also need to be clear about what the hypothesis is, what behavior you want to change, what result you expect and what success looks like.

“The hypothesis is key,” Hussain said. “When we initially tested the interventions and the right way to reduce violations, it was very different than when we tried to change the behavior of a subset of our player population.”

Iteration is also crucial – to learn, fine-tune and optimize. But it is equally important to make sure your experiments run long enough to collect the data you need and influence player behavior.

“We want them to adhere to community standards and be positive members of this community. That means they have to unlearn certain things that they may have been doing for a while,” she said. “We need those three, four, six weeks for that to have an impact while people experience this new normal that they’re in, learn from it and change their behavior.”

However, there is always more to do. Sometimes you make progress on a particular issue, but then the problem evolves, which means moderation strategies have to keep improving and evolving in parallel. For example, moderating speech in real time is a huge challenge, but the Rec Room team is confident that its interventions are now accurate and that players feel safer as a result.

“We’ve had tremendous success in reducing violations and improving the feel of our platform – around 90 percent of our players say they feel safe, welcome and have fun in Rec Room, which is incredible,” she said. “We’re realizing that it’s not enough for justice to be done, or for us to encourage our players to change their behavior. Other players need to see this happening so they too can be reassured that we’re upholding our community standards.”

The future of AI-powered voice moderation

To ultimately make Rec Room an even safer and more fun place, ToxMod is continually analyzing data on policy violations, language and player interactions, Frumkin said. But moderation should also evolve. You want to discourage behavior that violates standards and rules of conduct – but you also want to encourage behavior that improves the mood or enhances the experience for other Rec Room players.

“We’re also starting to develop our ability to recognize pro-social behavior,” he added. “If players are good partners, if they’re supporting other members in the same room – if they’re good at defusing certain situations that tend to get heated – we want to be able to point out not only where there are problems, but where there are role models. There’s a lot you can do to increase and amplify the impact of those positive influences in your community.”
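How ToxMod will score pro-social behavior isn’t detailed in the article; conceptually, though, the same event stream that carries violations could also carry positive signals. A minimal sketch, with assumed signal names:

```python
# Assumed labels for positive signals a voice moderation pipeline might emit.
POSITIVE_SIGNALS = {"de_escalation", "peer_support", "welcoming_new_player"}

def tag_role_models(events: list[dict], threshold: int = 5) -> set[str]:
    """Identify players with repeated pro-social signals so their positive
    influence can be recognized and amplified, not just violations punished."""
    scores: dict[str, int] = {}
    for event in events:
        if event.get("signal") in POSITIVE_SIGNALS:
            scores[event["player_id"]] = scores.get(event["player_id"], 0) + 1
    return {player for player, score in scores.items() if score >= threshold}
```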

Voice moderation is extremely complex, especially with real-time audio, but AI-powered tools are having a huge impact on moderation strategies and what teams can actually achieve.

“It means you can raise your ambitions. Things you thought were impossible yesterday suddenly become possible when you start doing them,” Hussain said. “We see that in how available, how efficient and how effective machine learning is becoming at ever-increasing scale. There’s a huge opportunity for us to leverage that and keep our community as safe as possible.”

To learn more about the challenges of toxicity in games, strategies to effectively change player behavior, and how machine learning is changing the game, don’t miss this VB Spotlight, free on demand.


Watch now for free!

Agenda

  • How voice moderation works to detect hate and harassment
  • Rec Room’s successes and lessons learned in building a voice moderation strategy
  • Key insights from voice moderation data that every game developer should collect
  • How reducing toxicity can increase player retention and engagement

Speakers

  • Yasmin Hussain, Head of Trust & Safety, Rec Room
  • Mark Frumkin, Director of Account Management, Modulate
  • Rachel Kaser, Technology Writer, VentureBeat (Host)
