
Two Causes of Toxic Online Spaces (and some solutions)

This is adapted from my talk at Bridge Keepers.

[Image: speeding cars]

Have you ever gone over the speed limit?

Probably. In an observational study, 7 out of 10 drivers sped in an urban area. At some point, you were likely part of that 7.

Have you heard how toxic the community for the game League of Legends is? Legendary. (Pun intended.) You might expect that the toxicity is mainly generated by a bunch of bad actors. But it turns out only about 5% of negativity came from trolls – the rest came from normal folks “having a bad day”.

So what is it that causes relatively normal people to behave so badly?

I think there are two major things at play: Normalization and Lack of Punishment.

Normalization

We like to think that we are individuals, unaffected by others. But it’s just not true.

The amazing researcher Dan Ariely (author of one of my favorite books) ran a study at a brewery in which people changed their beer order based on what the person before them ordered: if someone ahead of them ordered the beer they were planning to get, they switched to something else.

That’s kind of insane. If you want one type of beer, why order a different one? Because there’s a norm at play (a group should order a variety, so everyone learns what the options are), and our brains follow it automatically. Even though, as the study found, people who make these group-motivated choices are generally less happy with the result.

What I’m saying is: Observing others’ behavior actually changes ours.

Which means if we see someone tweet something this horrible…

[Offensive tweet from Milo Yiannopoulos: There's a new feminist Barbie. When you pull the string it says "Math is hard, let's lie about rape."]

…then we think this tweet must be acceptable, because at least it’s a joke! Right? Right??

[Offensive tweet: A lesbian, an alcoholic, and a heavily sedated woman walks into a bar. The bartender asks, "What'll you have, Hillary?"]

Types of Norms

There are two types of norms.

 Injunctive norms are the stated rules: what we’re told is acceptable.

 Descriptive norms are what we infer from how people actually behave: the reality we observe in the world around us.

Going back to the speeding example: We all know that speeding is illegal. And we know what the rules are. But the descriptive norm is “eh, it’s ok to speed a little bit”. So we do it without thinking of ourselves as criminals.

[Image: ivy covering a building]

The problem with this is that norms are like ivy. Once they’re firmly rooted, they’re very hard to change.

So, how do we create positive norms?

Clear Guidelines

Obvious, but bears repeating. Your community needs to have guidelines, and they should be:

1. Simple. If they’re complex, nobody will bother reading them.

2. In-line. Nobody is going out of their way to find your rules. Put them in-line where possible. Many subreddits do a great job of this:

[Screenshot: a subreddit displaying its rules in-line]

3. General. If your rules are too specific people won’t bother spending the time understanding what they can and can’t do. (And bad actors may try to find a way to technically obey the rules while causing trouble.)

Consistently Applied Guidelines

Studies show that people are much more likely to obey the rules if they’re consistently applied. This means:

1. Train, train, train your team. Being consistent is hard. You should drill, review past moderations together, do flashcards, whatever you need to do to get consistent across your whole org.

2. Create awareness of implicit bias. None of us wants to think we’re biased, but as shown in the beer experiment above, we often are without even realizing it. And when we are rule enforcers, this can be really problematic. There’s immense evidence that police are biased, and that’s something that has very serious consequences. Are these police racist? No, they’re just operating off biases they may not realize they have. There are plenty of great orgs that help teams work on realizing what implicit biases they have. (A few that have been recommended to me: Paradigm, Project Include, Women’s Leadership Institute.)

Self-Moderation

I think this is one of the most criminally underrated sets of tools for driving specific behavior.

1. Display true norm rates. People adjust their habits when they see how others actually behave (like these college students who drank less once they saw the average number of drinks their peers were having).

2. Prime people. Putting people in the right mindset can be incredibly effective. League of Legends found that simply displaying messages before a game (like “Players perform better if you give them constructive feedback after a mistake”) decreased bad behavior by 11%.

3. Add friction. We are obsessed in the tech world with “frictionless” experiences. But if someone’s action might negatively affect dozens, hundreds, or millions of people? A little friction can be good. Nextdoor added some additional steps you have to take before posting about suspicious people on their platform, and racial profiling dropped by 75%. (A sketch of what priming plus friction might look like follows this list.)
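
To make that concrete, here’s a minimal, purely illustrative Python sketch of a priming-plus-friction flow before a post goes live. The function, the trigger list, and the messages are all hypothetical; this just shows the shape of the idea.

    # Hypothetical sketch: prime every poster, add friction for sensitive posts.
    PRIMING_MESSAGE = (
        "Community members respond better to constructive feedback "
        "than to insults."
    )

    # Made-up list of topics that trigger extra, Nextdoor-style steps.
    SENSITIVE_TOPICS = ("suspicious person", "suspicious activity")

    def screens_before_posting(draft_text):
        """Return the sequence of screens shown before a post is published."""
        screens = [PRIMING_MESSAGE]  # everyone sees the norm-setting prompt
        if any(topic in draft_text.lower() for topic in SENSITIVE_TOPICS):
            # Extra friction: slow the poster down and reframe the post.
            screens.append("Describe behavior, not appearance.")
            screens.append("Does this post follow the community guidelines?")
        return screens

The point isn’t these specific checks; it’s that a couple of well-placed extra screens can change behavior at scale.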

Bad news, though…Guidelines backfire if behavior doesn’t match.


The more litter we see? The more likely we are to litter.

The more we see rule breakers unpunished? The more likely we are to break rules.

Rules are not effective if they’re not enforced.

Especially if there’s a big benefit to breaking the rules. After all, if:

  • Attention is the goal

  • And negativity generates attention

  • And punishment is rare…

…why would you stop? Especially if you can get a book deal from being horrible?

This is the Tragedy of the Commons.

We all know the world would be better if we all obeyed the rules. But if you personally benefit from breaking them, you may do it at the expense of everyone else.

Certainty of Punishment

So how do you make these guidelines effective? Create certainty of punishment. If punishment is unlikely, people will keep offending.

The death penalty? It doesn’t look like it actually decreases crime. Because even though it’s a severe punishment, you’re unlikely to get caught and unlikely to be given that sentence.

Blood alcohol checkpoints? Very effective at decreasing drinking and driving. Because you’re very likely to be caught and punished.

So, how do we create certainty of punishment?

Automation

This is the baseline stuff you should be doing.

1. Blacklist words. Anyone using unacceptable words should automatically be penalized.

2. Spot suspicious behavior. Multiple posts in a short period of time? Similar posts across sub-forums? Shut it down automatically. (A rough sketch of both checks follows this list.)
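
As a rough illustration, here’s what both checks might look like in Python, assuming a simple in-memory store. The blacklist terms, thresholds, and names are placeholders, not a production design (a real system would also compare post text across sub-forums to catch cross-posting).

    import time
    from collections import defaultdict, deque

    BLACKLIST = {"badword1", "badword2"}  # placeholder terms
    MAX_POSTS = 5                         # assumed flood threshold
    WINDOW_SECONDS = 60                   # assumed time window

    recent_posts = defaultdict(deque)     # user_id -> recent post timestamps

    def should_auto_penalize(user_id, text):
        """Penalize posts that use blacklisted words or look like flooding."""
        if set(text.lower().split()) & BLACKLIST:
            return True                   # unacceptable word: automatic penalty
        now = time.time()
        timestamps = recent_posts[user_id]
        timestamps.append(now)
        while timestamps and now - timestamps[0] > WINDOW_SECONDS:
            timestamps.popleft()          # forget posts outside the window
        return len(timestamps) > MAX_POSTS  # too many posts, too fast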

Flagging

User flagging is a key tool in the fight against negative behavior. Your users are on the front line and they will always be faster than your team.

1. Make it prevalent. This functionality should be really easy to find, always.

2. Create specific flows. Teams often struggle with flagging because so many things get flagged that they get overwhelmed. Consider more complex flows depending on what flag was thrown. Marked as annoying? Deprioritize. Marked as racist? Prioritize. Marked as “bugs me”? Implement a clever self-resolution flow like Greater Good and Facebook did. (A sketch of this kind of routing follows below.)
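
Here’s a tiny sketch of what that kind of routing could look like. The flag types and queue names are assumptions for illustration, not anyone’s actual implementation.

    # Hypothetical flag routing: priority depends on which flag was thrown.
    FLAG_PRIORITY = {
        "racist": "urgent",      # straight to the top of the mod queue
        "harassment": "urgent",
        "annoying": "low",       # deprioritized
    }

    def route_flag(flag_type):
        if flag_type == "bugs me":
            # Self-resolution: prompt the flagger to resolve it directly
            # with the poster instead of queueing it for moderators.
            return "self_resolution"
        return FLAG_PRIORITY.get(flag_type, "normal")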

Reputation Systems

Treat repeat offenders differently. Go pick up Building Web Reputation Systems and figure out what’s best for you.

1. Create visibility thresholds. Have repeat offenders’ posts be less visible to others (which really hurts their desire for attention). Or require people to get special flags to show up to general audiences, like Kinja did.

2. Have reputation affect flag weight. If a repeat offender’s post gets a single flag, weigh that more heavily than a single flag on a good actor’s post. (A sketch of both ideas follows below.)
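
A minimal sketch of both ideas, with invented thresholds and weights:

    # Hypothetical reputation logic; every number here is made up.
    REVIEW_THRESHOLD = 3.0    # flag weight needed to queue a post for review
    HIDE_AFTER_OFFENSES = 3   # offenses before posts leave general feeds

    def flag_weight(offense_count):
        """A flag on a repeat offender counts more than one on a good actor."""
        return 1.0 + 0.5 * offense_count

    def is_broadly_visible(offense_count):
        """Visibility threshold: repeat offenders reach a smaller audience."""
        return offense_count < HIDE_AFTER_OFFENSES

    def needs_review(num_flags, offense_count):
        return num_flags * flag_weight(offense_count) >= REVIEW_THRESHOLD

With these made-up numbers, one flag on someone with four prior offenses queues the post immediately (1 × 3.0 ≥ 3.0), while a good actor’s post needs three flags.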

The punchline: Investing in moderation now saves money later.

I know it’s hard to prioritize spending for moderation, especially when you’re starting out. But “we’ll deal with that issue if it comes up” and “I’m sure people will behave” clearly don’t bear out. And if you wait until you have truly toxic norms and lack of certainty of punishment, it’s going to be way more costly.

Let’s go back to that ivy example. Have you seen what walls look like after you laboriously remove the ivy?

[Image: ivy suckers left on a wall]

You have to sand those suckers off and repaint.

The equivalent for communities? A whole lot of messaging, a whole lot of banning, and a whole lot of complaints until a new norm is established. Just ask Reddit.


 

A few notes…

Much of the research here is taken from the hard-to-read but incredibly valuable “Building Successful Online Communities”.

Although I have 10 years of experience in the world of community, I am not an expert at moderation or trust & safety. I’m sure I missed things or mischaracterized things. I would love to hear your insights in the comments!

 

My AMA with Bassey Etim, Community Desk Editor at The New York Times

Last week I had the pleasure of hosting an AMA with the very smart, very pleasant Community Desk Editor at The New York Times, Bassey Etim. Taking questions from me and the crowd, Bassey mulled over building moderation teams, the future of journalism, and getting buy-in from coworkers.

(Feel free to skip the first minute, which is mostly me getting set up in Blab and waiting for Bassey to call in.)

Bassey is speaking alongside folks from Etsy, Spark Capital, Genius, and Pure House at CMX Summit East, which I’m organizing!

Tactic Tuesday: Year-end best-of lists (with a twist)

The end of the year offers ample opportunities for rituals in your community. One of the most effective? Best-of lists.

Whether it’s a poll, a bracket, a forum thread, or something else, asking people what their favorite things were for the year creates great energy. Everyone has an opinion and, if carefully managed*, the disagreements can create in-depth debates that deepen connections between community members. Communities thrive through emotional connections, so don’t forget that fighting can be good.

But I love the twist r/comicbooks is giving it over on Reddit. Instead of just nominating top comic books, artists, writers, and the like, they’re also nominating top community members of the year. This strengthens emotional connections, validates community members’ time spent on the subreddit, and shows new members that this is a lively group that values its members. It’s a home run.


*The beauty of disagreement in best-of threads is that you can say “ok, make your own list”. Check out how just such a comment simultaneously empowers the angry community member while stopping a potential slugfest:

[Screenshot: Reddit thread fighting about comic books]

Two things the Washington Post doesn’t understand about comments

The Washington Post wrote an interesting piece on the state of comments on the web in response to the current Kinja/Jezebel offensive comment issues. Their take: maybe it’s just not worth it to have comments. It’s a great discussion to have, and largely I think the article is thought-provoking.

However, I think they missed two points (though I don’t blame them).

1. Sometimes the comments are half the reason you visit a site

io9, which is also on the Gawker network with Jezebel, is probably the site I visit the most outside of Gmail. I’m a huge geek, and I love their articles…but I also like connecting with fellow geeks, learning random facts that even the editors don’t know, and sharing in the joy of fandom – all in the comments.

Without comments, io9 would survive. But because of the comments, they thrive. (They even have regular open animated-GIF comment threads.)

2. Some content can’t really exist without comments

Similar to my last point, but worth calling out separately.

Jezebel is a women’s site, but more relevantly a feminist site. They spend their days calling out and debating women’s issues. Can you imagine a site like that without comments? These sort of issues are an ongoing discussion, not a piece of news.


That said, I totally understand why the Washington Post missed these points. They largely publish news. People come to news sites for news, not comments. Often, comments can actually misinform the reader about news (which is why I understand science sites like Pacific Standard turning comments off).  And there’s plenty of news that doesn’t really warrant comments – and often providing them can open a can of worms (I think it’s quite interesting how the New York Times only turns comments on for certain posts).

This is a very tough, very important debate to have. Turning comments off for some sites might make sense (though I would consider that a last resort). But let’s keep in mind that this is just not an option for some sites – and when comments are great, they’re incredibly valuable and even powerful.