Tag Archives: moderation

Why Adding Friction Could Make Your Community Healthier

Slippery warning sign

For years, designers have been talking about making things “frictionless”. And for good reason: the web was full of friction. Signup flows were labyrinthine, uploading and storing files was a hassle, and let’s not even talk about sites that didn’t work on mobile.

But in community design, friction is coming into vogue. Why? Because making it frictionless to say the first thing on your mind can often be a bad thing.

One of the biggest recent examples of this is Nextdoor’s racial profiling problem. Nextdoor allows members of a neighborhood to create an online community where they can talk about things happening in their neighborhood, trade items, and report crime and safety issues. And that last area is where the problems arose, because people with implicit biases started posting things like concerns about a “‘light-skinned black female’ walking her dog and talking on her cellphone” and following up with horrifying comments like “I don’t recognize her. Has anyone described any suspect of crime like her?”

Often, these “suspicious characters” were simply neighbors who wouldn’t have been called out if not for the color of their skin. The posts were offensive, reinforced racial stereotypes, and also made it hard for police to sort out posts talking about actual crime.

Generally, community platforms try to deal with this by having clear guidelines and taking action after an offensive post goes up. But guidelines are often easily ignored, and taking a post down doesn’t lessen the negative impact it had. The most effective way to change biased behavior like this is to add friction that makes people stop and think while they’re taking an action.

Nextdoor has done an amazing job addressing this. As soon as you mention race in a Crime & Safety post, you are required to list additional, non-racial attributes. This, they explain, creates “decision points to get people to stop and think as they’re observing people to cut down on implicit bias.” The result? Racial profiling posts have dropped by 75%.

Nextdoor crime posting

Nextdoor isn’t new to friction. They require those launching neighborhoods on the platform to recruit a certain number of members in a certain period of time, and all members must prove residence. This means many neighborhoods never get off the ground…but also means they avoid empty, inactive communities that would make their service look bad. Stack Exchange does the same thing with new sites on their networks, requiring them to amass a certain amount of activity before they’re publicly launched.

Airbnb, faced with similar racial profiling issues, is taking a number of actions, including requiring hosts to take a pledge promising not to be biased. You might think a pledge won’t change people’s actions, but studies have found that students required to pledge to obey their school’s honor code were less likely to cheat – even if the school didn’t have an honor code.

Discourse and Product Hunt boldly put friction at the very start of their experience. You previously couldn’t comment on Product Hunt without an invite from an existing member, and the Discourse community platform allows you to set certain achievements (number of votes, number of comments, etc) that a member must hit before they can take greater actions. Over at Reddit, we don’t allow you to create a subreddit unless your account has a minimum level of karma and is at least 30 days old.

Metafilter literally added a payment to their sign-up process – not to make money, but purely to create friction that prevented casual sign-ups. They only wanted people who were truly invested.

It’s exciting to see community design start to step away from traditional (and generally sales-based) design. For too long, community professionals have labored within inflexible platforms and struggled to react to issues rather than prevent them. Once we start putting thought into where we create or remove friction, we can build communities that are more successful, productive, and civil.


Full disclosure: I consulted for Nextdoor from 2015-2016, but did not work on the project(s) listed above.

Thank you to the Social Media Clarity podcast for their great work summarizing this trend!

Trolling isn’t outlier behavior, and we can stop it

Large troll standing over a house

For years the picture painted of trolls was pretty straightforward: while most members of online communities are good people, there are a few horrible, unchangeable, malicious people called “trolls” who live to make everyone’s life terrible. Our job was to try to keep them out, ban them when they showed up, and sigh and accept that they were an inevitable part of any online community.

What has become clear is that we were wrong; most trolls are regular people.

Two recently released studies have shown that the majority of “troll” behavior is actually generated by normal people who have been triggered into acting negatively, usually through a combination of their own mental state (i.e. having a bad day) and social norms (e.g. seeing other people troll and get away with it).

  1. The famously toxic League of Legends found that only about 1% of their players were consistently toxic, and those produced only about 5% of the toxicity. “The vast majority was from the average person just having a bad day.”
  2. Scientists from Cornell and Stanford found that people are more likely to troll if they were in a negative mood, late at night, and if the first comment on a thread was a “troll comment”.

This is a game-changer for several reasons.

One, it means we may have been banning or punishing a large number of normal people who were just doing what they saw others doing. It’s likely that we only reinforced their negative behavior, rather than helping them adjust it.

Two, it means there’s a lot more we can do to prevent trolling. A recent experiment on Reddit found that rule posts stuck to the top of a thread increased rule following by 7.3 percentage points and increased newcomer participation by 38.1%. League of Legends found that some simple priming “reduced negative attitudes by 8.3%, verbal abuse by 6.2% and offensive language by 11%”. Some people are further down the rabbit hole of negativity, but even they may be saved. We are not helpless to decrease trolling, and continuing to act like we are is irresponsible.

(You can find my much longer post on ways to create positive online spaces here.)

Three, it means community managers are even more important in any organization that has an interactive online space. We are no longer just reactive janitors, apologizing for the mess. We can be proactive social designers. (Be sure to go seek out some behavioral psychology books and classes, folks.)

To me, this is extremely exciting. It means our online communities can become more positive, safe places. And it means that our work is far from done. Complacency happens in every industry. The community industry has finally started pushing through our complacency about ROI. Next, let’s tackle trolling.


It’s important that I note that these findings don’t mean there aren’t real, horrible people on the internet. It doesn’t mean we need to put up with harassment just because someone had a bad day. I’m not condoning bad behavior – I’m just optimistic that we can change much of it.


Troll photo courtesy of EE Shawn

Two Causes of Toxic Online Spaces (and some solutions)

This is adapted from my talk at Bridge Keepers.

speeding cars

Have you ever gone over the speed limit?

Probably. In an observational study, 7 out of 10 drivers sped in an urban area. At some point, you were likely part of that 7.

Have you heard of how toxic the community for the game League of Legends is? Legendary. (Pun intended.) And you would expect that this is mainly generated by a bunch of bad actors. But it turns out only about 5% of negativity came from trolls – the rest came from normal folks “having a bad day”.

So what is it that causes relatively normal people to behave so badly?

I think there are two major things at play: Normalization and Lack of Punishment.

Normalization

We like to think that we are individuals, unaffected by others. But it’s just not true.

The amazing researcher Dan Ariely (author of one of my favorite books) did a study in which he found that people would change their choice of beer at a brewery based on what the person before them ordered. If that person ordered what they were planning on ordering, they changed their order.

That’s kind of insane. If you want one type of beer, why order a different type? Well, because there’s a norm at play that we should achieve variety to better know what options we have, and our brain automatically kicks in. Even though, as found in the study, people who make these group-motivated choices are generally unhappier with the result.

What I’m saying is: Observing others’ behavior actually changes ours.

Which means if we see someone tweet something this horrible…

Offensive tweet from Milo Yiannopoulos: There's a new feminist Barbie. When you pull the string it says "Math is hard, let's lie about rape."

…then we think this tweet must be acceptable, because at least it’s a joke! Right? Right??

Offensive tweet: A lesbian, an alcoholic, and a heavily sedated woman walks into a bar. The bartender asks, "What'll you have, Hillary?"

Types of Norms

There are two types of norms.

 Injunctive norms are the stated rules – what we’re told we should do.

 Descriptive norms are the norms we infer from our interactions with others and from the behavior we actually see in the world around us.

Going back to the speeding example: We all know that speeding is illegal. And we know what the rules are. But the descriptive norm is “eh, it’s ok to speed a little bit”. So we do it without thinking of ourselves as criminals.

ivy covering a building

The problem with this is that norms are like ivy. Once they’re firmly rooted, they’re very hard to change.

So, how do we create positive norms?

Clear Guidelines

Obvious, but bears repeating. Your community needs to have guidelines, and they should be:

1. Simple. If they’re complex, nobody will bother reading them.

2. In-line. Nobody is going out of their way to find your rules. Put them in-line where possible. Many subreddits do a great job of this:

Subreddit rules displayed in-line on the submission page

3. General. If your rules are too specific people won’t bother spending the time understanding what they can and can’t do. (And bad actors may try to find a way to technically obey the rules while causing trouble.)

Consistently Applied Guidelines

Studies show that people are much more likely to obey the rules if they’re consistently applied. This means:

1. Train, train, train your team. Being consistent is hard. You should drill, review past moderations together, do flashcards, whatever you need to do to get consistent across your whole org.

2. Create awareness of implicit bias. None of us wants to think we’re biased, but as the beer experiment above showed, our choices are shaped by influences we don’t even notice. And when we are rule enforcers, this can be really problematic. There’s immense evidence that police are biased, with very serious consequences. Are these police racist? No, they’re just operating off biases they may not realize they have. There are plenty of great orgs that help teams recognize their implicit biases. (A few that have been recommended to me: Paradigm, Project Include, Women’s Leadership Institute.)

Self-Moderation

I think this is one of the most criminally underrated sets of tools for driving specific behavior.

1. Display true norm rates. People adjust their habits when they see how others behave (like these college students that drank less once they saw the average number of drinks their peers were drinking).

2. Prime people. Putting people in the right mindset can be incredibly effective. League of Legends found that simply displaying messages before a game (like “Players perform better if you give them constructive feedback after a mistake”) decreased bad behavior by 11%.

3. Add friction. We are obsessed in the tech world with “frictionless” experiences. But if someone’s action may negatively affect dozens, hundreds, or millions of people? A little friction can be good. Nextdoor added some additional steps you have to take before posting about suspicious people on their platform, and racial profiling dropped by 75%.

Bad news, though…Guidelines backfire if behavior doesn’t match.


The more litter we see? The more likely we are to litter.

The more we see rule breakers unpunished? The more likely we are to break rules.

Rules are not effective if they’re not enforced.

Especially if there’s a big benefit to breaking the rules. After all, if:

  • Attention is the goal

  • And negativity generates attention

  • And punishment is rare…

…why would you stop? Especially if you can get a book deal from being horrible?

This is the Tragedy of the Commons.

We all know the world would be better if we all obeyed the rules. But if you personally benefit from breaking them, you may do so at the expense of others.

Certainty of Punishment

So how do you make these guidelines effective? Create certainty of punishment. If punishment is unlikely, offenders will keep offending.

The death penalty? It doesn’t appear to actually decrease crime. Because even though it’s a severe punishment, you’re unlikely to be caught and unlikely to receive that sentence.

Blood alcohol checkpoints? Very effective at decreasing drinking and driving. Because you’re very likely to be caught and punished.

So, how do we create certainty of punishment?

Automation

This is the baseline stuff you should be doing.

1. Blacklist words. Anyone using unacceptable words should automatically be penalized.

2. Spot suspicious behavior. Multiple posts in a short period of time? Similar posts across sub-forums? Shut it down automatically.
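As a rough illustration of that baseline, here’s a minimal sketch in Python. The word list, thresholds, and function names are all hypothetical – this isn’t any particular platform’s actual logic:

```python
import time
from collections import deque

# Hypothetical word list and thresholds – tune these for your community.
BLACKLIST = {"slur1", "slur2"}
MAX_POSTS = 5        # max posts allowed...
WINDOW_SECONDS = 60  # ...within this many seconds

recent_posts = {}    # user_id -> deque of recent post timestamps

def contains_blacklisted_word(text):
    """Auto-penalize any post containing a blacklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLACKLIST.isdisjoint(words)

def is_posting_too_fast(user_id, now=None):
    """Flag a user who posts more than MAX_POSTS times in WINDOW_SECONDS."""
    now = time.time() if now is None else now
    timestamps = recent_posts.setdefault(user_id, deque())
    timestamps.append(now)
    # Drop timestamps that have aged out of the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_POSTS
```

A real system would add stemming, leetspeak normalization, and cross-forum similarity checks, but even this level of automation catches the obvious cases before a human ever sees them.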

Flagging

User flagging is a key tool in the fight against negative behavior. Your users are on the front line and they will always be faster than your team.

1. Make it prevalent. This functionality should be really easy to find, always.

2. Create specific flows. Moderation teams often struggle with flagging because so many things get flagged that they become overwhelmed. Consider more complex flows depending on which flag was thrown. Marked as annoying? Deprioritize. Marked as racist? Prioritize. Marked as “bugs me”? Implement a clever self-resolution flow like Greater Good and Facebook did.
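A minimal sketch of that kind of routing, assuming a hypothetical mapping from flag reasons to review priorities:

```python
from collections import defaultdict

# Assumed reason -> priority mapping (lower number = reviewed sooner).
FLAG_PRIORITY = {
    "racist": 0,      # prioritize
    "harassment": 0,
    "spam": 1,
    "annoying": 2,    # deprioritize
}

queues = defaultdict(list)  # priority tier -> list of flagged post ids

def route_flag(post_id, reason):
    """File a flag into the review queue for its priority tier."""
    priority = FLAG_PRIORITY.get(reason, 1)  # unknown reasons go mid-tier
    queues[priority].append(post_id)
    return priority
```

The point is simply that a racism flag and an “annoying” flag should never land in the same undifferentiated pile.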

Reputation Systems

Treat repeat offenders differently. Go pick up Building Web Reputation Systems and figure out what’s best for you.

1. Create visibility thresholds. Have repeat offenders’ posts be less visible to others (which really hurts their desire for attention). Or require people to get special flags to show up to general audiences, like Kinja did.

2. Have reputation affect flag weight. If a repeat offender’s post gets a single flag, weigh that more heavily than a single flag on a good actor’s post.
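One way that weighting might be sketched (the threshold and the weighting formula here are assumptions, not any platform’s actual values):

```python
REMOVAL_THRESHOLD = 3.0  # assumed weighted score at which a post is hidden

def flag_weight(author_strikes):
    """Each flag counts more heavily against authors with prior strikes."""
    return 1.0 + 0.5 * author_strikes

def should_hide(flag_count, author_strikes):
    """Hide a post once its weighted flag score crosses the threshold."""
    return flag_count * flag_weight(author_strikes) >= REMOVAL_THRESHOLD
```

Under this formula, a good actor’s post needs three flags before it’s hidden, while a post from a member with two prior strikes is hidden after just two.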

The punchline: Investing in moderation now saves money later.

I know it’s hard to prioritize spending for moderation, especially when you’re starting out. But “we’ll deal with that issue if it comes up” and “I’m sure people will behave” clearly don’t bear out. And if you wait until you have truly toxic norms and lack of certainty of punishment, it’s going to be way more costly.

Let’s go back to that ivy example. Have you seen what walls look like after you laboriously remove the ivy?

ivy suckers left on wall

You have to sand those suckers off and repaint.

The equivalent for communities? A whole lot of messaging, a whole lot of banning, and a whole lot of complaints until a new norm is established. Just ask Reddit.


 

A few notes…

Much of the research here is taken from the hard-to-read but incredibly valuable “Building Successful Online Communities”.

Although I have 10 years of experience in the world of community, I am not an expert at moderation or trust & safety. I’m sure I missed things or mischaracterized things. I would love to hear your insights in the comments!

 

My AMA with Bassey Etim, Community Desk Editor at The New York Times

Last week I had the pleasure of hosting an AMA with the very smart, very pleasant Community Desk Editor at The New York Times, Bassey Etim. Taking questions from me and the crowd, Bassey mulled over building moderation teams, the future of journalism, and getting buy-in from coworkers.

(Feel free to skip the first minute, which is mostly me getting set up in Blab and waiting for Bassey to call in.)

Bassey is speaking alongside folks from Etsy, Spark Capital, Genius, and Pure House at CMX Summit East, which I’m organizing!

Tactic Tuesday: Year-end best-of lists (with a twist)

The end of the year offers ample opportunities for rituals in your community. One of the most effective? Best-of lists.

Whether it’s a poll, a bracket, a forum thread, or something else, asking people what their favorite things were for the year creates great energy. Everyone has an opinion and, if carefully managed*, the disagreements can create in-depth debates that deepen connections between community members. Communities thrive through emotional connections, so don’t forget that fighting can be good.

But I love the twist r/comicbooks is giving it over on Reddit. Instead of just nominating top comic books, artists, writers, and the like, they’re also nominating top community members of the year. This strengthens emotional connections, validates community members’ time spent on the subreddit, and shows new members that this is a lively group that values its members. It’s a home run.


*The beauty of disagreement in best-of threads is that you can say “ok, make your own list”. Check out how just such a comment simultaneously empowers the angry community member while stopping a potential slugfest:

Reddit thread fighting about comic books

Two things the Washington Post doesn’t understand about comments

The Washington Post wrote an interesting piece on the state of comments on the web in response to the current Kinja/Jezebel offensive comment issues. Their take: maybe it’s just not worth it to have comments. It’s a great discussion to have, and largely I think the article is thought-provoking.

However, I think they missed two points (though I don’t blame them).

1. Sometimes the comments are half the reason you visit a site

io9, which is also on the Gawker network with Jezebel, is probably the site I visit the most outside of Gmail. I’m a huge geek, and I love their articles…but I also like connecting with fellow geeks, learning random facts that even the editors don’t know, and sharing in the joy of fandom – all in the comments.

Without comments, io9 would survive. But because of the comments, they thrive. (They even have regular open animated-GIF comment threads.)

2. Some content can’t really exist without comments

Similar to my last point, but worth calling out separately.

Jezebel is a women’s site, but more relevantly a feminist site. They spend their days calling out and debating women’s issues. Can you imagine a site like that without comments? These sort of issues are an ongoing discussion, not a piece of news.


That said, I totally understand why the Washington Post missed these points. They largely publish news. People come to news sites for news, not comments. Often, comments can actually misinform the reader about news (which is why I understand science sites like Pacific Standard turning comments off).  And there’s plenty of news that doesn’t really warrant comments – and often providing them can open a can of worms (I think it’s quite interesting how the New York Times only turns comments on for certain posts).

This is a very tough, very important debate to have. Turning comments off for some sites might make sense (though I would consider that a last resort). But let’s keep in mind that this is just not an option for some sites – and when comments are great, they’re incredibly valuable and even powerful.