
Building a Strong Community Culture

Building a strong community culture is incredibly important for the long-term success of any community. Without a great culture, a community can start to scare off new members, harm existing members, and damage your brand. But “culture” is one of those things that seems so touchy-feely that it’s easy to wonder if you have any control over it.

Thankfully, over my 15+ years working with communities at companies like HubSpot and Reddit, I’ve uncovered some of the core building blocks of a strong community culture.

A Strong Cultural Vision

You can’t have a strong culture if you don’t know what you want it to be! Taking the time to think about what vibe you want for your community – and what vibe will serve both your members and your business – is worthwhile. Are you creating a raucous nightclub or a quiet library? A messy fun run or a professional race? Writing this out will give you a north star to execute against.

Clear Guidelines

Correspondingly, it’s hard to enforce a culture if members don’t know what it’s supposed to be. Capturing the dos and don’ts of your community in a formal document will help guide your members and give you something to point to when you have to hand out punishments. The Coral Project has a great guide to building a code of conduct.

I recommend making your rules specific enough that they’re clear without having them be so targeted (“you can’t say words x, y, and z”) that you have to constantly amend them to factor in new, creative trolls. (And let me tell you, from 5 years at Reddit: they will always come up with new ways to be nasty.) And don’t forget to list them prominently – research has shown that this can decrease problematic posts.

What punishments you assign to transgressions will depend on your community and the level of transgression. In some cases you may want to consider a 3-strikes rule; in others, you may have zero tolerance.

scrabble tiles spelling 'rules'

Great Founding Members

The founding members of your community are going to set the vibe. Whatever content they post will be what the larger member base sees when they join. These are the village elders that newbies will look up to. So you absolutely must carefully screen your founding members to ensure that they both embody your cultural vision and are committed to helping you deliver on it.

Staff That Model Behavior

Just as your members will look up to your founding members, they’ll look up to you and your staff. The moment you break cultural norms, everyone will think it’s ok. Ensure that your team knows the rules and the vibe and diligently sticks to them.

Positive Reinforcement

Studies show that positive reinforcement tends to be more effective than negative reinforcement. Shout out the members you see doing an amazing job of living up to your values. Consider awards or surprise-and-delight budgets for these folks. But also consider positive reinforcement for problem children – if you rain praise down on someone when they make the right choice, they may lean away from the wrong choices they’d been making before.

Consistency of Punishment

Even if it’s not as effective as positive reinforcement, we do need to enforce our rules. Importantly, consistency of punishment is shown to be more effective than severity of punishment; it’s hard to prove any decrease in crime from the death penalty, whereas studies have shown that hard-to-avoid DUI checkpoints are quite effective.

This means making it easy for members to report transgressions, setting up automation to catch troublemakers, and enforcing the same way every time. Even if someone has traditionally been a good member of the community, you have to treat them the same as everyone else.

cars waiting in a line at night

Evolution of Culture as Necessary – WITH Your Community

Communities are living, breathing, evolving entities, and it’s rare that their culture will stay static or that a ruleset will cover every situation until the end of time. You will likely need to evolve your guidelines over time. To do this successfully, consider involving your community in the discussion and default to transparency about the changes.

The Secret Weapon

What makes a strong community culture can’t truly fit into a blog post; it’s the day-to-day work of community professionals nurturing, supporting, and enforcing in their communities. If your business hasn’t hired a community professional, that’s my cheat code for you: hire someone who is great at this.


“Rules” photo by Joshua Miranda

Cars photo by Michael Pointner

Why Adding Friction Could Make Your Community Healthier

Slippery warning sign

For years, designers have been talking about making things “frictionless”. And for good reason: the web was full of a lot of friction. Signup flows were labyrinthine, uploading and storing files was a hassle, and let’s not even talk about sites that didn’t work on mobile.

But in community design, friction is coming into vogue. Why? Because making it frictionless to say the first thing on your mind can often be a bad thing.

One of the biggest recent examples of this is Nextdoor’s racial profiling problem. Nextdoor allows members of a neighborhood to create an online community where they can talk about things happening in their neighborhood, trade items, and report crime and safety issues. And that last area is where the problems arose, because people with implicit biases started posting things like concerns about “‘light-skinned black female’ walking her dog and talking on her cellphone” and following up with horrifying comments like “I don’t recognize her. Has anyone described any suspect of crime like her?”

Often, these “suspicious characters” were simply neighbors who wouldn’t have been called out if not for the color of their skin. The posts were offensive, reinforced racial stereotypes, and also made it hard for police to sort out posts talking about actual crime.

Generally, community platforms try to deal with this by having clear guidelines and taking action after an offensive post goes up. But guidelines are often easily ignored, and taking a post down doesn’t undo the negative impact it already had. The most effective way to change biased behavior like this is to add friction that makes people stop and think while they’re taking an action.

Nextdoor has done an amazing job addressing this. As soon as you mention race in a Crime & Safety post, you are required to list additional, non-racial attributes. This, they explain, creates “decision points to get people to stop and think as they’re observing people to cut down on implicit bias.” The result? Racial profiling posts have dropped by 75%.
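
To make the mechanism concrete, here is a rough sketch of what a “stop and think” check like this might look like. To be clear, this is not Nextdoor’s actual implementation; the word list, field names, and threshold are invented for illustration.

```python
# Hypothetical "stop and think" friction check, not Nextdoor's real code.
# The word list, detail fields, and threshold below are invented.
RACE_TERMS = {"black", "white", "asian", "hispanic", "latino",
              "light-skinned", "dark-skinned"}
DETAIL_FIELDS = ("clothing", "hair", "height", "build", "vehicle")
MIN_NON_RACIAL_DETAILS = 2

def crime_post_prompts(description: str, details: dict) -> list:
    """Return the extra prompts a poster must answer before submitting."""
    prompts = []
    if any(term in description.lower() for term in RACE_TERMS):
        provided = [f for f in DETAIL_FIELDS if details.get(f)]
        if len(provided) < MIN_NON_RACIAL_DETAILS:
            prompts.append(
                "You've mentioned race. Please describe at least two other "
                "attributes (e.g. clothing, hair, height) before posting."
            )
    return prompts
```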

Nextdoor crime posting

Nextdoor isn’t new to friction. They require those launching neighborhoods on the platform to recruit a certain number of members in a certain period of time, and all members must prove residence. This means many neighborhoods never get off the ground…but also means they avoid empty, inactive communities that would make their service look bad. Stack Exchange does the same thing with new sites on their networks, requiring them to amass a certain amount of activity before they’re publicly launched.

Airbnb, faced with similar racial profiling issues, is taking a number of actions, including requiring hosts to take a pledge promising not to be biased. You might think a pledge won’t change people’s actions, but studies have found that students required to pledge to obey their school’s honor code were less likely to cheat – even if the school didn’t have an honor code.

Discourse and Product Hunt boldly put friction at the very start of their experience. You previously couldn’t comment on Product Hunt without an invite from an existing member, and the Discourse community platform allows you to set certain achievements (number of votes, number of comments, etc) that a member must hit before they can take greater actions. Over at Reddit, we don’t allow you to create a subreddit unless your account has a minimum level of karma and is at least 30 days old.
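
In the same spirit, here’s a minimal sketch of what an action gate like these might look like in code. The 30-day minimum comes from the paragraph above; the karma threshold is a placeholder value, not Reddit’s real requirement.

```python
from datetime import datetime, timedelta

# A minimal sketch of an action gate like the ones described above.
# The 30-day minimum comes from the post; the karma number is a placeholder.
MIN_ACCOUNT_AGE = timedelta(days=30)
MIN_KARMA = 50  # invented value for illustration

def can_create_community(account_created: datetime, karma: int) -> bool:
    """Only accounts with some history unlock the higher-impact action."""
    old_enough = datetime.utcnow() - account_created >= MIN_ACCOUNT_AGE
    return old_enough and karma >= MIN_KARMA
```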

Metafilter literally added a payment to their signup process – not to make money, but purely to create friction that weeds out casual sign-ups. They only wanted people who were truly invested.

It’s exciting to see community design start to step away from traditional (and generally sales-based) design. For too long, community professionals have labored within inflexible platforms and struggled to react to issues rather than prevent them. Once we start putting thought into where we create or remove friction, we can build communities that are more successful, productive, and civil.


Full disclosure: I consulted for Nextdoor from 2015-2016, but did not work on the project(s) listed above.

Thank you to the Social Media Clarity podcast for their great work summarizing this trend!

Trolling isn’t outlier behavior, and we can stop it

Large troll standing over a house

For years the picture painted of trolls was pretty straightforward: while most members of online communities are good people, there are a few horrible, unchangeable, malicious people called “trolls” who live to make everyone’s life terrible. Our job was to try to keep them out, ban them when they showed up, and sigh and accept that they were an inevitable part of any online community.

What has become clear is that we were wrong; most trolls are regular people.

Two recently released studies have shown that the majority of “troll” behavior is actually generated by normal people who have been triggered into acting negatively, usually through a combination of their own mental state (i.e. having a bad day) and social norms (e.g. seeing other people troll and get away with it).

  1. The famously toxic League of Legends found that only about 1% of their players were consistently toxic, and those produced only about 5% of the toxicity. “The vast majority was from the average person just having a bad day.”
  2. Scientists from Cornell and Stanford found that people are more likely to troll when they’re in a negative mood, when it’s late at night, and when the first comment on a thread is a “troll comment”.

This is a game-changer for several reasons.

One, it means we may have been banning or punishing a large number of normal people who were just doing what they saw others doing. It’s likely that we only reinforced their negative behavior, rather than helping them adjust it.

Two, it means there’s a lot more we can do to prevent trolling. A recent experiment on Reddit found that rule posts stuck to the top of a thread increased rule following by 7.3 percentage points and increased newcomer participation by 38.1%. League of Legends found that some simple priming “reduced negative attitudes by 8.3%, verbal abuse by 6.2% and offensive language by 11%”. Some people are further down the rabbit hole of negativity, but even they may be saved. We are not helpless to decrease trolling, and continuing to act like we are is irresponsible.

(You can find my much longer post on ways to create positive online spaces here.)

Three, it means community managers are even more important in any organization that has an interactive online space. We are no longer just reactive janitors, apologizing for the mess. We can be proactive social designers. (Be sure to go seek out some behavioral psychology books and classes, folks.)

To me, this is extremely exciting. It means our online communities can become more positive, safe places. And it means that our work is far from done. Complacency happens in every industry. The community industry has finally started pushing through our complacency about ROI. Next, let’s tackle trolling.


It’s important that I note that these findings don’t mean there aren’t real, horrible people on the internet. It doesn’t mean we need to put up with harassment just because someone had a bad day. I’m not condoning bad behavior – I’m just optimistic that we can change much of it.


Troll photo courtesy of EE Shawn

Two Causes of Toxic Online Spaces (and some solutions)

This is adapted from my talk at Bridge Keepers.

speeding cars

Have you ever gone over the speed limit?

Probably. In an observational study, 7 out of 10 drivers sped in an urban area. At some point, you were likely part of that 7.

Have you heard of how toxic the community for the game League of Legends is? Legendary. (Pun intended.) And you would expect that this is mainly generated by a bunch of bad actors. But it turns out only about 5% of negativity came from trolls – the rest came from normal folks “having a bad day”.

So what is it that causes relatively normal people to behave so badly?

I think there are two major things at play: Normalization and Lack of Punishment.

Normalization

We like to think that we are individuals, unaffected by others. But it’s just not true.

The amazing researcher Dan Ariely (author of one of my favorite books) did a study in which he found that people would change their choice of beer at a brewery based on what the person before them ordered. If that person ordered what they were planning on ordering, they changed their order.

That’s kind of insane. If you want one type of beer, why order a different type? Well, because there’s a norm at play that we should achieve variety to better understand our options, and our brain automatically kicks in – even though, as the study found, people who make these group-motivated choices are generally less happy with the result.

What I’m saying is: Observing others’ behavior actually changes ours.

Which means if we see someone tweet something this horrible…

Offensive tweet from Milo Yiannopoulos: “There’s a new feminist Barbie. When you pull the string it says ‘Math is hard, let’s lie about rape.’”

…then we think this tweet must be acceptable, because at least it’s a joke! Right? Right??

Offensive tweet: A lesbian, an alcoholic, and a heavily sedated woman walks into a bar. The bartender asks, "What'll you have, Hillary?"

Types of Norms

There are two types of norms.

 Injunctive norms are basically rules.

 Descriptive norms are the norms we understand from our interactions with others or the actual reality we see in the world around us.

Going back to the speeding example: We all know that speeding is illegal. And we know what the rules are. But the descriptive norm is “eh, it’s ok to speed a little bit”. So we do it without thinking of ourselves as criminals.

ivy covering a building

The problem with this is that norms are like ivy. Once they’re firmly rooted, they’re very hard to change.

So, how do we create positive norms?

Clear Guidelines

Obvious, but bears repeating. Your community needs to have guidelines, and they should be:

1. Simple. If they’re complex, nobody will bother reading them.

2. In-line. Nobody is going out of their way to find your rules. Put them in-line where possible. Many subreddits do a great job of this:

Screenshot of a subreddit displaying its rules in-line

3. General. If your rules are too specific people won’t bother spending the time understanding what they can and can’t do. (And bad actors may try to find a way to technically obey the rules while causing trouble.)

Consistently Applied Guidelines

Studies show that people are much more likely to obey the rules if they’re consistently applied. This means:

1. Train, train, train your team. Being consistent is hard. You should drill, review past moderations together, do flashcards, whatever you need to do to get consistent across your whole org.

2. Create awareness of implicit bias. None of us wants to think we’re biased, but as shown in the beer experiment above, we often are without even realizing it. And when we are rule enforcers, this can be really problematic. There’s immense evidence that police are biased, and that’s something that has very serious consequences. Are these police racist? No, they’re just operating off biases they may not realize they have. There are plenty of great orgs that help teams work on realizing what implicit biases they have. (A few that have been recommended to me: Paradigm, Project Include, Women’s Leadership Institute.)

Self-Moderation

I think this is one of the most criminally underrated sets of tools for driving specific behavior.

1. Display true norm rates. People adjust their habits when they see how others behave (like these college students that drank less once they saw the average number of drinks their peers were drinking).

2. Prime people. Putting people in the right mindset can be incredibly effective. League of Legends found that simply displaying messages before a game (like “Players perform better if you give them constructive feedback after a mistake”) decreased bad behavior by 11%.

3. Add friction. We are obsessed in the tech world with “frictionless” experiences. But if someone’s action may negatively affect dozens, hundreds, or millions of people? A little friction can be good. Nextdoor added some additional steps you have to take before posting about suspicious people on their platform, and racial profiling dropped by 75%.

Bad news, though… guidelines backfire if the behavior people actually see doesn’t match them.


The more litter we see? The more likely we are to litter.

The more we see rule breakers unpunished? The more likely we are to break rules.

Rules are not effective if they’re not enforced.

Especially if there’s a big benefit to breaking the rules. After all, if:

  • Attention is the goal

  • And negativity generates attention

  • And punishment is rare…

…why would you stop? Especially if you can get a book deal from being horrible?

This is the Tragedy of the Commons.

We all know the world would be better if we all obeyed the rules. But if you personally benefit from breaking them, you may do so at the expense of everyone else.

Certainty of Punishment

So how do you make these guidelines effective? Create certainty of punishment. If punishment is unlikely, people will keep offending.

The death penalty? It doesn’t look like it actually decreases crime. Because even though it’s a severe punishment, you’re unlikely to get caught and unlikely to be given that sentence.

Blood alcohol checkpoints? Very effective at decreasing drinking and driving. Because you’re very likely to be caught and punished.

So, how do we create certainty of punishment?

Automation

This is the baseline stuff you should be doing.

1. Blacklist words. Anyone using unacceptable words should automatically be penalized.

2. Spot suspicious behavior. Multiple posts in a short period of time? Similar posts across sub-forums? Shut it down automatically. (A minimal sketch of both checks follows below.)
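
Here’s a minimal sketch of a word blacklist plus a simple rate check (one flavor of suspicious-behavior detection), assuming a simple in-memory setup. The word list, time window, and post limit are placeholders you’d tune for your own community.

```python
import time
from collections import defaultdict, deque

# Minimal sketch of baseline automation; the blacklist, window, and limit
# below are placeholders, not recommendations for any particular platform.
BLACKLIST = {"badword1", "badword2"}    # stand-ins for your unacceptable words
RATE_WINDOW_SECONDS = 60
MAX_POSTS_PER_WINDOW = 5

recent_posts = defaultdict(deque)       # user_id -> timestamps of recent posts

def check_post(user_id: str, text: str) -> str:
    """Return 'penalize', 'throttle', or 'allow' for an incoming post."""
    if any(word in text.lower() for word in BLACKLIST):
        return "penalize"                # automatic penalty for blacklisted words

    now = time.time()
    timestamps = recent_posts[user_id]
    timestamps.append(now)
    while timestamps and now - timestamps[0] > RATE_WINDOW_SECONDS:
        timestamps.popleft()             # drop posts outside the window
    if len(timestamps) > MAX_POSTS_PER_WINDOW:
        return "throttle"                # too many posts in a short period

    return "allow"
```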

Flagging

User flagging is a key tool in the fight against negative behavior. Your users are on the front line and they will always be faster than your team.

1. Make it prevalent. This functionality should be really easy to find, always.

2. Create specific flows. Teams often struggle with flagging because so many things get flagged that they get overwhelmed. Consider more specific flows depending on which flag was thrown. Marked as annoying? Deprioritize. Marked as racist? Prioritize. Marked as “bugs me”? Implement a clever self-resolution flow like Greater Good and Facebook did. (A tiny routing sketch follows below.)
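
As an illustration of flag-specific flows, here’s a tiny routing sketch; the flag types and queue names are made up rather than any particular platform’s taxonomy.

```python
# Illustrative flag routing; flag types and queue names are invented.
FLAG_ROUTES = {
    "racist": "urgent_review",      # prioritize for immediate human review
    "harassment": "urgent_review",
    "annoying": "low_priority",     # deprioritize
    "bugs_me": "self_resolution",   # kick off a user-to-user resolution flow
}

def route_flag(flag_type: str) -> str:
    """Send each flag to a queue matching its severity."""
    return FLAG_ROUTES.get(flag_type, "standard_review")
```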

Reputation Systems

Treat repeat offenders differently. Go pick up Building Web Reputation Systems and figure out what’s best for you.

1. Create visibility thresholds. Have repeat offenders’ posts be less visible to others (which really hurts their desire for attention). Or require people to get special flags to show up to general audiences, like Kinja did.

2. Have reputation affect flag weight. If a repeat offender’s post gets a single flag, weigh that more heavily than a single flag on a good actor’s post.
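
Here’s a sketch of how both ideas might fit together, assuming each member has a 0-to-1 reputation score; the weights and thresholds are invented for illustration.

```python
# Sketch of reputation-aware moderation; all numbers here are invented.

def flag_weight(reporter_rep: float, author_rep: float) -> float:
    """A single flag counts for more against a repeat offender,
    and slightly more when it comes from a trusted reporter."""
    offender_multiplier = 1.0 + (1.0 - author_rep) * 2.0
    reporter_multiplier = 0.5 + reporter_rep
    return offender_multiplier * reporter_multiplier

def post_visibility(author_rep: float) -> str:
    """Low-reputation authors get reduced reach until they earn it back."""
    if author_rep < 0.2:
        return "hidden_pending_review"
    if author_rep < 0.5:
        return "limited_reach"
    return "public"
```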

The punchline: Investing in moderation now saves money later.

I know it’s hard to prioritize spending for moderation, especially when you’re starting out. But “we’ll deal with that issue if it comes up” and “I’m sure people will behave” clearly don’t bear out. And if you wait until you have truly toxic norms and lack of certainty of punishment, it’s going to be way more costly.

Let’s go back to that ivy example. Have you seen what walls look like after you laboriously remove the ivy?

ivy suckers left on wall

You have to sand those suckers off and repaint.

The equivalent for communities? A whole lot of messaging, a whole lot of banning, and a whole lot of complaints until a new norm is established. Just ask Reddit.


 

A few notes…

Much of the research here is taken from the hard-to-read but incredibly valuable “Building Successful Online Communities”.

Although I have 10 years of experience in the world of community, I am not an expert at moderation or trust & safety. I’m sure I missed things or mischaracterized things. I would love to hear your insights in the comments!

 

My AMA with Bassey Etim, Community Desk Editor at The New York Times

Last week I had the pleasure of hosting an AMA with the very smart, very pleasant Community Desk Editor at The New York Times, Bassey Etim. Taking questions from me and the crowd, Bassey mulled over building moderation teams, the future of journalism, and getting buy-in from coworkers.

(Feel free to skip the first minute, which is mostly me getting set up in Blab and waiting for Bassey to call in.)

Bassey is speaking alongside folks from Etsy, Spark Capital, Genius, and Pure House at CMX Summit East, which I’m organizing!