
Raising the bar on content moderation for social and gaming platforms (VB Live)

How did hate speech, abuse, and extremism become the “cost of being online” — and how can you make social platforms engaging, safe, and ultimately, more profitable? Join analyst and author Brian Solis and Two Hat CEO Chris Priebe to learn more about the changing internet landscape and the role of content moderation. 

Register here for this free VB Live event. 


User retention is the key to long-term success, and user engagement is one of the most essential tools you have to achieve it. Allowing user-generated content, which includes things like comments, private messages, chat, and media uploads, is among the top ways to gain an audience and keep it, because users are their own best selling point. But that's only true if the community those users create with their shared content is safe, affirming, and welcoming to new members.

That kind of community has become something of a holy grail, though. Opening up social features on a platform makes a business vulnerable to the very things that drive users away: hate speech, social abuse, objectionable or damaging media, and more. Users who are offended are just as likely to abandon your business as users who are harassed, and issues like these always damage your brand's reputation.

The risks of going social

Are the risks worth it? And how big are those risks likely to be? Your community's demographic is one of the biggest underlying factors in the kinds of threats your company and your brand will need to stay on top of. If your users are under 13 (common with video games), you need to be COPPA compliant, which means barring all personally identifiable information from the platform. If it's an edtech platform, you'll need to be CIPA and FERPA compliant as well.

Over-18 communities ostensibly have adult users who you’d expect to act like adults, but then you see white supremacist mass shootings broadcast live and child predators stalking the comments of young YouTubers.

Determining your community’s voice

These are, of course, extreme examples. Between them lies a wide spectrum of community voices, styles, and interactions, and where you draw the line, and why, depends directly on your brand's voice, style, and tone, as well as on an understanding of what your brand stands for. It means thinking about what kind of damage a pornographic post would do to your brand, how your audience and the media would respond, and how that differs from how you would handle other potentially problematic messages or behavior.

It also means carefully defining terms like hate speech, sexual harassment, and abuse, so that your standards of behavior are clear to your community right from the start. Users should also be required to agree to your terms of service and community rules before they're permitted to register and begin contributing.

The global response

The ugliness of so much online behavior is finally reaching critical mass in the collective consciousness of the 4 billion people who are now online (more than half the world’s population). There’s a rallying cry for social platforms across the globe to clear out the mess and create safe online spaces for everyone.

In 2017, Mark Zuckerberg released a statement decrying the kinds of posts, comments, and messages that were slipping through the cracks of Facebook's moderation without ever being reviewed or responded to, including live-streamed suicides and online bullying.

That shifted the conversation from whether moderating content violates freedom of speech to the recognition that no brand owes a user a platform. We've seen it manifest in Twitter's growing commitment to banning serial harassers, and more recently in the decisions by Facebook, Twitter, Apple, and YouTube to remove Alex Jones and InfoWars content from their platforms.

It's no longer a question of whether we should, but of how we go about it.

Making social spaces safe

The growing interest in creating safer spaces means that best practices are starting to solidify around comment moderation and community curation. Community owners are learning to take a proactive approach to keeping their communities safe from the kinds of content they want to eliminate.

That takes sophisticated moderation tooling: in-house filters and content moderation systems that harness the power of AI, combined with real people overseeing the action, because human judgment is essential for identifying edge cases and community risks and handling them sensitively.
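To make that concrete, here is a minimal sketch of the filter-plus-AI-plus-human pattern described above. The blocklist, scoring function, thresholds, and escalation logic are all hypothetical placeholders for illustration, not Two Hat's product or any specific vendor's API.

```python
# A minimal sketch of a layered moderation pipeline: fast filter first,
# then an AI risk score, with the ambiguous middle band escalated to humans.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ModerationResult:
    action: str   # "approve", "reject", or "escalate"
    reason: str


# Hypothetical keyword filter; a real deployment would use curated,
# policy-specific lists and smarter matching.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}


def moderate(text: str, risk_score: Callable[[str], float]) -> ModerationResult:
    """Run a cheap filter, then an AI score, and send edge cases to people."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return ModerationResult("reject", "matched blocklist")

    score = risk_score(text)  # e.g. estimated probability of a policy violation
    if score >= 0.9:
        return ModerationResult("reject", f"high AI risk score ({score:.2f})")
    if score <= 0.2:
        return ModerationResult("approve", f"low AI risk score ({score:.2f})")
    # The uncertain middle band goes to a human moderator, as the article recommends.
    return ModerationResult("escalate", f"ambiguous score ({score:.2f}), queued for human review")


if __name__ == "__main__":
    # Stand-in scoring function for demonstration only.
    fake_model = lambda text: 0.55 if "fight" in text.lower() else 0.05
    print(moderate("Good game, everyone!", fake_model))
    print(moderate("Meet me after school, we'll fight", fake_model))
```

The design point this illustrates is the one in the paragraph above: automation handles the clear-cut cases at scale, while human judgment is reserved for the ambiguous ones that need sensitivity.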

The final decision

While social features can pose a risk to your brand and reputation, users flock to positive communities where like-minded fans of your brand can geek out safely. Managed properly, the benefits are significant, all the way down to your bottom line.

Register for this VB Live event now, and join veteran analyst and market influencer Brian Solis and Two Hat CEO and founder Chris Priebe for a deep dive into the evolving landscape of online content and conversations, and how content moderation best practices and tools are changing the game.


Don’t miss out!

Register here for free.


You’ll learn:

  • How to start a dialogue in your organization around protecting your audience without imposing on free speech
  • The business benefits of joining the growing movement to “raise the bar”
  • Practical tips and content moderation strategies from industry veterans
  • Why Two Hat’s blend of AI+HI (artificial intelligence + human interaction) is the first step towards solving today’s content moderation challenges

Speakers:

  • Brian Solis, Principal Digital Analyst at Altimeter, author of Lifescale
  • Chris Priebe, CEO & founder of Two Hat Security


Sponsored by Two Hat Security 
