Ethical Moderation Series Part 1:

What is Ethical Moderation and Why Should You Care? 


This series addresses content moderation from a user-generated content perspective, where “moderators” are social media or community managers working for commercial entities. While we reference research on moderators hired by social media platforms, we do not address the many issues they face here. If you’re interested in learning more about social media platform moderators, we recommend starting with this Financial Times article from May 2023.


In Part 1 of our three-part Ethical Moderation Series, we define ethical moderation and explain why it’s important.


Ethical moderation is…

You may have noticed that on our website and in Areto Labs documents, we reference “ethical moderation” and our own ethical moderation practices. But what is it? And why is it important?

Ethical moderation is a practice centered on minimizing the harm caused by online abuse and harassment, both within a digital community and for those tasked with moderating the community’s content.

✅ Ethical moderation focuses on safeguarding the mental wellbeing of both community members AND moderators while fostering a respectful online environment 

✅ Ethical moderation aims to decrease the necessity for human intervention in handling harmful content 

✅ Ethical moderation employs unbiased machine learning models to ensure diverse perspectives and communities are represented within deployed moderation tools

✅ Ethical moderation reduces or eliminates the reliance on outsourcing moderation tasks, particularly to economically disadvantaged countries

✅ Ethical moderation encompasses actions such as blurring text and issuing warnings to shield individuals from potentially harmful content

 

What is content moderation?

Content moderation is the process of monitoring and managing user-generated content on digital platforms, websites, social media or online communities to ensure that it complies with a set of predefined guidelines. 

For brands that keep content moderation in-house, it is often part of the social media or community manager’s role. For those community managers, content moderation means balancing brand guidelines with connecting and building relationships with customers and fans, all while addressing inappropriate comments without alienating fans or causing pile-ons.

 

Why ethical moderation means exercising duty of care

While moderating toxicity and spam on social media platforms is essential to maintaining safe and respectful online communities, it can have a significant impact on the mental health and wellbeing of the moderators themselves.

Exposure to hateful or disturbing content can lead to feelings of anxiety, depression, and burnout, which can have long-lasting effects on a person's mental health. 

In the Content Marketing Institute podcast episode “Who Will Guard the (Social) Guardians,” TED Conferences’ Facebook platform manager Ella Dawson says, “in most cases, the negative activity is aimed at the brand or a community rather than the anonymous employee working behind that logo. Still, it can be hard not to take certain things personally. It’s very difficult to take your own personal identity out of it…There are moments where it does feel like you’re the one being insulted or attacked.”

Studies have shown that content moderation can take a toll on the wellbeing of those doing it. Imagine sifting through large volumes of content every day, much of it disturbing, like hate speech or threats. It’s no surprise that this exposure can lead to serious psychological stress: content moderators are at a higher risk of developing conditions like post-traumatic stress disorder (PTSD) and depression (Gray et al., 2017). On top of that, the pressure to review content quickly can lead to burnout, as moderators deal with the emotional weight of the job (Roberts et al., 2020).

Furthermore, moderators often have to make difficult decisions that can have real-life consequences for the users involved. This can lead to a sense of moral distress and guilt, especially when dealing with issues such as hate speech or cyberbullying. 

To make matters more challenging, the rules for content moderation can be unclear and inconsistent, leaving moderators feeling unsupported and exposed to potential backlash (Roberts et al., 2020; Roberts et al., 2021). 

By embracing ethical moderation, you commit not only to protecting online communities but also to safeguarding the psychological wellbeing of those working diligently behind the scenes. As responsible stewards of online spaces, you recognize the imperative of ethical moderation in building a safer, more inclusive and respectful digital world for all.

Are you practicing ethical moderation?

In Part 2 of our Ethical Moderation Series, we walk you through the key questions to ask when evaluating ethical moderation practices.

To learn more about Areto’s ethical moderation practices or to get started on your ethical moderation journey, get in touch today. 

 

References:
