The Human Cost of Keeping Platforms Safe
Most users never think about who reviews the content they report. They click a button and move on. Someone else handles it. That someone is a content moderator. They show up for their shift, open a queue, and start working through what other people have flagged. Hate speech. Threats. Graphic violence. Sometimes worse.
One former moderator, who worked for an outsourcing firm for nearly five years, described her experience in an interview with Equal Times. She talked about watching a child being exploited sexually, and how that image never left her. She felt alone doing that work. Her sleep deteriorated. She would see the images in her nightmares and wake up more tired than when she went to bed.
Her story is not unusual. Research using large samples of commercial moderators found that more than a quarter showed moderate to severe mental distress. Another quarter were experiencing low wellbeing. A sociologist who has studied the industry for six years told Equal Times she has never met a moderator who has not suffered mental health problems connected to the job. She compared it directly to coal mining as a hazardous profession.
The content moderator job
Content moderation is one of those jobs that sounds simple from the outside. The title feels administrative. The reality is anything but.
A moderator’s day is spent making rapid decisions about content that ranges from mildly offensive to deeply disturbing. Spam and misleading ads sit at one end. Child sexual abuse material sits at the other. Most moderators see the full range, often within the same shift.
Research published in Cyberpsychology found that commercial moderators show symptoms consistent with repeated trauma exposure. Intrusive thoughts triggered by everyday situations. Avoidance of places or people that remind them of content they have seen at work. Anxiety, cynicism, and emotional detachment that build up slowly over time. The researchers compared the profile of moderators to professionals working in emergency services and social care.
What makes this harder is that moderators are often expected to absorb it quietly. The same research points to a reluctance to talk openly about how the work is affecting them, sometimes because they are not fully aware of it themselves. Burnout and secondary trauma do not arrive all at once. They build in the background until something gives.
A study on volunteer moderators found that mental distress shows up across platforms and communities, not just in the most extreme cases. Even handling everyday toxicity at scale takes a toll that most job descriptions never mention.
And yet this work is essential. Without it, the platforms most people use every day would be unusable.
When platforms grow, the problem grows with them
Individual burnout is serious enough on its own. But there is a structural problem underneath it that makes everything worse.
Online platforms do not stay the same size. They grow. And when they grow, the volume of content that needs moderating grows with them. More users means more posts, more messages, more reports. The queue does not shrink. It just keeps filling up.
To get a sense of the scale, Facebook took action on seven million pieces of bullying- and harassment-related content in a single quarter in 2023. TikTok removed around 99 million videos in the first quarter of 2024. That is not occasional pressure. That is constant volume, every single day.
If you want to understand what is actually happening in your own community, tools like a Tox Scan can analyse real moderation data and show where volume, risk, and exposure are building up.
The typical response has been to add more moderators. More people, more shifts, more coverage. But adding people to a broken system does not fix the system. It just spreads the damage more widely.
Research on moderator wellbeing points to lack of control as a major factor in burnout. Moderators usually have no say in what comes next. They cannot control the pace, the mix of content, or how long they stay exposed to something difficult. They just work the queue as it comes.
That combination does not just make moderator burnout possible. It makes it very likely.
A study from the University of Michigan found that time pressure is one of the main drivers of burnout, alongside team conflict and the steady exposure to toxic content. The push to process more, faster, leaves very little room for the kind of care the job requires.
The result is predictable. High turnover. Experienced moderators leave. New ones are hired and trained. And the cycle starts again.
Better systems change the shape of the work
When people talk about improving moderation, the focus often lands on detection. How accurate the model is. How fast it can process content. How much cost can be reduced.
Those things matter. But they are not what moderators feel day to day.
What matters is what ends up in their queue.
In most systems, everything flows toward the same place. Obvious spam, low level toxicity, edge cases, and the most severe violations all compete for attention. The result is volume without structure, and exposure without control.
Better systems deal with that earlier.
They decide what should never reach a human in the first place. The obvious cases are handled instantly. The repetitive ones are absorbed by models trained for exactly that purpose. Only the content that genuinely needs human judgment is passed through, and when it is, it arrives with context.
Aiba’s moderation platform, Amanda, is built around that idea.
Fast deterministic filters handle clear violations immediately. Smaller, specialised language models process most of the volume. Larger reasoning models are used more selectively, where nuance matters.
Most content never reaches a human moderator at all. And the content that does is already filtered, structured, and prioritised.
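To make that flow concrete, here is a minimal Python sketch of a tiered triage pipeline of the kind described above. It is illustrative only, not Amanda's actual implementation: the filter patterns, model stubs, thresholds, and routing names are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_REMOVE = "auto_remove"    # handled by the system, never queued
    AUTO_ALLOW = "auto_allow"
    HUMAN_REVIEW = "human_review"  # only genuinely ambiguous content

@dataclass
class Decision:
    route: Route
    reason: str
    priority: int = 0  # higher means a human sees it sooner

# Tier 1: deterministic rules (placeholder patterns for illustration).
BLOCKED_PATTERNS = ["buy-followers.example", "free crypto giveaway"]

def small_model_score(text: str) -> float:
    """Stand-in for a small, specialised classifier returning P(violation)."""
    toxic_markers = ["idiot", "kill yourself"]
    return min(1.0, 0.4 * sum(m in text.lower() for m in toxic_markers))

def large_model_assess(text: str) -> dict:
    """Stand-in for a slower reasoning model, called only on the ambiguous middle."""
    return {"violation": False, "confidence": 0.6, "severity": 2,
            "summary": "possible harassment, context unclear"}

def triage(text: str) -> Decision:
    # Tier 1: fast deterministic filters catch clear violations immediately.
    if any(p in text.lower() for p in BLOCKED_PATTERNS):
        return Decision(Route.AUTO_REMOVE, "matched deterministic filter")

    # Tier 2: a small specialised model absorbs most of the volume.
    score = small_model_score(text)
    if score < 0.10:
        return Decision(Route.AUTO_ALLOW, "low risk")
    if score > 0.95:
        return Decision(Route.AUTO_REMOVE, "high-confidence violation")

    # Tier 3: a larger reasoning model looks at the ambiguous middle.
    # Only if it is still unsure does the item reach a human, with context attached.
    assessment = large_model_assess(text)
    if assessment["confidence"] >= 0.9:
        route = Route.AUTO_REMOVE if assessment["violation"] else Route.AUTO_ALLOW
        return Decision(route, assessment["summary"])
    return Decision(Route.HUMAN_REVIEW, assessment["summary"],
                    priority=assessment["severity"])

if __name__ == "__main__":
    for post in ["free crypto giveaway!!", "nice photo", "you are an idiot"]:
        print(post, "->", triage(post))
```

The thresholds and stub models are placeholders; the point is the shape of the flow. The cheap tiers absorb the bulk of the volume, and the human queue only receives what the earlier tiers could not settle, along with a reason and a priority.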
That does not make the job easy. But it changes what the job feels like to do.
A moderator who is not spending half their shift clearing obvious noise has more capacity for the cases that really require attention. Decision fatigue builds more slowly. The queue becomes something they can work through, instead of something that overwhelms them.
Research consistently points to volume and lack of control as the main drivers of burnout. Systems that reduce unnecessary exposure and give the work some shape before it hits the queue do not remove the problem entirely. But they move it in the right direction.
And for many teams, that shift is the difference between a system that burns people out and one they can sustain.
Retention is a business problem too
Moderator burnout is a human problem first. But it is also a practical one that platforms cannot afford to ignore.
Moderation teams tend to have high turnover. People leave because the work is hard, the support is often lacking, and the mental cost builds faster than most employers acknowledge. Replacing them is expensive. But the bigger cost is what walks out the door with them.
An experienced moderator understands their community in ways that take months to develop. They recognise patterns that do not show up in policy documents. They have built judgment around edge cases that a new hire will have to learn the hard way. When they leave, that knowledge goes with them.
Studies on volunteer moderators show that decisions to quit are closely tied to mental distress. The ones who stay tend to feel supported and valued. The ones who leave early often feel the demands of the role exceed what the system around them can handle.
The same pattern shows up in commercial moderation teams, often with higher stakes. Research comparing moderators to crisis professionals suggests the same conditions help people cope: clear structure, real acknowledgment of the load, and tools that reduce unnecessary exposure.
Retention and wellbeing are tightly connected. Platforms that invest in conditions that make the work sustainable will keep better people for longer. That directly affects moderation quality, consistency, and ultimately the health of the community itself.
A more honest conversation about moderation
The technology behind content moderation has improved a lot in recent years. Detection is faster. Models are more accurate. Systems can handle scale in ways that were not possible five years ago.
But the human side has not kept up.
The people doing this work are still mostly invisible. The mental cost is still treated as an unfortunate side effect rather than something that can be designed for. And too many platforms still measure success by throughput and accuracy without asking what it costs the people producing those numbers.
That is starting to change. Regulators are paying closer attention to working conditions. Research on moderator wellbeing is growing. And a small but increasing number of platforms are beginning to treat moderator welfare as a real operational priority.
The technology will keep improving. Models will get better. Systems will get smarter. The share of content that ever reaches a human will continue to fall.
But humans will remain part of this system for a long time. They will handle the cases that require judgment, cultural understanding, and moral reasoning that no model fully captures.
Those people sit at the end of the queue.
What we choose to send them, and how we structure the work around them, is not just a technical decision. It is a human one.
Getting that choice right is what good moderation looks like.