I’ve been thinking about this Guardian piece from last July ever since it crossed my desk again. It’s one of those stories that sticks with you, not because it’s surprising, but because it’s so damn predictable and yet we keep looking away.
The headline says it all: African workers powering the AI revolution for about a dollar an hour. But the real gut punch is the details.
Let me tell you about Mercy. She’s a content moderator in Nairobi, working for Meta through an outsourcing firm. One day, a flagged video pops up on her screen. It’s a fatal car crash. She zooms in. The victim is her grandfather. She runs out crying. Her supervisor follows, not to console her, but to remind her she needs to finish her shift if she wants to hit her targets. Then more tickets appear: the same crash, from different angles. Her grandfather’s face. Over and over. Four people died. She had to keep watching.
This isn’t an outlier. The reporting draws on interviews with dozens of workers at three data annotation and content moderation centres across Kenya and Uganda. These are the people who make AI work. Content moderators manually trawl through social media posts to remove toxic content. Data annotators label data so algorithms can understand it. Without them, your Facebook feed would be a nightmare, and your ChatGPT wouldn’t know a cat from a car.
But here’s what we don’t talk about: the toll. Workers witness suicides, torture, and rape “almost every day.” They’re expected to process 500 to 1,000 tickets per shift, each one potentially scarring. One moderator said, “Physically you are tired, mentally you are tired, you are like a walking zombie.” Another mentioned colleagues who attempted suicide, spouses who left, lives shattered.
The company policies make it worse. Workers who run away from their desks after seeing something horrific get marked down for not logging the right code. The “wellness counsellor” is a colleague with no training. You get 30 minutes a week. That’s it.
And the pay? About a dollar an hour. Let that sink in. The same companies that spend billions on GPUs and executive compensation are paying the people who literally scrub the poison from their platforms less than a living wage.
I’ve been in tech long enough to know this isn’t new. Outsourcing content moderation to the Global South has been standard practice for over a decade. But with the AI boom, it’s gotten worse. Data annotation is now a massive industry, feeding the machine learning models that power everything from self-driving cars to medical diagnostics. The workers are invisible, but their labor is essential.
What gets me is the hypocrisy. We celebrate AI as this magical, autonomous technology. We talk about “training data” as if it’s just numbers on a server. But behind every label, every moderation decision, there’s a human being staring at a screen, often in a cramped office in Nairobi or Kampala, trying to make sense of the worst humanity has to offer.
And the companies? They’re not evil, just indifferent. They’ve optimized for cost and speed, and humans are the cheapest, fastest component. The trauma is just an externality, not their problem.
I don’t have a neat solution. Better pay, better mental health support, shorter shifts — these are obvious but won’t fix the fundamental issue. As long as we treat content moderation and data annotation as low-skill, high-volume grunt work, we’ll keep exploiting the most vulnerable.
What I do know is that the next time you use an AI tool, remember Mercy. Remember the dollar-an-hour workers who made it possible. And maybe, just maybe, ask yourself if the convenience is worth the cost.