The Workers Who Keep the Web Safe in an Unsafe World
In early March, Tonei Glavinic, the Director of Operations for the Dangerous Speech Project, was filing a report on hate speech when they were greeted with an unusual message in the reporting interface. The message read: “We have fewer people to review reports because of the coronavirus (Covid-19) outbreak.”
Other users and researchers—including Data & Society’s Robyn Caplan—noticed these reporting flags as well. Facebook, it appeared, had sent its content moderators home and was relying on AI to help manage content and harassment reports.
Content moderators do an important and often unacknowledged job, from viewing extremely violent content to deciphering what is (or is not) harassment. Content moderators are, effectively, the cleaners of the internet.
This critical, human workforce reviews and moderates some of the worst content on the web. And if every data point of harassment is a real person’s experience, there is also a real person looking at that report. Facebook recently agreed to pay $52 million in a settlement with content moderators suffering from PTSD, a real risk that comes with the job.
Content moderators are exposed to harm even as they provide a necessary service #
Social networks need this human workforce because AI isn’t up to the job yet, especially when it comes to understanding the extreme nuances of harassment and abuse. What content moderators do really does require human skill. Artificial intelligence has an extremely difficult time deciphering context and harm in speech, and it has a well-known, documented history of amplifying societal bias. Even recent forays into understanding abusive online language can be fraught with bias; for example, DeepText (a variation of which is used by Instagram) inadvertently learned early on that the word ‘Mexican’ is negative and harmful.
The deeper issues surrounding content moderation as a workforce are starting to show: how companies outsource this work, and whom they consider ‘actual employees’ versus contract workers. Content moderators are a mix of full-time employees and contracted, third-party workers.
Acknowledging the dichotomy here is important: full-time employees at tech companies can have better pay, full-time healthcare, and better access to the internal resources, support, and documentation needed to work at the frontlines of the internet, harm, and policy. Contract workers don’t have access to most of those things, and it has been reported that the facilities employing them can have extremely poor working conditions and grueling hours.
This raises the question of how content moderators are able to work from home. Can contract employees bring their laptops with them? How is Facebook, or any platform, ensuring safety, not just for moderators but also for the security of the content being reviewed?
Suddenly, a work-from-home order comes with a lot of challenges, especially when dealing with abusive content and users’ data #
Sarah Roberts is a professor at UCLA and one of the leading researchers on content moderation. She has been studying content moderation and online systems for a decade.
“Where we are today, with the example of Facebook, is a grossly reduced workforce that tends to frontline, generalist content moderation in a product environment. So that really changes the landscape of what is possible. And users have encountered that change when they’ve perhaps gone to report content on various platforms, Facebook, Instagram, Twitter, at least have had notices up acknowledging the reduced workforce,” Roberts shared during our phone interview.
Roberts added that “every platform has its own apparatus for dealing with this [and] there is going to be variance from platform to platform. In the case of Facebook, on March 16, they did announce they would be sending mods home. It was unclear to me in those releases if that was globally or the ones in the United States. It’s important to understand it’s not the flipping of a switch, with the people or AI. A lot of the responses have probably been graduated with those factors.” But this further reliance on AI has created myriad challenges. Roberts described AI as a broad and blunt tool that also works in a ‘proactive’ way, meaning it catches content before people report it, which can lead to the removal of content that didn’t actually violate any standards.
Platforms are removing content that spreads misinformation about Covid-19. Facebook, across its newsroom sites, is updating blog posts with information about Covid-19 misinformation patterns and what it is doing to keep users safe with regard to content moderation. Platforms like Facebook, Twitter, and Instagram have even removed posts by Brazilian President Jair Bolsonaro that contained misleading or false information related to Covid-19.
Roberts recounted the experience of someone who reached out to her after reporting abusive hate speech. A reporting system operates almost like a phone tree: users first select the general category of harm or content issue, then the specific kind of harm, be it hate speech, targeted harassment, something including personal information, violence, etc. It’s a triage for ordering and labeling content to be handed to a content moderator. The user who reached out to Roberts got to the end of the flow and was told the content they had reported was ‘deprioritized.’
Roberts emphasized that to this user, and to others, this kind of content is urgent, serious, and important. “We are seeing users posting and responding [about] this reduced workforce in a way we haven’t seen before, articulating their feelings, frustrations and their experiences in seeing this content.”
When there’s a reduced workforce, harms like suicide and violence toward others take precedence. It becomes a disturbing “hierarchy of bad things”: what to prioritize, and when.
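To make the “phone tree” triage concrete, here is a minimal, hypothetical sketch in Python of how a report queue might label reports by harm category and rank them by urgency under a reduced workforce. The category names, priority values, and function names are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

# Lower number = higher urgency. Under a reduced workforce, imminent-harm
# categories get worked first and lower-priority reports may be deferred.
# These categories and values are purely illustrative.
PRIORITY = {
    "self_harm": 0,
    "violent_threat": 1,
    "hate_speech": 2,
    "targeted_harassment": 2,
    "personal_information": 3,
    "spam": 4,
}

@dataclass(order=True)
class Report:
    priority: int
    report_id: str = field(compare=False)
    category: str = field(compare=False)

def enqueue(queue: PriorityQueue, report_id: str, category: str) -> None:
    """Label the report by harm category and place it in the triage queue."""
    queue.put(Report(PRIORITY.get(category, 5), report_id, category))

if __name__ == "__main__":
    q = PriorityQueue()
    enqueue(q, "r-101", "hate_speech")
    enqueue(q, "r-102", "self_harm")
    enqueue(q, "r-103", "spam")
    while not q.empty():
        r = q.get()
        print(r.report_id, r.category, "priority", r.priority)
    # Prints r-102 (self_harm) first, then r-101, then r-103;
    # with fewer moderators, the tail of this queue is what gets "deprioritized".
```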
Roberts stressed: “This might be the first time that users, even users who have reported content before through these various systems on the platforms, have really encountered the fact that it’s a human workforce on the other side for those reports, and when they are gone, there is no one to pick up the slack.”
What can platforms do better? #
Right now, platforms are making a concerted effort to take down Covid-19 misinformation and disinformation, but as Dia Kayyali, program manager of Tech and Advocacy at Witness, pointed out over Zoom, lots of other content has had offline, real-world effects, too. “We’ll hear people say, if you could put this much resources into this where people’s lives are threatened, why can’t you put more resources in Myanmar or India where people’s lives are also threatened?” As Kayyali highlighted, these online harms, like current Hindu extremist posts on Twitter, result in offline harm too, just like Covid-19 misinformation.
Kayyali suggested focusing on content moderators and their needs, during this moment and beyond it. “[The platforms] should make content moderators full-time employees, they shouldn’t be contracting out.” More importantly, Kayyali explained, “I really hope they look at what are the different error rates between full-time employees and contracted content moderators, what are the self-reported mental health repercussions, what’s the difference between what full-time employees have access to and the contractors have access to.”
All employees, right now more than ever, need the support of the companies they work for or are contracted to. Content moderators, especially in this time and once it is over, need support, full-time employment, healthcare, mental health access, and good tools to do their jobs; otherwise, the safety of the internet could be at stake.