Late last year, the Electronic Frontier Foundation and Visualizing Impact launched an ambitious and much-needed project to gather data on social media content moderation. Last week OnlineCensorship.org bore its first fruit with an in-depth report based on 161 user submissions.
As we’ve reported before, social media companies are free to police content any way they see fit, since they are not government entities. But in anecdotal reports of content takedowns, companies often seem to ignore their own stated policies and to operate from a decidedly American standpoint on the appropriateness of content. Naturally, most companies also do not voluntarily release data on content takedowns or their reasoning for particular decisions. That’s where the Online Censorship project comes in, trying to bring a modicum of transparency and data visualization to the process.
In the report released last week, the majority of users who had content taken down were on Facebook. On that platform the most common reason cited for censorship was nudity, with 25 reports. The second most common was alleged noncompliance with Facebook’s “real name” policy, at 19 reports. Due to the frequency of those two supposed infractions, the report’s authors took a closer look at both.
Facebook’s “real name” rule has been honed somewhat since it made headlines by snaring drag queens, people using aliases for safety reasons, and Native Americans. Nevertheless, 16 users who submitted reports to Online Censorship had been suspended for “false identity.” Ten of those appealed the decision, implying they believed themselves to be in compliance with the rules, and three more said they would have appealed if they had known how. Four of the users who did appeal never got any response from Facebook whatsoever. One beleaguered user submitted three separate forms of identification but still did not get her account restored because Facebook judged them all to be false.
The other major snag for Facebook users reporting to Online Censorship was nudity, often in forms that are allegedly permitted under the site’s Community Standards. Even though the rules contain a hard-won exception for breastfeeding photos, for instance, the report says that “one case involved an image of a mother breastfeeding and resulted in both the takedown of the user’s photo and suspension of their account. An appeal by the user was unsuccessful in restoring either.”
Another exception, often covered in these pages, is for “photographs of paintings, sculptures, and other art that depicts nude figures.” Facebook claims such art is always allowed, but the report says one page administrator had a different experience:
The user operated a page that featured paintings of nude women, including paintings by famous artists such as Picasso, Renoir, and Dali, as well as some by contemporary artists. This user was aware that the rules allowed an exception for nude paintings. After a user or users reported the page, Facebook deleted it. The administrator reports that, although he appealed and Facebook responded within two days, the appeal was unsuccessful and a reason for the takedown was not provided.
Overall, 14 Facebook users who reported to Online Censorship had appealed their takedowns or suspensions for nudity, but only one had their content restored. More than other social media platforms, Facebook attempts to tailor its moderation of nudity regionally, claiming that “some audiences within our global community may be sensitive to this type of content–particularly because of their cultural background or age.” Instead, these attempts sometimes come across as Western-centric and paternalistic, as when the site repeatedly took down Indian comic artist Orijit Sen’s drawing of a topless Punjabi woman in the midst of dressing or undressing. Sen and his followers responded by sharing their favorite nudes from all of art history.
Most users who submitted reports to Online Censorship suspected they were the victims of targeted flagging by fellow users misusing community moderation tools for censorship purposes. Particularly in the case of Facebook, account suspensions often had a greater-than-expected effect on users’ online lives: those who had linked their Facebook logins to other sites such as Tinder and Instagram found themselves locked out of those sites as well. Ultimately, the report’s authors formulated a series of recommendations for social media companies to follow in their moderation practices, condensed below:
- We recommend that companies work with local communities to ensure that their global policies reflect linguistic diversity and other local needs.
- We recommend that companies devote greater human resources to their internal content moderation teams, continue to broaden their geographic and language reach, and pay particular attention to increasing quality control.
- We implore companies to improve their appeals processes to ensure that users whose content has been erroneously taken down can easily restore it.
- We recommend that other companies follow Twitter’s lead and submit government takedown requests to Lumen.
- We recommend that companies review how users are affected by such [long-term] bans and make changes to ensure that their professional lives and access to third-party platforms are not impacted.
- We recommend that companies review [“real-name” and nudity] policies and consider changing how they are implemented, particularly in non-public spaces (e.g., “secret” Facebook groups or private Instagram accounts).
To that, we would add one more recommendation for users themselves: please report any unjustified content takedowns to the good people of Online Censorship. No, they most likely will not be able to get your specific content restored, but over time they will build a database that can bring real pressure to bear on the online behemoths to be more transparent and thoughtful about moderation decisions. But to build that database, they need data!
Contributing Editor Maren Williams is a reference librarian who enjoys free speech and rescue dogs.