Johnsen’s experience is common in the pro-choice activist community. Most of the people who spoke to WIRED say their content appeared to have been removed automatically by AI, rather than after being reported by another user.
Activists also worry that even if content is not removed entirely, its reach might be limited by the platform’s AI.
While it’s nearly impossible for users to discern how Meta’s AI moderation is being implemented on their content, last year the company announced it would be deemphasizing political and news content in users’ News Feed. Meta did not respond to questions about whether abortion-related content is categorized as political content.
Just as the abortion activists who spoke to WIRED experienced varying degrees of moderation on Meta’s platforms, so too did users in different locations around the world. WIRED experimented with posting the same phrase, “Abortion pills are available by mail,” from Facebook and Instagram accounts in the UK, US, Singapore, and the Philippines in English, Spanish, and Tagalog. Instagram removed the phrase when it was posted in English from the US, where abortion was newly restricted in some states after last week’s court decision, and from the Philippines, where it is illegal. But a post written in Spanish from the US and a post written in Tagalog from the Philippines both stayed up.
The phrase remained up on both Facebook and Instagram when posted in English from the UK. When posted in English from Singapore, where abortion is legal and widely accessible, the phrase remained up on Instagram but was flagged on Facebook.
Ensley told WIRED that Reproaction’s Instagram campaigns on abortion access in Spanish and Polish were both very successful and saw none of the issues that the group’s English-language content has faced.
“Meta, in particular, relies pretty heavily on automated systems that are extremely sensitive in English and less sensitive in other languages,” says Katharine Trendacosta, associate director of policy and advocacy at the Electronic Frontier Foundation.
WIRED also tested Meta’s moderation with a Schedule I substance that is legal for recreational use in 19 states and for medicinal use in 37 states, sharing the phrase “Marijuana is available by mail” on Facebook in English from the US. The post was not flagged.
“Content moderation with AI and machine learning takes a long time to set up and a lot of effort to maintain,” says a former Meta employee familiar with the organization’s content moderation practices, who spoke on condition of anonymity. “As circumstances change, you need to change the model, but that takes time and effort. So when the world is changing quickly, those algorithms are often not operating at their best, and may enforce with less accuracy during periods of intense change.”
However, Trendacosta worries that law enforcement could flag content for removal as well. In Meta’s 2020 transparency report, the company noted that it had “restricted access to 12 items in the United States reported by various state Attorney Generals related to the promotion and sale of regulated goods and services, and to 15 items reported by the US Attorney General as allegedly engaged in price gouging.” All the posts were later reinstated. “The states’ attorneys general being able to just say to Facebook, ‘Take this stuff down,’ and Facebook doing it, even if they ultimately put it back up, that’s incredibly dangerous,” Trendacosta says.