As Israeli forces illegally banished Palestinians from their homes throughout the Occupied Territories, attacked Islamic places of worship, brutally repressed demonstrators, and bombed civilian infrastructure in the besieged Gaza Strip, many people "turn[ed] to social media to document, raise awareness, and condemn the latest cycle" of abuses, Human Rights Watch (HRW) noted.
According to HRW:
Instagram, which is owned by Facebook, removed posts, including reposts of content from mainstream news organizations. In one instance, Instagram removed a screenshot of headlines and photos from three New York Times opinion articles for which the Instagram user added commentary that urged Palestinians to "never concede" their rights. The post did not transform the material in any way that could reasonably be construed as incitement to violence or hatred.
In another instance, Instagram removed a photograph of a building with a caption that read, "This is a photo of my family's building before it was struck by Israeli missiles on Saturday, May 15, 2021. We have three apartments in this building." The company also removed the reposting of a political cartoon whose message was that Palestinians are oppressed and not fighting a religious war with Israel.
All of these posts were removed for containing "hate speech or symbols," according to Instagram. These removals suggest that Instagram is restricting freedom of expression on matters of public interest. The fact that these three posts were reinstated after complaints suggests that Instagram's detection or reporting mechanisms are flawed and result in false positives. Even when social media companies reinstate wrongly suppressed material, the error impedes the flow of information concerning human rights at critical moments.
The preceding examples documented by HRW represent a small sliver of reported restrictions.
"Users and digital rights organizations also reported hundreds of deleted posts, suspended or restricted accounts, disabled groups, reduced visibility, lower engagement with content, and blocked hashtags," said HRW. The group "reviewed screenshots from people who were sharing content about the escalating violence and who reported restrictions on their accounts, including not being able to post content, livestream videos on Instagram, post videos on Facebook, or even like a post."
Although HRW "was not able to verify or determine that each case constituted an unjustified restriction due to lack of access to the underlying data needed for verification, and because Facebook refused to comment on specific details of various cases and accounts citing privacy obligations," the organization said that "the range and volume of restrictions reported warrant an independent investigation."
Deborah Brown, a senior digital rights researcher at HRW, said in a statement that "instead of respecting people's right to speak out, Facebook is silencing many people arbitrarily and without explanation, replicating online some of the same power imbalances and rights abuses we see on the ground."
On September 14, the Facebook Oversight Board recommended that the social media corporation authorize an independent probe to determine whether its content moderation in Arabic and Hebrew is being applied without bias, and share the report.
Progressive advocates are particularly concerned about Facebook's reliance on automated tools, which HRW called "notoriously poor at interpreting contextual factors."
"Automated content moderation," HRW warned, "can lead to overbroad limits on speech and inaccurate labeling of speakers as violent, criminal, or abusive. Automated content moderation of content that platforms consider to be 'terrorist and violent extremist' has in other contexts led to the removal of evidence of war crimes and human rights atrocities from social media platforms, in some cases before investigators know that the potential evidence exists."
Along with other groups, HRW also objected to the fact that "in addition to removing content based on its own policies, Facebook often does so at the behest of governments."
"The Israeli government has been aggressive in seeking to remove content from social media," according to HRW, which added:
The Israeli Cyber Unit, based within the State Attorney's Office, flags and submits requests to social media companies to "voluntarily" remove content. Instead of going through the legal process of filing a court order based on Israeli criminal law to take down online content, the Cyber Unit makes appeals directly to platforms based on their own terms of service. A 2018 report by Israel's State Attorney's Office notes an extremely high compliance rate with these voluntary requests, 90% across all platforms.
Human Rights Watch is not aware that Facebook has ever disputed this claim. In a letter to Human Rights Watch, the company stated that it has "one single global process for handling government requests for content removal." Facebook also provided a link to its process for assessing content that violates local law, but that does not address voluntary requests from governments to remove content based on the company's terms of service.
"Acceding to Israeli governmental requests raises concern," HRW said, "since Israeli authorities criminalize political activity in the West Bank using draconian laws to restrict peaceful speech and to ban more than 430 organizations, including all the major Palestinian political movements, as Human Rights Watch has documented. These sweeping restrictions on civil rights are part of the Israeli government's crimes against humanity of apartheid and persecution against millions of Palestinians."
HRW stressed that Facebook's "acknowledgment of errors and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, or adequately explain why they occurred in the first place."
Facebook, HRW added, should comply with the Oversight Board's recommendation to commission an external "investigation into content moderation regarding Israel and Palestine, particularly in relation to any bias or discrimination in its policies, enforcement, or systems, and to publish the investigation's findings." Facebook has 30 days from the day the decision was issued to respond to the board's recommendations.
This content originally appeared on Common Dreams - Breaking News & Views for the Progressive Community and was authored by Kenny Stancil.