Facebook Violated Palestinians' Right to Free Speech, Report Commissioned by Meta Says

Meta has finally released the findings of an external report examining the impact of its content moderation policies on Israelis and Palestinians amid an escalation of violence in the Gaza Strip last May. The report, by Business for Social Responsibility (BSR), concluded that Facebook and Instagram violated the Palestinians' right to freedom of expression.

"Based on the data reviewed, the review of individual cases and related documents, and the engagement of external stakeholders, Meta's actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and ideas about their experiences as they happened," BSR wrote in its report.

The report also notes that "a review of individual cases" showed that some Israeli accounts were also wrongly banned or restricted during this time. But the report's authors highlight several systemic issues that they say disproportionately affect Palestinians.

According to the report, Arabic content was subject to greater enforcement, and "proactive detection rates for potentially infringing Arabic content were significantly higher than proactive detection rates for potentially infringing Hebrew content." The report also notes that Meta had an internal tool to detect "hostile speech" in Arabic, but not Hebrew, and that Meta's systems and moderators were less accurate when evaluating Palestinian Arabic.

As a result, many user accounts were hit with "false warnings" and posts were wrongly deleted by Facebook and Instagram. "These warnings remain in place for users who have not appealed erroneous content removals," the report notes.

Meta commissioned the report following a recommendation from the Oversight Board last fall. In a response to the report, Meta says it will update some of its policies, including several aspects of its Dangerous Organizations and Individuals (DOI) policy. The company says it "has initiated a policy development process to revisit our definitions of praise, support, and representation in our DOI policy," and is "working on ways to make the user experience of our DOI strikes simpler and more transparent."

Meta also notes that it has "begun experimenting with creating a dialect-specific Arabic classifier" for written content, and that it has changed its internal process for managing the keywords and "block lists" that affect content removals.

Meta also says it is "evaluating the feasibility" of a recommendation that it warn users when it places a "feature limitation and search limitation" on their accounts after they have received a warning. Instagram users have long complained that the app bans them or reduces their account's visibility when they post about certain topics. Those complaints intensified last spring, when users reported being barred from posting about Palestine or having the reach of their posts reduced. At the time, Meta blamed an unspecified "glitch." BSR's report notes that the company also implemented emergency "break the glass" measures that temporarily restricted any "repeatedly shared content."

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you purchase something through one of these links, we may earn an affiliate commission. All prices correct at time of publication.
