Post by account_disabled on Mar 5, 2024 9:16:31 GMT
The world's largest social media company, Facebook, under scrutiny over policy abuses around the November US presidential election, published its content moderation estimates in its quarterly report.
Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which were proactively identified, compared to 22.5 million the previous quarter.
The company defines "take action" as removing content, covering it with a warning, disabling accounts, or escalating it to outside agencies.
This summer, civil rights groups organized a broad advertising boycott to try to pressure Facebook to act against hate speech.
The company agreed to disclose hate speech metrics, calculated by examining a representative sample of content viewed on Facebook, and undergo an independent audit of its enforcement history.
In a call to reporters, Facebook's head of security and integrity, Guy Rosen, said the audit would be completed "over the course of 2021."
The Facebook Metric
The Anti-Defamation League, one of the groups behind the boycott, said Facebook's new metric still lacked enough context for a full assessment of its performance.
"We do not yet know from this report exactly how many pieces of content are being flagged and whether or not action was taken. That data is important, as there are many forms of hate speech that are not being removed, even after being flagged," said Todd Gutnick, ADL spokesman.
Twitter and YouTube, owned by Alphabet Inc., do not disclose comparable prevalence metrics.
The measures taken
Rosen reported that, from March 1 through the November 3 election, Facebook removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies.
In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, a change from public comments Facebook CEO Mark Zuckerberg had previously made about what should be allowed.
Facebook also said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.
Earlier this week, Zuckerberg and Twitter Inc. CEO Jack Dorsey were grilled by Congress about their companies' content moderation practices, from Republican accusations of political bias to decisions on violent speech.
Elizabeth Culliford and Katie Paul report for Reuters that Zuckerberg told an all-staff meeting that Trump's former White House adviser, Steve Bannon, had not violated enough company policies to warrant suspension when he urged the beheading of two American officials.
The company has also been criticized in recent months for allowing large Facebook groups that share false election claims and violent rhetoric to gain traction.
Facebook said its proactive detection rates for content that violates its rules, meaning content found before users report it, increased in most areas due to improvements in its artificial intelligence tools and the expansion of its detection technologies to more languages.
In a blog post, Facebook said the COVID-19 pandemic continued to affect its content review staff, although some enforcement metrics were returning to pre-pandemic levels.
An open letter from more than 200 Facebook content moderators published Wednesday accused the company of forcing these workers to return to the office and "unnecessarily risking" lives during the pandemic.