How to Fix Facebook, According to Facebook Employees


Facebook rejects the allegation. “At the heart of these stories is a premise which is false,” said spokesperson Kevin McAlister in an email. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie.”

On the other hand, the company recently admitted to the exact criticism leveled in the 2019 documents. “In the past, we didn’t address safety and security challenges early enough in the product development process,” it said in a September 2021 blog post. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it.” McAlister pointed to Live Audio Rooms, launched this year, as an example of a product rolled out under this process.

If that’s true, it’s a good thing. Similar claims made by Facebook over the years, however, haven’t always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI Can’t Fix Everything

On Facebook and Instagram, the value of a given post, group, or page is primarily determined by how likely you are to stare at, Like, comment on, or share it. The higher that likelihood, the more the platform will recommend that content to you and feature it in your feed.
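In ranking terms, that logic amounts to scoring each candidate post by its predicted engagement and sorting the feed by that score. The sketch below is only a simplified illustration of the idea, not Facebook’s actual system; the event weights and probability figures are invented for the example.

```python
# Illustrative sketch of engagement-based feed ranking, NOT Facebook's real algorithm.
# The candidate posts, predicted probabilities, and weights below are hypothetical.

CANDIDATE_POSTS = [
    {"id": "post_a", "p_like": 0.30, "p_comment": 0.05, "p_share": 0.02, "p_dwell": 0.60},
    {"id": "post_b", "p_like": 0.10, "p_comment": 0.20, "p_share": 0.15, "p_dwell": 0.40},
]

# Hypothetical weights: interactions that take more effort count for more.
WEIGHTS = {"p_like": 1.0, "p_comment": 5.0, "p_share": 10.0, "p_dwell": 0.5}

def engagement_score(post):
    """Weighted sum of the predicted probabilities of each engagement event."""
    return sum(WEIGHTS[event] * post[event] for event in WEIGHTS)

# The higher the predicted engagement, the earlier the post appears in the feed.
ranked_feed = sorted(CANDIDATE_POSTS, key=engagement_score, reverse=True)
for post in ranked_feed:
    print(post["id"], round(engagement_score(post), 2))
```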

But what gets people’s attention is disproportionately what enrages or misleads them. This helps explain why low-quality, outrage-baiting, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that “low integrity Pages” get most of their followers through News Feed recommendations. Another recounts a 2019 experiment in which Facebook researchers created a dummy account, named Carol, and had it follow Donald Trump and a few conservative publishers. Within days the platform was encouraging Carol to join QAnon groups.

Facebook is aware of these dynamics. Zuckerberg himself explained in 2018 that content gets more engagement as it gets closer to breaking the platform’s rules. But rather than reconsidering the wisdom of optimizing for engagement, Facebook’s answer has mostly been to deploy a mix of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely considered world-class; a February blog post by chief technology officer Mike Schroepfer claimed that, for the last three months of 2020, “97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it.”

The internal documents, however, paint a grimmer picture. A presentation from April 2020 notes that Facebook removals were reducing the overall prevalence of graphic violence by only about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by The Wall Street Journal, is even more pessimistic. In it, company researchers estimate “that we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it.”
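The two sets of numbers are not contradictory, because they use different denominators: Schroepfer’s 97 percent is the share of removed hate speech that AI flagged before any user reported it, while the 3-5 percent estimate is the share of all hate speech on the platform that gets actioned at all. The back-of-the-envelope calculation below uses invented totals purely to show how both figures can hold at once.

```python
# Hypothetical totals, for illustration only: they show how a 97% "proactive
# detection" rate and a ~3-5% "actioned" rate can both be true simultaneously.

total_hate_speech_posts = 100_000   # all violating posts on the platform (assumed)
actioned_posts = 4_000              # posts actually removed or demoted (~4%, assumed)
ai_flagged_first = 3_880            # of those actioned, flagged by AI before any user report

proactive_rate = ai_flagged_first / actioned_posts       # denominator: removed content
action_rate = actioned_posts / total_hate_speech_posts   # denominator: all violating content

print(f"Proactive detection rate: {proactive_rate:.0%}")   # -> 97%
print(f"Share of hate speech actioned: {action_rate:.0%}") # -> 4%
```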


