
Facebook civil rights audit urges ‘mandatory’ algorithmic bias detection

A woman looks at the Facebook logo on an iPad in this photo illustration.
Image Credit: REUTERS/Regis Duvignau/Illustration

An independent review of Facebook’s progress on civil rights issues has found that the company’s efforts to detect algorithmic bias fall dangerously short, leaving users vulnerable to manipulation.

According to the audit released earlier today, Facebook’s efforts to detect algorithmic bias remain primarily in pilot projects conducted by only a handful of teams. The authors of the report, civil rights attorneys Laura Murphy and Megan Cacace, note that the company is increasingly reliant on artificial intelligence for such tasks as predicting which ads users might click on and weeding out harmful content.

But these tools, along with Facebook’s other tentative efforts in areas like the diversity of its AI teams, must go much further and faster, the report says. And while the auditors focused solely on Facebook during their two-year review, any company embracing AI would do well to heed their warnings on algorithmic bias.

“Facebook has an existing responsibility to ensure that the algorithms and machine learning models that can have important impacts on billions of people do not have unfair or adverse consequences,” the report says. “The Auditors think Facebook needs to approach these issues with a greater sense of urgency.”

The report comes as Facebook faces a historic advertising boycott. The “Stop Hate for Profit” campaign is backed by hundreds of advertisers, who have halted spending on the platform to demand Facebook take bolder steps against racism, misogyny, and disinformation.

Earlier this week, Facebook CEO Mark Zuckerberg met with civil rights groups but insisted his company would not respond to financial pressure, leaving attendees disappointed.

In a blog post, COO Sheryl Sandberg sought to score points by claiming Facebook is the “first social media company to undertake an audit of this kind.” She also nodded toward the timing of the report, which was commissioned two years ago. Her post — “Making Progress on Civil Rights — But Still a Long Way to Go” — emphasized Facebook’s view that it is fighting the good fight.

“There are no quick fixes to these issues — nor should there be,” Sandberg wrote. “This audit has been a deep analysis of how we can strengthen and advance civil rights at every level of our company — but it is the beginning of the journey, not the end. What has become increasingly clear is that we have a long way to go. As hard as it has been to have our shortcomings exposed by experts, it has undoubtedly been a really important process for our company. We would urge companies in our industry and beyond to do the same.”

The authors, while noting many of Facebook’s internal efforts, were less complimentary.

“Many in the civil rights community have become disheartened, frustrated, and angry after years of engagement where they implored the company to do more to advance equality and fight discrimination, while also safeguarding free expression,” the authors wrote.

The report dissects Facebook’s work on civil rights accountability, elections, the census, content moderation, diversity, and advertising. But it also gives special attention to the subject of algorithmic bias.

“AI is often presented as objective, scientific, and accurate, but in many cases it is not,” the report says. “Algorithms are created by people who inevitably have biases and assumptions, and those biases can be injected into algorithms through decisions about what data is important or how the algorithm is structured, and by trusting data that reflects past practices, existing or historic inequalities, assumptions, or stereotypes. Algorithms can also drive and exacerbate unnecessary adverse disparities … As algorithms become more ubiquitous in our society, it becomes increasingly imperative to ensure that they are fair, unbiased, and non-discriminatory, and that they do not merely magnify preexisting stereotypes or disparities.”
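
The dynamic the auditors describe, where bias enters through training data rather than explicit rules, is straightforward to demonstrate. The toy sketch below is not from the audit, and every variable in it is invented for illustration: a model is trained on labels that encode a historically biased process, and even though the sensitive attribute is withheld from training, a correlated proxy feature lets the model reproduce the disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)              # sensitive attribute, withheld from training
zip_code = group + rng.normal(0, 0.3, n)   # proxy feature correlated with group
merit = rng.normal(0, 1, n)                # legitimate signal

# Historical labels encode a biased process: group 0 was favored at equal merit.
label = ((merit + 0.8 * (group == 0) + rng.normal(0, 0.2, n)) > 0.4).astype(int)

X = np.column_stack([zip_code, merit])     # note: `group` itself is never a feature
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The gap persists because zip_code proxies for group membership.
```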

The authors highlighted Facebook’s Responsible AI (RAI) efforts, led by a team of “ethicists, social and political scientists, policy experts, AI researchers, and engineers focused on understanding fairness and inclusion concerns associated with the deployment of AI in Facebook products.”

Part of that RAI work involves developing tools and resources that can be used across the company to ensure AI fairness. To date, the group has developed a “four-pronged approach to fairness and inclusion in AI at Facebook.”

  1. Create guidelines and tools to limit unintentional bias.
  2. Develop a fairness consultation process.
  3. Engage with external discussions on AI bias.
  4. Diversify the AI team.

As part of the first pillar, Facebook has created the Fairness Flow tool to assess algorithms by detecting unintended problems with the underlying data and spotting flawed predictions. But Fairness Flow is still in a pilot stage, and the teams with access use it on a purely voluntary basis. Late last year, Facebook also began a fairness consultation pilot project to allow teams that detect a bias issue in a product to reach out internally to teams with more expertise for feedback and advice. While the authors saluted these steps, they also urged Facebook to expand such programs across the company and make their use mandatory.
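
Fairness Flow’s internals aren’t public, but the report’s description suggests the general shape of such a tool: compute a model’s error metrics separately for each user segment and surface segments where the rates diverge. Here is a minimal sketch of that idea, with all function names and thresholds invented for illustration:

```python
import numpy as np

def per_group_metrics(y_true, y_pred, groups):
    """Accuracy and false positive rate for each group in `groups`."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        neg = yt == 0
        out[g] = {
            "n": int(m.sum()),
            "accuracy": float((yt == yp).mean()),
            "fpr": float((yp[neg] == 1).mean()) if neg.any() else float("nan"),
        }
    return out

def flag_gaps(metrics, key="fpr", max_gap=0.05):
    """Return True when the spread of `key` across groups exceeds `max_gap`."""
    vals = [m[key] for m in metrics.values() if not np.isnan(m[key])]
    return max(vals) - min(vals) > max_gap
```

A production system would add confidence intervals, multiple fairness metrics, and intersectional slices, but the core operation of comparing model behavior across segments is the same.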

“Auditors strongly believe that processes and guidance designed to prompt issue-spotting and help resolve fairness concerns must be mandatory (not voluntary) and companywide,” the report says. “That is, all teams building models should be required to follow comprehensive best practice guidance, and existing algorithms and machine learning models should be regularly tested. This includes both guidance in building models and systems for testing models.”
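
In engineering terms, “regularly tested” maps naturally onto the automated checks teams already run in continuous integration. A hypothetical sketch, assuming a model object with a scikit-learn-style predict method and a tolerance chosen purely for illustration:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    neg = y_true == 0
    return float((y_pred[neg] == 1).mean())

def test_fpr_parity(model, X, y_true, groups, max_gap=0.03):
    """Fail the build when false positive rates diverge across groups.

    `max_gap` is an illustrative tolerance, not a value from the report.
    """
    preds = model.predict(X)
    rates = [false_positive_rate(y_true[groups == g], preds[groups == g])
             for g in np.unique(groups)]
    assert max(rates) - min(rates) <= max_gap, (
        f"FPR gap {max(rates) - min(rates):.3f} exceeds tolerance {max_gap}")
```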

The company has also created an AI Task Force to lead initiatives for improving employee diversity. Facebook is now funding a deep learning course at Georgia Tech to increase the pipeline of underrepresented job candidates. It’s also in discussions with several other universities to expand the program. And it’s tapping nonprofits, research organizations, and advocacy groups to broaden its hiring pool.

But again, the review found these initiatives to be too limited in scope and called for an expansion of hiring efforts, as well as greater training and education across the company.

“While the Auditors believe it is important for Facebook to have a team dedicated to working on AI fairness and bias issues, ensuring fairness and non-discrimination should also be a responsibility for all teams,” the report says. “To that end, the Auditors recommend that training focused on understanding and mitigating against sources of bias and discrimination in AI should be mandatory for all teams building algorithms and machine-learning models at Facebook and part of Facebook’s initial onboarding process.”
