
Facebook whistleblower documents offer new revelations about Jan. 6 response

By Chris Looft and Layla Ferris, ABC News Oct 25, 2021 | 7:24 AM



(WASHINGTON) — On the day of the Jan. 6 insurrection, Facebook saw a surge in posts on its platforms inciting violence around the certification of the U.S. presidential election result and the storming of the Capitol.

How the social media giant prepared for that day, and how it responded to the sudden onslaught of misleading information and violent rhetoric on both Facebook and Instagram, is detailed in internal documents obtained by ABC News and a group of news organizations.

The documents were disclosed to the U.S. Securities and Exchange Commission by Facebook whistleblower Frances Haugen, a former employee, and provided to Congress in redacted form by Haugen’s legal counsel. A congressional staffer shared them with ABC News.

In her filing, Haugen alleged that Facebook had misled investors and the public about potential harms associated with the platform.

In response to a series of Wall Street Journal articles based largely on the documents provided by Haugen, Facebook’s vice president of global affairs, Nick Clegg, in a statement rejected the idea that Facebook “systematically and willfully ignores” research that is “inconvenient for the company.”

One of the documents, updated on Jan. 7, shows that Facebook analyzed some of the Capitol riot’s impact on its platforms.

The document shows there was a spike in the volume of reports from Facebook and Instagram users complaining about posts inciting violence on Jan. 6. There were around seven times as many hourly reports about posts containing incitement to violence as in the previous week, according to the document.

Another set of documents created during the events of Jan. 6 shows a wider range of restrictions explored by Facebook to limit potentially harmful content and mitigate violence and incitement. The documents indicate that more severe restrictions, known internally as “break glass” measures, had been active earlier, in 2020, and then removed or rolled back. Several of the restrictions listed as having been previously rolled back focused on groups, such as freezing commenting on some group posts or preventing groups from changing their names to include terms that aimed to delegitimize the election result.

Ciaran O’Connor, a disinformation analyst with the London-based think tank Institute for Strategic Dialogue, said keeping these types of measures in place might have curbed the extremism on Jan. 6 that was inflamed by a “dangerous mix of disinformation and conspiracy theories.”

“This action was entirely irresponsible and illustrative of wider failings in Facebook in wrongly prioritizing platform growth over safety,” he said.

When asked about the rollback of the measures, a Facebook official stressed that they were only part of Facebook’s preparations for the election, and that specific metrics were used to determine whether or not to disable them. Measures that were disabled, the official said, were rolled back gradually.

During the chaos of Jan. 6, Facebook considered re-enabling some of its old strategies and implementing new ones, according to the documents, to respond to the risk of “violence and incitement” in connection with the day’s events.

Some of the proposed restrictions, such as demoting content promoting the storming of the Capitol, were described in the documents as a “first line of reactive defense” on the day.

A Jan. 19 report by the Tech Transparency Project linked Facebook’s groups feature to the growth of the “Stop The Steal” movement after the 2020 election. Some members of those groups made calls to overthrow the U.S. government by force to reverse the election result. Facebook removed the first large “Stop The Steal” group, but copycats and similar groups by other names remained on the platform through the Jan. 6 insurrection, according to the report.

In another document, which BuzzFeed News obtained and published in April, Facebook researchers said that harmful movements, such as Stop the Steal, were coordinated efforts that ultimately “helped incite the Capitol Insurrection.”

O’Connor suggested there are potential dangers with Facebook’s groups feature. “Groups have proven to be hubs for misinformation and harmful content, where there often [are] no gatekeepers and false information is allowed to flourish,” O’Connor said.

When asked for comment on the potential harms of groups, a Facebook official pointed to an Oct. 20 update to the feature, as well as previous measures including changes to group recommendations and using independent fact-checkers to flag misinformation in groups.

After the events of Jan. 6, Facebook took further safety measures, according to O’Connor: limiting the amount of political content shown to some users, including in the U.S.; suspending Trump’s page; and giving group administrators tools to limit toxic or harmful conversations.

Facebook said on Oct. 20 that it will start “demoting” content posted in groups by people who have broken community guidelines anywhere on the platform. The aim, according to Facebook’s update, is to keep rule breakers from reaching others in their communities.

Another internal Facebook document provides insight into how “demotion” — a way of limiting the exposure of content thought likely to break the platform’s rules — was used in an effort to minimize the spread of potentially harmful information in connection with the events of Jan. 6. The document was posted on Feb. 19.

According to Facebook’s Transparency Center, “problematic or low quality content” may be “demoted” in an effort to reduce the number of users who see it.

The practice of demotion was aided by custom-built algorithmic “pipelines.” On Jan. 6, two of those pipelines tracked false claims that circulated widely on Facebook amid the storming of the Capitol: that Antifa protesters were responsible, and that then-President Donald Trump had invoked the Insurrection Act, according to the document.
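The documents do not describe how the pipelines were built, but as a rough illustration only, a demotion pipeline of this kind could conceptually work as in the sketch below: match a post against tracked false claims, then shrink its feed-ranking score rather than remove the post. The regex patterns, the demotion weight, and the function name here are assumptions for illustration, not details from the documents.

```python
# Hypothetical sketch of a demotion "pipeline": match a post against
# tracked false claims, then shrink its feed-ranking score so fewer
# users see it. The patterns and weight below are illustrative only.
import re

# Loosely modeled on the two claims the document says were tracked on
# Jan. 6; these regexes are assumptions, not Facebook's actual rules.
TRACKED_CLAIMS = {
    "antifa_responsible": re.compile(
        r"\bantifa\b.*\b(did this|behind|responsible)\b", re.IGNORECASE),
    "insurrection_act_invoked": re.compile(
        r"\b(signed|invoked)\s+the\s+insurrection\s+act\b", re.IGNORECASE),
}

DEMOTION_FACTOR = 0.1  # assumed: demoted posts keep 10% of their score


def rank_score(post_text: str, base_score: float) -> float:
    """Return a feed-ranking score, demoted if the post matches a tracked claim."""
    if any(p.search(post_text) for p in TRACKED_CLAIMS.values()):
        return base_score * DEMOTION_FACTOR  # demote rather than remove
    return base_score


print(rank_score("He signed the Insurrection Act last night!", 1.0))  # 0.1
print(rank_score("Peaceful day at the park.", 1.0))                   # 1.0
```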

The Insurrection Act, passed in 1807, allows the president to deploy military troops to respond to domestic unrest and disasters. It was last invoked in 1992 by the George H.W. Bush administration during the Los Angeles riots. Trump threatened to invoke the Act in 2020 amid protests over the police killing of George Floyd.

The team responsible for these anti-misinformation pipelines encountered challenges, according to the document, including questions about which posts constituted misinformation.

Employees involved in this effort aimed to avoid “false positives,” posts flagged despite containing no misinformation, by using “exclusion terms” like “MAGA losers” to rule out posts that criticized, rather than promoted, the storming of the Capitol.

The document says employees sought guidance from colleagues to resolve “ambiguities,” such as whether a claim that former President Trump would invoke the Insurrection Act was misinformation.

Facebook’s employees, according to the document, judged that claims that Trump would invoke the Act around Jan. 6 were allowed, while false claims that he had already done so were considered misinformation.

“In close collaboration with Misinfo Policy, we determined that statements in the future are not considered misinfo. However, users posting the phrase ‘he signed the Insurrection Act’ was considered unambiguous enough and judged as misinfo,” wrote the author of the document, who has not been identified.
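As a rough illustration of the classification rules described above, the sketch below encodes the future-tense versus past-tense distinction and an exclusion term. The phrase lists and function name are hypothetical, drawn only loosely from the document’s examples; the actual system was not disclosed.

```python
# Hypothetical encoding of the rules described in the document: flag
# posts asserting the Insurrection Act was already signed, but let
# future-tense claims through, and skip posts whose wording (e.g.
# "MAGA losers") signals criticism of the riot rather than promotion.
# Term lists are illustrative, not Facebook's actual lists.
INCLUDE_PHRASES = ["he signed the insurrection act"]   # past tense: misinfo
FUTURE_PHRASES = ["will invoke the insurrection act"]  # future tense: allowed
EXCLUSION_TERMS = ["maga losers"]                      # criticism, not promotion


def is_misinfo(post_text: str) -> bool:
    text = post_text.lower()
    if any(term in text for term in EXCLUSION_TERMS):
        return False  # critical of the riot: rule out as a "false positive"
    if any(phrase in text for phrase in FUTURE_PHRASES):
        return False  # statements about the future were not treated as misinfo
    return any(phrase in text for phrase in INCLUDE_PHRASES)


assert is_misinfo("BREAKING: he signed the Insurrection Act!")
assert not is_misinfo("Trump will invoke the Insurrection Act tomorrow")
assert not is_misinfo("These MAGA losers claim he signed the Insurrection Act")
```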

In a statement issued on the evening of Jan. 6, the company said that in addition to demoting content that likely violated its rules, it had begun removing certain types of posts, including those that praised the storming of the Capitol or that called for further violence.

Facebook later said in a Jan. 11 statement that it would remove content containing the phrase “stop the steal.”

Haugen told the SEC that the company elects to demote content, despite knowing demotion is “an ineffective response,” because it is concerned about being criticized for accidentally removing “false positives,” or posts that do not violate its rules.

Facebook executives frequently cite the protection of free speech on the platform as a bedrock principle, such as in an October 2019 speech by CEO Mark Zuckerberg that called for its protection despite its “messiness.”

In her Oct. 5 Senate testimony, Haugen criticized Facebook’s “closed design,” which she alleged hides information from researchers and regulators.

“As long as Facebook is operating in the shadows, hiding its research from public scrutiny, it is unaccountable,” Haugen said to a Senate Commerce subcommittee.

O’Connor said that despite the changes it implemented after Jan. 6, the company still has work to do.

“Facebook, and other online companies, must do better and take action to limit the dangers of falsified information and extremist actors on their platforms,” he said.

Facebook’s Oversight Board said on Oct. 11 that it would meet with Haugen “over the coming weeks” to discuss the claims she made about the platform’s content-moderation decisions.

Facebook, along with other social media companies like Twitter and Snapchat, was the subject of a request for records announced on Aug. 27 by the House Select Committee on the Jan. 6 attack.

On Oct. 22, another former Facebook employee, who has not been identified, was revealed to have submitted whistleblower documents to the SEC. The former employee told the Washington Post that Facebook prioritizes profit over safety on its platforms, and failed to take adequate action to address issues including illegal activity like drug dealing and antiquities trafficking.

A Facebook spokesperson said in response to the Washington Post’s report that the story was “beneath” the newspaper and set a “dangerous precedent” by relying on a single source, but did not deny the second whistleblower’s claims.

Copyright © 2021, ABC Audio. All rights reserved.