
HIGHLIGHTS
* Cooperation between platforms and authorities has advanced, but monitoring what happens on social networks is still a challenge for authorities like Brazil’s Supreme Electoral Court (TSE)

* Role of emerging platforms is a point of concern

* Platforms lack transparency in detailing actions being taken in preparation for elections

* Platforms should have plans in place for the possibility of offline violence, like the invasion of the U.S. Capitol

The relationship between Brazilian authorities and social media platforms has evolved significantly in recent years, but, on the eve of the 2022 electoral cycle, there is still a crucial point to be addressed: how to monitor what happens within these platforms?

That is the assessment of Katie Harbath, a former Facebook executive who was in Brazil earlier this year and gave Núcleo an interview after her visit to the country.

In 2021, Harbath left Facebook (now called Meta), where she held the position of Director of Public Policy and led the elections team. After her departure, she founded Anchor Change, a civic technology company that develops strategies and solutions in technology, politics, and business, reflecting on global issues related to democracy, elections, and online civic engagement.

CONTEXT

In February, Brazil’s Supreme Electoral Court (TSE) signed an agreement with platforms to combat electoral disinformation.

Meta (which includes Facebook, Instagram, and WhatsApp), Twitter, Google (which is also responsible for YouTube), Kwai, and TikTok signed documents committing to take action against disinformation. Telegram stayed out of the agreement, but signed on to the program on March 25, a week after Supreme Court Justice Alexandre de Moraes ordered the service blocked in Brazil.

This year's October elections will be the first general elections to take place under this formal partnership between the platforms and the TSE; there was no such initiative in 2018. The agreement, signed in mid-February, falls under the Countering Disinformation Program, created by the court in August 2019 and made permanent in August 2021.

According to Harbath, as important as it is to have a commitment from the platforms, information monitoring is still a problem to be solved, especially with regard to "emerging" platforms, which have come to play a more central role in recent years.

"This is challenging, especially considering that Facebook is not accepting new people to its Crowdtangle platform and that you have lots of new platforms that are hard to monitor and to understand how they are being used, like Telegram, TikTok, and Instagram, becoming new players and being used by people to get news and information," Harbath said in an interview with Núcleo.

The challenge remains of how to help external actors actually monitor what is happening around these various platforms

In Harbath's assessment, the role of these emerging platforms, which may not have been through past elections and lack a track record in dealing with the challenges that arise in this context, is one of the main points of concern for the October elections in Brazil.

Harbath also points to the gap between what platforms promise and how much they actually do to deliver on those promises.

As an example, she cited a study by the Department of Public Policy Analysis at the Getúlio Vargas Foundation (FGV-Dapp), in partnership with the TSE, published in February, which analyzed the spread of links expressing distrust in the electoral system between November 2020 and January 2022. An additional survey conducted by the researchers at the request of the Folha de S. Paulo newspaper revealed that most of those links circulated without any labels.

There is also a significant gap between the tools that Meta and other platforms have announced (so far) for the Brazilian elections and those deployed in the last election cycle in the United States.

So far I have only seen the labels being extended to other countries, but I would like to see more of these tools and features being expanded to other places in the world where they make sense

BAD ACTORS

In October 2019, over a year before election day in the U.S., Meta had already announced a series of measures to protect the integrity of the presidential elections.

For Brazil, Núcleo could not find a similar announcement from the company summarizing the measures implemented for the elections. Scattered public announcements were made in December, when the company said it would start adding labels to election-related posts on Facebook and Instagram, and in February, when it announced the creation of a reporting channel exclusive to the TSE.

More broadly, Harbath believes that transparency is lacking not only from Meta but also from other platforms, in terms of demonstrating what actions are being taken to protect civic integrity.

Speaking from the perspective of a company insider, the former Facebook director acknowledges that there is a delicate balance between being transparent and giving away too much information, such that it can be used by "bad actors" to engage in harmful behavior without actually violating the rules.

In 2018, The New York Times, The Guardian, and Channel 4 News revealed that the firm Cambridge Analytica had, since 2014, improperly collected personally identifiable information from 87 million Facebook users. This data was used to profile voters and influence public opinion in various countries and civic events. Steve Bannon, former adviser and strategist to former U.S. President Donald Trump, was a member of Cambridge Analytica's board.

According to her, this is one of the points where legislation matters, since it can offer companies legal cover for sharing data with academics and civil society, for example. Bill 2630, in its rapporteur's final version presented last week, determines that platforms must facilitate academic institutions' access to disaggregated data for research.

OFFLINE VIOLENCE

The storming of the U.S. Capitol in January 2021 forced Facebook and other social media platforms to make unprecedented decisions. Katie Harbath was at the company when the events unfolded on January 6, and part of her decision to leave Meta stemmed from the company's stance on that day and afterward.

In an interview with the Wall Street Journal, Harbath said the company should reflect on "whether it could have done more to prevent violence of the kind that broke out on January 6 and what roles its platforms have played in making politics more violent."

The former director added to the WSJ that, unless there is urgent intervention by governments and technology platforms, social media is likely to "incubate future political violence, such as that which led to the invasion of the Capitol on January 6, 2021."

During her visit to Brazil in February, the topic of offline violence was back on Harbath's agenda (that is, if it ever was off her agenda). Among the concerns she heard from people she met in Brazil — members of the government, civil society, and the private sector — the risk of violence came to the surface, she told Núcleo.

LESSONS AND SUGGESTIONS

According to Harbath, there are lessons to be learned from the invasion of the Capitol, as well as from what is happening now in Ukraine.

The first lesson that she considers important is for both platforms and governments to be more transparent about what they are seeing and what is worrying them, given that this can help people prepare for what may come.

"It's important to think in depth and have more steps thought out about what to do if there is content that could lead to violence," Harbath said. "Maybe taking down or keeping content on the air are not the only two options, maybe there are other options, like reducing reach, applying labels, and thinking up different things."

Finally, Harbath says that it is necessary to make sure there are policies in place that define when to consider deplatforming or suspending certain profiles, "depending on what they're talking about or how they're trying to fuel any kind of violence."

Just hours after the Capitol invasion, Facebook began deplatforming then-U.S. President Donald Trump for inciting violence. A 24-hour suspension turned into an indefinite ban. In June 2021, the company announced that Trump's suspension would last two years and released a new protocol for such exceptional cases.

The protocol followed a recommendation by the Oversight Board, which criticized the company's initial decision to apply an indefinite penalty with no standards. Twitter went further: on January 8 it permanently suspended Trump's account, citing the risk of further incitement to violence.

Harbath draws attention to a certain lack of understanding about policy-making at social media companies. She explains that exceptional decisions like suspending Trump or allowing calls for violence against Russian officials and soldiers are not made with the intention of being permanent.

"These are decisions that companies don't necessarily make with the intention of being permanent — at least from FB's point of view, they are policies made to adapt to situations that exist in the world," she told Núcleo.

I think this is something we will see more and more of, because the world is changing so fast, and what is acceptable or not changes constantly, so it is hard to have permanent policies for these things

As an example, Harbath cited changes linked to COVID-19.

According to Harbath, platforms will need to be given more flexibility so that they can react better to these events.

"We need to be willing to allow them to change their minds depending on how a situation unfolds and we need to understand that they will do that, and that it is not an opportunity to attack them," she explained.

ELECTION WAVE

In an op-ed published in The New York Times in late January, Harbath argued that Facebook is not ready for the wave of elections that will take place in 2022. She argues that the deadline is tight, but that there is still time for the company to publicly announce the measures it intends to implement to combat disinformation and hate speech around the world.

One of the issues that lacks attention, the former director writes in her article, is that the company needs “people with country-specific language and culture expertise to make tough decisions about speech or behavior that might violate the platform’s rules.”

This lack of cultural and linguistic nuance outside the United States comes up frequently in the Facebook Papers.

Asked how this shortcoming affects the company's efforts during election periods, Harbath said it is not so much about the content moderation teams, which she believes are well staffed and equipped in this regard, but rather the product development teams.

"The people who are building the tools, like machine learning classifiers, to try to proactively identify potentially problematic content so that it can be flagged for content moderators in the first place. That’s where I feel like some of the nuance of that conversation gets lost," Harbath said.

REPORTING AND WRITING LAÍS MARTINS
EDITING ALEXANDRE ORRICO
Translated by Clara Simas Ferraz