EU DisinfoLab’s annual conference on disinformation is underway this week! While registration is closed, journalists or media organisations that would like to cover the conference can reach us at conference(at)disinfo.eu.

The Deterrence Project

Donald Trump’s 2016 election campaign categorized 3.5 million Black Americans as ‘Deterrence’, or voters it wanted to stay home on election day. Channel 4 has revealed a database containing data on almost 200 million Americans. “In 16 key battleground states, millions of Americans were separated by an algorithm into one of eight categories, also described as ‘audiences’, so they could then be targeted with tailored ads on Facebook and other platforms. One of the categories was named ‘Deterrence’, which was later described publicly by Trump’s chief data scientist as containing people that the campaign ‘hope don’t show up to vote’.” The vice president of the National Association for the Advancement of Colored People (NAACP) described the campaign as modern-day voter suppression. Read the story here.

Brand Safety’s new brand

Last Wednesday, the World Federation of Advertisers (WFA), along with YouTube, Facebook, Twitter and other marketing and advertising stakeholders, adopted a brand safety action plan. The plan is part of the WFA’s cross-industry initiative, the Global Alliance for Responsible Media (GARM), and emerges after 15 months of discussions. The stakeholders have finally agreed on common definitions for hate speech and other kinds of harmful content like misinformation. They also made new commitments, including a plan for monitoring the implementation of “safe and sustainable” advertising measures. This voluntary, self-regulatory initiative follows a massive advertiser boycott of Facebook earlier this year that included more than 1,000 companies, and continued efforts led by the Stop Hate for Profit campaign. It also comes as European legislators are poised to address the status quo of digital advertising on social media through direct regulation under the Digital Services Act. Agreeing on definitions for hate speech and harmful content is an important accomplishment, but political advertising currently seems to be beyond the scope of this initiative.

Measuring the Impact of Disinformation

Ben Nimmo, the director of investigations at Graphika, has published a comparative model for measuring the impact of Influence Operations (IOs). Objectively assessing the impact of influence campaigns has proven particularly challenging for investigators and operational researchers. Nimmo’s “Breakout Scale” divides influence operations into six categories, mainly based on their presence across multiple platforms and communities. He’s posted an overview of the framework on Twitter, with examples of each category from past operations, along with the reminder that the same operation may move between these categories. The Breakout Scale will certainly be useful for the research community, but it is also important to policy makers, influencers, and media professionals, who struggle to know when to cast light on and respond to influence operations, and when to avoid amplifying them excessively.
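For readers who prefer to see the idea in code, here is a minimal, illustrative sketch of the dimension the newsletter describes: rating an operation by how far it spreads across platforms and communities. The field names, thresholds, and category cut-offs below are assumptions made for illustration only, not the Breakout Scale’s actual definitions – see Nimmo’s paper for those.

```python
# Toy sketch of a "spread-based" rating for an influence operation.
# All categories and thresholds here are illustrative assumptions,
# NOT the Breakout Scale's real criteria.
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    platforms: int              # number of platforms where its content appeared
    communities: int            # number of distinct communities it reached
    mainstream_pickup: bool = False  # picked up beyond its original platforms

def breakout_category(op: Operation) -> int:
    """Assign a rough 1-4 spread rating (illustrative thresholds only)."""
    if op.mainstream_pickup:
        return 4  # broke out beyond the platforms it started on
    if op.platforms > 1 and op.communities > 1:
        return 3  # multiple platforms AND multiple communities
    if op.platforms > 1 or op.communities > 1:
        return 2  # spread along one dimension only
    return 1      # confined to a single platform and community

print(breakout_category(Operation("example-op", platforms=3, communities=2)))  # -> 3
```

The point of the sketch is simply that the rating depends on observable spread, not on guesses about persuasion – which is what makes this kind of framework tractable for researchers.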

In the news

  • Critics of Facebook’s Oversight Board are launching a ‘rival’ mechanism to oversee the overseers. The “Real Facebook Oversight Board” gathers many advocates of the Facebook advertising boycott, and so far includes the former president of Estonia and Facebook’s former head of election integrity. They’re also on TikTok.
  • On September 23rd, the United States Justice Department unveiled a proposal for Section 230 reform, issued by Attorney General William Barr. If you need help wading through the flurry of Section 230 proposals, Politico Morning Tech can help you out.

Studies

  • Jack Cable and Zoe Huczok at the Stanford Internet Observatory have documented a network of 94 Facebook pages with coordinated activity operated by the Guinean president’s political party, supporting his campaign for a third term in elections this October.
  • Stefano Cresci explores the role of bots on social networks in a longitudinal survey, “A Decade of Social Bot Detection”. In addition to extensive analysis, the paper offers suggestions for the focus of future deception detection techniques.

Good reads

  • Casey Newton’s long-form article on Facebook’s internal struggles, Mark in the Middle, is definitely not to be missed this week. The piece is woven together from leaked audio recordings from inside the company. BTW, Casey Newton is also leaving The Verge and launching a new venture.
  • Kaitlyn Tiffany in The Atlantic: Reddit has been more successful than other platforms at squashing QAnon conspiracy theories. Surely there can’t be just one reason, but its infrastructure and platform policies have certainly played a role.

Events and Announcements

  • Raphaël Glucksmann (S&D) has been elected chair of the European Parliament’s new Foreign Interference Committee. 
  • In a joint statement on September 23, the UN and partners called on Member States to combat disinformation while respecting free expression, and “to develop and implement action plans to manage the infodemic by promoting the timely dissemination of accurate information”.
  • Twitter will begin prompting users to read articles before they retweet them, based on findings from experiments earlier this year.
  • 2 October – Oslo Metropolitan University will be streaming their seminar: “A right to know. How can we ensure reliable information in times of crisis?” More info here.
  • 5 October – 4 November – The Knight Center is offering a new MOOC “Digital investigations for journalists: How to follow the digital trail of people and entities”. Learn more here.

Jobs