Anatomy of a disinformation campaign: The who, what and why of deliberate falsehoods on Twitter

9 February 2021 | By Africa Check

In recent years, the UK’s Oxford Internet Institute has tracked the manipulation of public opinion online. Since 2018, South Africa has featured on a growing list of countries where social media is used to spread disinformation and computational propaganda. Twitter is a prominent platform for social media manipulation in South Africa, the institute found.

In this three-part series, Africa Check and the Atlantic Council’s Digital Forensic Research Lab (DFRLab) take a closer look at disinformation on Twitter in South Africa. Part one focuses on disinformation actors, their behaviour and content. In part two, we ask: How much damage can a hashtag cause? And finally, we consider what individual social media users can do about disinformation.

The people behind disinformation – deliberate falsehoods spread online – exploit an information ecosystem that “prioritises what is popular over what is true” to cause widespread harm.

The root of the problem is that political conversations take place on platforms built for viral advertising, says Renée DiResta, research manager at Stanford University’s Internet Observatory, a US programme that focuses on the abuse of social media.

“[Social media algorithms] will show you what you want to see, but they don’t have any kind of value judgment and this is where we see things like radicalisation beginning to be an increasing problem because the recommendation engine does not actually understand what it is suggesting.”

The foreign influence operations of countries like Russia, and increasingly China, loom large in discussions about online disinformation. But the threat of domestic disinformation is just as real.

The recently released 2020 edition of the Oxford Internet Institute’s Global Inventory of Organised Social Media Manipulation identified 77 countries where government or political party actors used disinformation on social media to manipulate public opinion. These disinformation campaigns are mostly run within the country, the researchers told Africa Check.

Twelve African countries are among the 77.

They include South Africa, where government agencies, politicians and political parties, private contractors, citizens and influencers are involved in social media manipulation. This ranges from attacking the opposition to trolling those who disagree into silence.

Examples are the ANC’s pre-election “boiler room”, which reportedly organised “party activists” to attack opposition leaders on Twitter, and the Economic Freedom Fighters’ Twitter “troll army”, which targeted journalists.

Amelie Henle, research assistant at the Oxford Internet Institute’s Computational Propaganda Research Project, says the Gupta-backed online influence operation, in which the now-defunct British public relations firm Bell Pottinger had a hand, was the biggest example of disinformation the project had identified in South Africa in recent years.

Disinformation ultimately damages democracy. University of Washington associate professor Kate Starbird writes: “While a single disinformation campaign may have a specific objective – for instance, changing public opinion about a political candidate or policy – pervasive disinformation works at a more profound level to undermine democratic societies.”

A disinformation ABC

One way to look at online disinformation is to break it down into three “vectors”: manipulative actors, deceptive behaviour and harmful content.

The three are “often intertwined”, explains Camille François, chief innovation officer at network analysis company Graphika.

But each campaign may have a different primary vector.

For example, a manipulative actor might distribute “uplifting, empowering” content, but the way it is distributed – the behaviour – makes it disinformation.

“We’ve seen this in many information operations: the first set of posts in the first months is often just putting out this uplifting, engaging, kind of feel-good content, because you want to amass followers and create an audience around those accounts,” says François. “And then once you have an audience … you can start ‘weaponising’ these accounts and use them for more divisive and more political content.”

Dr Danil Mikhailov, executive director of the data-science-for-good platform data.org, describes this as the investment of time capital to acquire social capital, as people like and follow your content. This causes search engines and social media algorithms to amplify the content, earning cultural capital or recognition that is then used to influence communities.

Manipulative actors

Disinformation actors design their campaigns to hide their identities and intentions, writes François.

Manipulative actors include the creators of sock-puppet accounts, which use false identities to spread or boost disinformation, and trolls who bully those who get in the way of its spread.

You might have encountered a troll when questioning the veracity of a tweet. What they do better than bots (automated accounts that mimic human behaviour) is launch personal attacks to silence people who question whether something is true, according to a report on information disorder.

The report – by Dr Claire Wardle, co-founder of the anti-misinformation non-profit First Draft, and media researcher Hossein Derakhshan – says the actors in a disinformation campaign don’t necessarily share motives. “For example, the motivations of the mastermind who ‘creates’ a state-sponsored disinformation campaign are very different from those of the low-paid ‘trolls’ tasked with turning the campaign’s themes into specific posts.”

Possible reasons for creating and spreading mis- and disinformation include:

  • Money – pushing traffic to false-news websites for advertising income is an example
  • The desire to connect with an online “tribe”, such as fellow supporters of a political party or a cause
  • Wanting to influence public opinion by, for example, discrediting a political opponent

The money motive was at work when a South African municipal employee posed as a racist white woman on Twitter in 2020 to drive traffic to his websites.

Seeking to influence public opinion featured prominently in the Radical Economic Transformation (RET) disinformation campaign on Twitter in South Africa in 2016 and 2017.

The African Network of Centers for Investigative Reporting (ANCIR) found that the RET network set out to undermine the institutions that ran South Africa’s economy, pushed the “white monopoly capital” narrative to divert attention from state capture, and attacked critics of the Gupta family and former president Jacob Zuma.

ANCIR analysed 200,247 of the RET network’s tweets. A full 98% of these were retweets, showing how a network of fake accounts was used to amplify messages “to give the illusion that the content they are sharing resonates with a wider group”.

Typically, a fake account would tweet something, which was then boosted by more fake accounts. Prominent – and real – Twitter users who were tagged in some of these tweets created a “bridge” between the network of fake accounts and the rest of Twitter.
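To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of measurement described above: given a dump of tweets from a suspected network, compute how much of its activity is retweets and which outside accounts it tags most often (candidate “bridges” to the wider platform). All account names and field names are invented for illustration; this is not ANCIR’s actual methodology.

```python
# Hypothetical sketch: measure retweet share and find the accounts a
# suspected network tags most often. Field names are illustrative only.
from collections import Counter

tweets = [
    # each record: the posting account, whether it is a retweet,
    # and the accounts it mentions (tags)
    {"author": "fake_account_1", "is_retweet": True,  "mentions": ["real_influencer"]},
    {"author": "fake_account_2", "is_retweet": True,  "mentions": ["real_influencer"]},
    {"author": "fake_account_3", "is_retweet": False, "mentions": []},
]

# Share of the network's activity that is simply amplification (retweets).
retweet_share = sum(t["is_retweet"] for t in tweets) / len(tweets)
print(f"Retweets: {retweet_share:.0%} of the network's activity")

# Accounts tagged most often by the network are candidate "bridges"
# between the fake cluster and the rest of Twitter.
bridge_candidates = Counter(m for t in tweets for m in t["mentions"])
print("Most-tagged accounts:", bridge_candidates.most_common(3))
```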

There is still an RET community on South African Twitter. A 2020 analysis of 14 million tweets by the Superlinear blog found that it had merged with the Economic Freedom Fighters community. These groups have displaced “the mainstream media from the centre of the conversation”, the blog, run by a data scientist, says.

Deceptive behaviour

“Deceptive behaviours have a clear goal,” says François. “To enable a small number of actors to have the perceived impact that a greater number of actors would have if the campaign were organic.”

In January 2021, Twitter and Facebook removed accounts that used deceptive behaviour to benefit Ugandan president Yoweri Museveni ahead of the country’s election.

Facebook said a network linked to Uganda’s ICT ministry was involved in “coordinated inauthentic behaviour”.

The platform defines this as groups of pages or people who “work together to mislead others about who they are or what they are doing … When we take down one of these networks, it’s because of their deceptive behaviour. It’s not because of the content they are sharing.”

International news agency AFP reported that the Ugandan network’s tactics included using “fake and duplicate accounts to manage pages, comment on other people’s content, impersonate users [and] re-share posts in groups to make them appear more popular than they were”.

The DFRLab identified related “suspicious behaviour” on Twitter, including accounts that responded to negative tweets about Museveni with identical copied and pasted text.
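As an illustration, here is a small, hypothetical sketch of one simple signal of this kind of behaviour: many distinct accounts replying with exactly the same text. The reply data and field names are invented, and this is not DFRLab’s actual tooling; it only shows the basic idea of grouping identical text and counting how many different accounts posted it.

```python
# Hypothetical sketch: flag reply text that several different accounts
# posted verbatim, a simple sign of copy-and-paste coordination.
from collections import defaultdict

replies = [
    {"author": "acct_a", "text": "He has done so much for this country!"},
    {"author": "acct_b", "text": "He has done so much for this country!"},
    {"author": "acct_c", "text": "He has done so much for this country!"},
    {"author": "acct_d", "text": "I disagree with this take."},
]

authors_by_text = defaultdict(set)
for r in replies:
    # Normalise lightly so trivial spacing/case changes don't hide duplicates.
    key = " ".join(r["text"].lower().split())
    authors_by_text[key].add(r["author"])

# Report any reply text posted word-for-word by three or more accounts.
for text, authors in authors_by_text.items():
    if len(authors) >= 3:
        print(f"{len(authors)} accounts posted identical text: {text!r}")
```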

Read: How the #PutSouthAfricansFirst disinformation network coordinated to create the perception of a legitimate movement.

Online disinformation actors use both manual and automated techniques to deceive.

In 2020, the Oxford Internet Institute identified 58 countries where “cyber troops” – government or political party actors – had used bots to manipulate public opinion. Fake human-run social media accounts were even more widespread, found in 81 countries.

Based in part on an analysis of media reports, the institute found that cyber troops in South Africa used bots, fake human-run and hacked or stolen accounts.

The second part of this series gives a snapshot of disinformation on Twitter in South Africa, but a disinformation campaign typically spans multiple platforms.

Disinformation could, for example, arrive on open social networks like Twitter after making its way from anonymous online spaces to closed or semi-closed groups (such as WhatsApp or Twitter direct messages) and then to conspiracy communities on Reddit or YouTube, explains First Draft’s Wardle.

Journalists might pick it up from an open social network like Twitter, particularly if a politician or influencer repeats the falsehood, she says. Malicious actors often bank on this media amplification (more on this in part three).

If an operation is effective, writes DiResta, “sympathetic real people” will find the message in their feeds and amplify it too.

These “unwitting agents” might even be in the majority, says Starbird.

A Russian influence operation exposed in 2020 went as far as deceiving both the people who consumed its content and some of those who created content for the campaign. Ghanaians linked to a human rights NGO appeared to be unaware that they were part of an operation targeting black communities in the United States.

Dubbed Double Deceit, the operation is an example of the evolving tactics disinformers use and the challenge these shifts pose, says François.

South Africans were involved in an operation, linked to a financier of the Russian Internet Research Agency troll farm, that targeted the Central African Republic and Southern Africa. It used locals from the Central African Republic and South Africa to manage activities and create content, “likely to avoid detection and help appear more authentic,” according to Facebook.

Harmful content

Content that is manipulated to deceive can take many forms.

As part of the #PutSouthAfricansFirst disinformation campaign, a photo taken in a Nigerian hospital in 2019 was used out of context on Twitter to falsely claim that South Africans were sleeping on hospital floors in 2020 because foreign nationals were occupying the beds.

This is an example of false context: where content is taken out of its original context and recirculated.

The information disorder report highlights other types of false content. These include falsely creating the impression that content was created by an official source (imposter content) and making something up (fabricated content).

To increase the likelihood that a message will be shared, it might include a “powerful visual component”.

The Media Manipulation Casebook identifies several tactics that use visuals. These include memes, misinfographics (false or misleading infographics) and evidence collages (“compiling information from multiple sources into a single, shareable document, usually as an image, to persuade or convince a target audience”).

Disinformation actors have also had success with content that triggers an emotional response.

For this reason, one thing a user can do to avoid falling for disinformation is to pause before taking action if a social media post makes them scared or angry.

Get more advice on dealing with disinformation in part three of this series.

As DiResta reminds us: in an information war, our minds are the territory. “If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.”

Researched by Liesl Pretorius

This report was written by Africa Check, a non-partisan fact-checking organisation. View the original piece on their website.