In the run-up to the 2013 Italian elections, a social media post exposing the corruption of parliament went viral. Italian politicians were privately certain that, win or lose, they would be financially secure at the taxpayer's expense. Parliament had quietly passed a special welfare bill designed to protect policy-makers by guaranteeing them a generous unemployment package should they lose their seat in the upcoming election. The bill, proposed by Senator Cirenga, allocated a staggering €134 billion to this political unemployment fund. The Italian Senate had voted 257 in favour and 165 in abstention.
The post caused considerable and understandable uproar. It was covered in several leading newspapers and cited by mainstream political organizations. But there was one problem: the entire story was fake. Not even a good fake at that. For those interested in Italian politics, there were a number of obvious problems with the story. First of all, there is no Senator Cirenga. The number of votes doesn’t work either, because Italy doesn’t even have that many senators. And the incredible sum would have accounted for roughly 10% of Italy’s GDP.
So what happened? How did such an obvious fake fool so many people?
Walter Quattrociocchi, the head of the Laboratory of Computational Social Science at IMT Lucca in Italy, has been studying the phenomenon of misinformation online. His work helped to inform the World Economic Forum’s latest Global Risks Report. We spoke with him to find out more.
Why is this happening?
Before the web, you got your information from magazines, television and the newspapers. Now anyone can create a blog, have a Tumblr page or post their opinions online. From there, you can spread that information rapidly through Twitter, Facebook and a whole smorgasbord of other social media platforms.
The problem is that while traditional media had editors, producers and other filters before information went public, individual publishing has no filter. You simply say what you want and put it out there.
The result is that everyone can produce or find information consistent with their own belief system. An environment full of unchecked information maximizes the tendency to select content by confirmation bias.
Recent studies focusing on misinformation online have pointed out that selective exposure to specific content leads to «echo chambers», in which users tend to shape and reinforce their beliefs.
What is an echo chamber?
An echo chamber is an isolated space on the web, where the ideas being exchanged essentially just confirm one another. It can be a space of like-minded people sharing similar political views, or a page about a specific conspiracy theory. Once inside one of these spaces, users are sharing information that is all very similar, basically «echoing» each other.
Content belonging to different topics is consumed in a very similar way by users. Likes and shares remain more or less the same across topics.
If we focus on the comments section, however, we notice a remarkable difference across topics. Users polarized on geopolitics are the most persistent in commenting, whereas those focused on diet are less persistent.
We also found that users «jump» from one topic to another. Once they begin to «like» something, they do it more and more, in a snowball effect. Once engaged in a conspiracy corpus, a user tends to join the overall conversation and begins to «jump» from one topic to another. The probability increases with user engagement (the number of likes on a single specific topic): each new like on the same conspiracy topic increases the probability of moving on to a new one by 12%.
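As a rough illustration (not the researchers' actual model), the snowball dynamic described above can be sketched as a compounding probability, where each new like multiplies the chance of jumping to a new topic by 1.12. The base probability here is a made-up figure chosen only for the example.

```python
# Illustrative sketch of the "snowball" dynamic: each new like on a
# conspiracy topic increases the probability of jumping to a new topic
# by 12% (multiplicatively), capped at certainty. The base probability
# of 5% is hypothetical.
def jump_probability(base_p: float, likes: int) -> float:
    """Probability of moving on to a new topic after `likes` likes."""
    return min(1.0, base_p * (1.12 ** likes))

# Engagement compounds quickly: a modest starting probability grows
# substantially after a couple of dozen likes.
for n in (0, 6, 12, 18, 24):
    print(n, round(jump_probability(0.05, n), 3))
```

The point of the sketch is only that a fixed 12% increase per like compounds geometrically, which is why engagement in one conspiracy topic so reliably spills over into others.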
What kind of rumours are spreading?
Pages about global conspiracies, chemtrails, UFOs, reptilians. One of the most publicized conspiracies is the link between vaccines and autism.
These alternative narratives, often in contrast to the mainstream one, proliferate on Facebook. The peculiarity of conspiracy theories is that they tend to reduce the complexity of reality. Conspiracy theories create (or reflect) a climate of disengagement from mainstream society and from officially recommended practices – e.g. vaccinations, diet, etc.
Among the most fascinating social dynamics observed is trolling. Before, trolls were mainly people who just wanted to stir up a crowd, but the practice has evolved. Trolls today act to mock the «believe anything» culture of these echo chambers. They basically attack contradictions through parody.
Trolls’ activities range from controversial and satirical content to the fabrication of purely fictitious statements that are wildly unrealistic and sarcastic. For instance, these trolls aggregate in groups and build Facebook pages as caricatures of conspiracy news. A recent example was the fake publication of «findings» showing that chemtrails contained traces of Viagra.
What makes their activity so interesting is that, quite often, these jokes go viral and end up being used as evidence in online debates by political activists.
How have you been studying this phenomenon?
On Facebook, likes, shares, and comments allow us to understand social dynamics from a totally new perspective. Using this data, we can study the driving forces behind the diffusion and consumption of information and rumours.
In our study of 2.3 million individuals, we looked at how Facebook users consumed different information at the edge of political discussion and news during the 2013 Italian elections. Pages were categorized according to the kind of content they reported on:
1. Mainstream media
2. Online political activism
3. Alternative information sources (topics that are neglected by science and mainstream media)
We followed 50 public pages and their users’ interactions (likes, comments and shares) for six months.
Each action has a particular meaning. A like gives positive feedback; a share expresses the desire to increase a post’s visibility; and a comment expands the debate.
What we found was that neither a post’s topic nor the quality of its information had any effect on the outcome. Posts containing unsubstantiated claims, posts about political activism and regular news all showed very similar engagement patterns.
So people are reacting to posts based on their beliefs, regardless of where those posts originated from?
Exactly. It’s not that people are reacting the same way to all content, but that everyone is agreeing within their specific community.
People are looking for information which will confirm their existing beliefs. If today an article comes out from the WHO supporting your claims, you like it and share it. If tomorrow a new one comes out contradicting your claims, you criticise it, question it.
The results show that we are back to «echo chambers»: selective exposure followed by confirmation bias.
To verify this, we performed another study, this time with a sample of 1.2 million users. We wanted to see how information related to very distinct narratives – i.e. mainstream scientific and conspiracy news – is consumed and shaped by communities on Facebook.
What we found is that polarized communities emerge around distinct types of content and consumers of conspiracy news tend to be extremely focused on specific content.
Users who like posts do so on the pages of one specific category 95% of the time. We also looked at commenting, and found that polarized users of conspiracy news are more focused on posts from their community. They are more prone to like and share posts from conspiracy pages.
On the other hand, people who consume scientific news share and like less, but comment more on conspiracy pages.
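A minimal sketch of the kind of polarization measure described above: a user counts as polarized toward a category when the overwhelming majority of their likes (here, at least 95%, echoing the figure mentioned) fall on pages of that category. The data shapes and threshold placement are illustrative assumptions, not the study's actual code.

```python
# Illustrative polarization measure: classify a user by the category
# that dominates their likes, and flag them as polarized when that
# category accounts for at least 95% of their activity.
from collections import Counter

def polarization(user_likes):
    """Given a list of category labels (one per like), return the
    dominant category and its share of the user's likes."""
    counts = Counter(user_likes)
    category, n = counts.most_common(1)[0]
    return category, n / len(user_likes)

def is_polarized(user_likes, threshold=0.95):
    """True when a single category dominates the user's likes."""
    _, share = polarization(user_likes)
    return share >= threshold

# A user with 19 conspiracy likes out of 20 sits exactly at the
# 95% threshold and counts as polarized.
likes = ["conspiracy"] * 19 + ["science"]
print(is_polarized(likes))  # → True
```

Under a measure like this, the finding reads naturally: polarized conspiracy users concentrate their likes and shares inside their own community, while science consumers cross over mainly to comment.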
Our findings indicate that there is a relationship between beliefs and the need for cognitive closure. This is the driving factor behind digital wildfires.
Does that mean we know what will go viral next?
Viral phenomena are generally difficult to predict. But this insight does at least allow us to understand which users are more prone to interact with false claims.
We wanted to understand whether such polarization in the consumption of content affects the structure of a user’s friendship network. In another study, «Viral Misinformation: The Role of Homophily and Polarization», we found that a user’s engagement in a specific narrative goes hand in hand with the number of friends having a similar profile.
That provides an important insight about the diffusion of unverified rumours. It means that through polarization, we can detect where misleading rumours are more likely to spread.
But couldn’t we combat that by spreading better information?
No. In fact, there is evidence that this only makes things worse.
In another study, we found that people interested in a conspiracy theory are likely to become more involved in the conversation when exposed to «debunking». In other words, the more exposure to contrasting information a person is given, the more their consumption pattern is reinforced. Debunking within an echo chamber can backfire and reinforce people’s bias.
In fact, distrust in institutions is so high and confirmation bias so strong that the Washington Post has recently discontinued its weekly debunking column.
What can be done to fight misinformation?
Misinformation online is very difficult to correct. Confirmation bias is extremely powerful. Once people have found «evidence» of their views, external and contradicting versions are simply ignored.
One proposal to counteract this trend is algorithm-driven solutions. For example, Google is developing a trustworthiness score to rank the results of queries. Similarly, Facebook has proposed a community-driven approach in which users can flag false content to correct the newsfeed algorithm.
This issue is controversial, however, because it raises fears that the free circulation of content may be threatened and the proposed algorithms might not be accurate or effective. Often users denounce attempts to debunk false information, such as the link between vaccination and autism, as acts of misinformation.