RON
Rumours on Networks

Project and goals

The proposed project aims to increase our understanding of the factors that allow rumours and misinformation to spread on (online) social networks. Given the global coronavirus pandemic and the rise of the so-called “post-truth” society, the harm that misinformation may cause has been widely recognized, and it can hardly be overstated.
Political examples of this harm abound. While the literature has come a long way in uncovering the various factors that influence the diffusion of misinformation, the process is still not fully understood. Yet only a deeper understanding will allow us to fight the persistence of misinformation and rumours.

This project aims to employ theoretical, empirical, and simulation tools to: 

  • uncover motivations to share unverified information;
  • model how ideological identity groups may form on a network;
  • analyze how specific network characteristics interact with information characteristics to either hinder or help the diffusion of information.

This project builds on earlier work, funded through an MSCA fellowship, and is part of a long-term research agenda whose aim is to find efficient ways to deal with the spread of misinformation without impinging on freedom of expression.

Papers

Merlino L. P., Tabasso N., "Optimal Verification of Rumours in Networks"

This paper started long before the RON project, but was finally completed as part of it. In it, we ask whether more verification of information is always a good thing for society. We deliberately look at a context where verification is not only possible but perfect: if incorrect information (which we summarise as "rumours") is verified, the truth is revealed. We have in mind questions such as whether climate change is a real threat, whether there is a link between HIV and AIDS, or whether the MMR vaccine for children causes autism, i.e., questions on which the scientific consensus is clear (yes, yes, and no) and this information is accessible. Moreover, we focus on how many people are correctly informed, as ignorance of the truth (such as the HIV-AIDS link) may be just as harmful as believing something false.

To answer whether, in such cases, more verification is always better, we make two assumptions:

  1. people ignore messages that contradict their worldview, unless these messages are verified (e.g., if you generally believe that climate science is conducted correctly, you tend to ignore messages that claim climate change is a hoax);
  2. how much verification happens in a society can be influenced by policy makers, for example through information literacy programmes in schools, guidelines on how to spot misinformation, and so on.

We show that these two conditions lead to multiple outcomes:

  1. if verification rates are high enough, rumours will disappear, and only truthful information will survive in society;
  2. rumours can in fact increase the amount of correct information in a society: when a person hears a rumour that disagrees with their worldview, they ignore it and remain uninformed unless they verify it. This means that some people become correctly informed precisely because they heard a rumour and verified it;
  3. when there is only little communication – so that only a few people become informed of anything at all – and policy makers have only an intermediate budget available to help people verify, it is in fact possible that more verification leads to fewer people being correctly informed.
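The second result can be made concrete with a small back-of-the-envelope calculation. The Python sketch below is a toy illustration only, not the paper's model: the hearing probabilities `p` and `r`, the verification rate `v`, and the truth-aligned share `a` are all invented parameters.

```python
# Toy illustration (not the paper's model): a rumour can *raise* the share
# of correctly informed people when verification reveals the truth.

def correct_share(p, r, v, a):
    """Share of the population that ends up correctly informed.

    p: prob. of hearing the truthful message
    r: prob. of hearing the rumour
    v: prob. of verifying a message that contradicts one's worldview
    a: share of people whose worldview agrees with the truth
    """
    # Truth-aligned people accept the truth if they hear it; if they only
    # hear the rumour, it contradicts their view, so with prob. v they
    # verify it and thereby learn the truth.
    truth_aligned = a * (p + (1 - p) * r * v)
    # Rumour-aligned people find the truthful message contradicts their
    # view, so they become informed only if they hear it *and* verify it.
    rumour_aligned = (1 - a) * p * v
    return truth_aligned + rumour_aligned

with_rumour = correct_share(p=0.4, r=0.5, v=0.6, a=0.5)
no_rumour = correct_share(p=0.4, r=0.0, v=0.6, a=0.5)
print(with_rumour > no_rumour)  # prints True
```

With these invented numbers, letting the rumour circulate raises the correctly informed share (0.41 vs 0.32), because some truth-aligned people who would otherwise have heard nothing verify the rumour and learn the truth.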

Our results do not claim that it is always good for a society to allow incorrect information to circulate. Rather, they highlight the complexity of the relationship between truthful and incorrect messages, and how important it is for policy makers to understand as much as possible about the variables surrounding the diffusion of any particular rumour, as even a policy as seemingly straightforward as increasing verification may backfire.

Ghiglino C., Tabasso N., “Endogenous Identity in a Social Network”

DOI: https://doi.org/10.48550/arXiv.2406.10972

Homophily, the tendency to connect to others who are similar to ourselves, is a prevalent aspect of human behavior. In online communication, however, it can lead to "echo chambers": people receive similar messages over and over again and become convinced that the view of their community is also the prevalent one in society overall, or that something is correct (since many people told them about it) even when it is not (because they essentially hear the same message repeatedly, just from different members of their group). In this paper, my co-author and I do not focus on information flows once groups have formed, but on the forces that might lead such groups to form in the first place.

Here, we draw on identity theory and show how people have incentives to choose the same identity as their neighbours, which in turn leads them to coordinate their actions. We derive conditions under which multiple identities coexist in society – for example different social classes, or groups with different worldviews on certain topics – just as we observe in reality. Our results support arguments that mixing across social classes can be an important driver of social mobility: when people have friends of different identities, it becomes easier for them to choose the identity that best matches their own characteristics, as doing so no longer automatically means being different from their friends.
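The incentive to conform to one's neighbours can be sketched as a toy best-response rule. The code below is an invented illustration, not the paper's model: the two-identity setup and the payoff weights `w_self` and `w_peer` are assumptions made purely for the example.

```python
# Toy best-response sketch (invented, not the paper's model): an agent
# picks an identity that balances (i) matching their own characteristics
# and (ii) matching the identities already chosen by their neighbours.

def best_response(own_type, neighbour_ids, w_self=1.0, w_peer=0.6):
    """Choose identity 0 or 1 maximising self-match plus peer conformity."""
    def payoff(identity):
        self_match = w_self if identity == own_type else 0.0
        peer_match = w_peer * sum(1 for n in neighbour_ids if n == identity)
        return self_match + peer_match
    return max((0, 1), key=payoff)

# A type-1 agent whose friends all chose identity 0 conforms to them...
print(best_response(own_type=1, neighbour_ids=[0, 0, 0]))  # prints 0
# ...but with mixed friendships, choosing identity 1 no longer means
# differing from all friends, so the agent follows their own type.
print(best_response(own_type=1, neighbour_ids=[0, 1, 1]))  # prints 1
```

The second call illustrates the social-mobility point above: once an agent's friendships are mixed, the conformity motive no longer overrides the agent's own characteristics.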

Currarini S., Tabasso N., Ursino G., “Verification Behavior and Homophily, An Experiment”
(preliminary title)

Echo chambers have often been cited as increasing the spread of rumours. Theory suggests that, when people are able to verify the information they receive, they verify messages that confirm their worldview less than messages that contradict it. This means that an increase in homophily will lower verification in society: more homophily means being more likely to meet others who share your worldview and who are therefore more likely to convey messages in line with that view. However, if people take this into account, they should verify information more as homophily increases. In this paper, we use a lab experiment to test whether people truly verify confirming and contradicting messages differently and, if they do, how these verification rates change with homophily. The paper is currently a work in progress, as we design the experiment to investigate these questions.

Rodriguez Barraquer T., Tabasso N., “Optimal Virality of Memes”
(preliminary title)

In nature, different viruses have different transmission characteristics as well as different recovery rates. For example, it is very easy to catch an airborne virus like the common cold and very easy to recover from it; other viruses, such as HIV, require much closer contact to be transmitted, but recovery may take much longer or be impossible. Many cultural phenomena spread in a similar way to viruses, yet there is little discussion of how the characteristics of a meme might be adjusted to maximise its spread in a population.
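One standard way to make this trade-off concrete is the textbook mean-field SIS (susceptible-infected-susceptible) model. The sketch below is not taken from the paper; it simply illustrates that a meme's long-run reach depends on the ratio of its transmission and recovery rates, so very different infection profiles can sustain the same prevalence. The parameter values are invented.

```python
# Mean-field SIS sketch (an illustration, not from the paper): two "memes"
# with different transmission (beta) and forgetting/recovery (gamma) rates.
# In this model the endemic share solves beta*i*(1-i) = gamma*i.

def sis_steady_state(beta, gamma):
    """Endemic share in the mean-field SIS model: i* = 1 - gamma/beta,
    or 0 if the meme cannot sustain itself (beta <= gamma)."""
    return max(0.0, 1.0 - gamma / beta)

# Easy to catch, quickly forgotten (cold-like meme):
print(sis_steady_state(beta=0.8, gamma=0.4))  # prints 0.5
# Hard to transmit, but rarely forgotten (HIV-like meme):
print(sis_steady_state(beta=0.2, gamma=0.1))  # prints 0.5
```

Both memes settle at the same 50% prevalence despite very different transmission characteristics, which is exactly why one can ask how a meme's characteristics might be tuned to maximise its spread.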