Fake news, disinformation and misinformation in social media: a review

Esma Aïmeur

Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

Sabrine Amri

Gilles Brassard

Associated data

All the data and material are available in the papers cited in the references.

Abstract

Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as unlimited easy communication and instant news and information, they can also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions for tackling these challenges.

Introduction

Context and motivation

Fake news, disinformation and misinformation have become such a scourge that Marcia McNutt, president of the National Academy of Sciences of the United States, is quoted as having said (in an implicit reference to the COVID-19 pandemic) that “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence” in a joint statement of the National Academies 1 posted on July 15, 2021. Indeed, although online social networks (OSNs), also called social media, have improved the ease with which real-time information is broadcast, their popularity and massive use have expanded the spread of fake news by increasing the speed and scope at which it can propagate. Fake news may refer to the manipulation of information, carried out through the production of false information or the distortion of true information. However, this does not mean that the problem is exclusive to social media. Long ago, rumors circulated in the traditional media that Elvis was not dead, 2 that the Earth was flat, 3 that aliens had invaded us, 4 etc.

Social media has therefore become a powerful source of fake news dissemination (Sharma et al. 2019; Shu et al. 2017). According to Pew Research Center’s analysis of news use across social media platforms, in 2020 about half of American adults got news on social media at least sometimes, 5 while in 2018 only one-fifth of them said they often got news via social media. 6

Hence, fake news can have a significant impact on society, as manipulated and false content is easier to generate and harder to detect (Kumar and Shah 2018) and as disinformation actors change their tactics (Kumar and Shah 2018; Micallef et al. 2020). In 2017, Snow predicted in the MIT Technology Review (Snow 2017) that most individuals in mature economies would consume more false than valid information by 2022.

Much of the recent news on the COVID-19 pandemic, which flooded the web and created panic in many countries, has been reported as fake. 7 For example, holding your breath for ten seconds to one minute is not a self-test for COVID-19 8 (see Fig. 1). Similarly, online posts claiming to reveal various “cures” for COVID-19, such as eating boiled garlic or drinking chlorine dioxide (an industrial bleach), were verified 9 to be fake and in some cases dangerous; they will never cure the infection.

Fig. 1: Fake news example about a self-test for COVID-19. Source: https://cdn.factcheck.org/UploadedFiles/Screenshot031120_false.jpg, last accessed 26-12-2022

Social media has outperformed television as the major news source for young people in the UK and the USA. 10 Moreover, as it is easier to generate and disseminate news online than through traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, a previous study about the spread of online news on Twitter (Vosoughi et al. 2018) reported that false news spreads online six times faster than truthful content and that 70% of users could not distinguish real from fake news (Vosoughi et al. 2018), owing to the attraction of the novelty of the latter (Bovet and Makse 2019). It was determined that falsehood spreads significantly farther, faster, deeper and more broadly than the truth in all categories of information, and the effects are more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information (Vosoughi et al. 2018).

Over 1 million tweets were estimated to be related to fake news by the end of the 2016 US presidential election. 11 In 2017, a German government spokesman affirmed: “We are dealing with a phenomenon of a dimension that we have not seen before,” referring to an unprecedented spread of fake news on social networks. 12 Given the strength of this new phenomenon, fake news was chosen as the word of the year by the Macquarie dictionary both in 2016 13 and in 2018 14 as well as by the Collins dictionary in 2017. 15, 16 In 2020, the new term “infodemic” was coined, reflecting widespread concern among researchers (Gupta et al. 2022; Apuke and Omar 2021; Sharma et al. 2020; Hartley and Vu 2020; Micallef et al. 2020) about the proliferation of misinformation linked to the COVID-19 pandemic.

The Gartner Group’s top strategic predictions for 2018 and beyond included the need for IT leaders to quickly develop artificial intelligence (AI) algorithms to address counterfeit reality and fake news. 17 However, fake news identification is a complex issue. Snow (2017) questioned the ability of AI to win the war against fake news. Similarly, other researchers concurred that even the best AI for spotting fake news is still ineffective. 18 Besides, recent studies have shown that the power of AI algorithms to identify fake news is lower than their power to create it (Paschen 2019). Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth in order to deceive users, and as a result, it is often hard to determine its veracity by AI alone. Therefore, it is crucial to consider more effective approaches to solve the problem of fake news in social media.

Contribution

The fake news problem has been addressed by researchers from various perspectives related to different topics. These topics include, but are not restricted to, social science studies, which investigate why and who falls for fake news (Altay et al. 2022; Batailler et al. 2022; Sterret et al. 2018; Badawy et al. 2019; Pennycook and Rand 2020; Weiss et al. 2020; Guadagno and Guttieri 2021), whom to trust and how perceptions of misinformation and disinformation relate to media trust and media consumption patterns (Hameleers et al. 2022), how fake news differs from personal lies (Chiu and Oh 2021; Escolà-Gascón 2021), how the law can regulate digital disinformation and how governments can regulate the values of social media companies that themselves regulate disinformation spread on their platforms (Marsden et al. 2020; Schuyler 2019; Vasu et al. 2018; Burshtein 2017; Waldman 2017; Alemanno 2018; Verstraete et al. 2017), and the challenges fake news poses to democracy (Jungherr and Schroeder 2021); behavioral intervention studies, which examine what literacy means in the age of dis-, mis- and malinformation (Carmi et al. 2020), investigate whether media literacy helps the identification of fake news (Jones-Jang et al. 2021), and attempt to improve people’s news literacy (Apuke et al. 2022; Dame Adjin-Tettey 2022; Hameleers 2022; Nagel 2022; Jones-Jang et al. 2021; Mihailidis and Viotty 2017; García et al. 2020) by encouraging people to pause to assess the credibility of headlines (Fazio 2020), by promoting civic online reasoning (McGrew 2020; McGrew et al. 2018) and critical thinking (Lutzke et al. 2019), and through evaluations of credibility indicators (Bhuiyan et al. 2020; Nygren et al. 2019; Shao et al. 2018a; Pennycook et al. 2020a, b; Clayton et al. 2020; Ozturk et al. 2015; Metzger et al. 2020; Sherman et al. 2020; Nekmat 2020; Brashier et al. 2021; Chung and Kim 2021; Lanius et al. 2021); as well as social media-driven studies, which investigate the effect of signals (e.g., sources) on the detection and recognition of fake news (Vraga and Bode 2017; Jakesch et al. 2019; Shen et al. 2019; Avram et al. 2020; Hameleers et al. 2020; Dias et al. 2020; Nyhan et al. 2020; Bode and Vraga 2015; Tsang 2020; Vishwakarma et al. 2019; Yavary et al. 2020) and investigate fake and reliable news sources using complex network analysis based on search engine optimization metrics (Mazzeo and Rapisarda 2022).

The impacts of fake news have reached various areas and disciplines beyond online social networks and society (García et al. 2020 ) such as economics (Clarke et al. 2020 ; Kogan et al. 2019 ; Goldstein and Yang 2019 ), psychology (Roozenbeek et al. 2020a ; Van der Linden and Roozenbeek 2020 ; Roozenbeek and van der Linden 2019 ), political science (Valenzuela et al. 2022 ; Bringula et al. 2022 ; Ricard and Medeiros 2020 ; Van der Linden et al. 2020 ; Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ), health science (Alonso-Galbán and Alemañy-Castilla 2022 ; Desai et al. 2022 ; Apuke and Omar 2021 ; Escolà-Gascón 2021 ; Wang et al. 2019c ; Hartley and Vu 2020 ; Micallef et al. 2020 ; Pennycook et al. 2020b ; Sharma et al. 2020 ; Roozenbeek et al. 2020b ), environmental science (e.g., climate change) (Treen et al. 2020 ; Lutzke et al. 2019 ; Lewandowsky 2020 ; Maertens et al. 2020 ), etc.

Interesting research has been carried out to review and study the fake news issue in online social networks. Some works focus not only on fake news but also distinguish between fake news and rumor (Bondielli and Marcelloni 2019; Meel and Vishwakarma 2020), while others tackle the whole problem, from characterization to processing techniques (Shu et al. 2017; Guo et al. 2020; Zhou and Zafarani 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni 2019), a data mining perspective (Shu et al. 2017), a crowd intelligence perspective (Guo et al. 2020), or a knowledge-based perspective (Zhou and Zafarani 2020). Furthermore, most of these studies ignore at least one of the mentioned perspectives, and in many cases they do not cover other existing detection approaches, such as those based on blockchain and fact-checking, or analyses of metrics used for search engine optimization (Mazzeo and Rapisarda 2022). In our work, to the best of our knowledge, we cover all the approaches used for fake news detection. Indeed, we investigate the proposed solutions from broader perspectives (i.e., the detection techniques that are used, as well as the different aspects and types of information used).

Therefore, in this paper, we are highly motivated by the following facts. First, fake news detection on social media is still in the early stages of development, and many challenging issues remain that require deeper investigation. Hence, it is necessary to discuss potential research directions that can improve fake news detection and mitigation tasks. Moreover, the dynamic nature of fake news propagation through social networks further complicates matters (Sharma et al. 2019). False information can easily reach and impact a large number of users in a short time (Friggeri et al. 2014; Qian et al. 2018). Finally, fact-checking organizations cannot keep up with the dynamics of propagation, as they require human verification, which can hold back a timely and cost-effective response (Kim et al. 2018; Ruchansky et al. 2017; Shu et al. 2018a).

Our work focuses primarily on understanding the “fake news” problem, its related challenges and root causes, and reviewing automatic fake news detection and mitigation methods in online social networks as addressed by researchers. The main contributions that differentiate us from other works are summarized below:

  • We present the general context from which the fake news problem emerged (i.e., online deception).
  • We review existing definitions of fake news, identify the terms and features most commonly used to define fake news, and categorize related works accordingly.
  • We propose a fake news typology classification based on the various categorizations of fake news reported in the literature.
  • We point out the most challenging factors preventing researchers from proposing highly effective solutions for automatic fake news detection in social media.
  • We highlight and classify representative studies in the domain of automatic fake news detection and mitigation on online social networks including the key methods and techniques used to generate detection models.
  • We discuss the key shortcomings that may inhibit the effectiveness of the proposed fake news detection methods in online social networks.
  • We provide recommendations that can help address these shortcomings and improve the quality of research in this domain.

The rest of this article is organized as follows. We explain the methodology with which the studied references are collected and selected in Sect.  2 . We introduce the online deception problem in Sect.  3 . We highlight the modern-day problem of fake news in Sect.  4 , followed by challenges facing fake news detection and mitigation tasks in Sect.  5 . We provide a comprehensive literature review of the most relevant scholarly works on fake news detection in Sect.  6 . We provide a critical discussion and recommendations that may fill some of the gaps we have identified, as well as a classification of the reviewed automatic fake news detection approaches, in Sect.  7 . Finally, we provide a conclusion and propose some future directions in Sect.  8 .

Review methodology

This section introduces the systematic review methodology on which we relied to perform our study. We start with the formulation of the research questions, which allowed us to select the relevant research literature. Then, we provide the different sources of information together with the search and inclusion/exclusion criteria we used to select the final set of papers.

Research questions formulation

The research scope, research questions, and inclusion/exclusion criteria were established following an initial evaluation of the literature. The following research questions were formulated and addressed.

  • RQ1: What is fake news in social media, how is it defined in the literature, what are its related concepts, and what are its different types?
  • RQ2: What are the existing challenges and issues related to fake news?
  • RQ3: What are the available techniques used to perform fake news detection in social media?

Sources of information

We broadly searched for journal and conference research articles, books, and magazines as sources from which to extract relevant articles. We used the main scientific databases and digital libraries in our search, such as Google Scholar, 19 IEEE Xplore, 20 Springer Link, 21 ScienceDirect, 22 Scopus, 23 and the ACM Digital Library. 24 We also screened most of the related high-profile conferences, such as WWW, SIGKDD, VLDB and ICDE, to identify recent work.

Search criteria

We focused our research over a period of ten years, but we made sure that about two-thirds of the research papers that we considered were published in or after 2019. Additionally, we defined a set of keywords to search the above-mentioned scientific databases since we concentrated on reviewing the current state of the art in addition to the challenges and the future direction. The set of keywords includes the following terms: fake news, disinformation, misinformation, information disorder, social media, detection techniques, detection methods, survey, literature review.

Study selection, exclusion and inclusion criteria

To retrieve relevant research articles, based on our sources of information and search criteria, a systematic keyword-based search was carried out by posing different search queries, as shown in Table  1 .

List of keywords for searching relevant articles

This search produced a primary list of articles. To this initial list of studies, we applied the set of inclusion/exclusion criteria presented in Table 2 to determine whether each study should be included and to select the appropriate research papers.

Inclusion and exclusion criteria

After reading the abstracts, we excluded articles that did not meet our criteria and kept the most important research to help us understand the field. We then reviewed the remaining articles in full and found only 61 research papers that discuss the definition of the term fake news and its related concepts (see Table 4). We used the remaining papers to understand the field, reveal the challenges, review the detection techniques, and discuss future directions.

Classification of fake news definitions based on the used term and features

A brief introduction to online deception

The Cambridge Online Dictionary defines deception as “the act of hiding the truth, especially to get an advantage.” Deception relies on people’s trust, doubt and strong emotions, which may prevent them from thinking and acting clearly (Aïmeur et al. 2018). In previous work (Aïmeur et al. 2018), we also define it as the process that undermines the ability to consciously make decisions and take convenient actions, following personal values and boundaries. In other words, deception gets people to do things they would not otherwise do. In the context of online deception, several factors need to be considered: the deceiver, the purpose or aim of the deception, the social media service, the deception technique and the potential target (Aïmeur et al. 2018; Hage et al. 2021).

Researchers are working on developing new ways to protect users and prevent online deception (Aïmeur et al. 2018). Due to the sophistication of the attacks, this is a complex task: malicious attackers use increasingly complex tools and strategies to deceive users. Furthermore, the way information is organized and exchanged in social media may expose OSN users to many risks (Aïmeur et al. 2013).

In fact, this field is a recent research area that needs the collaborative efforts of multidisciplinary practices such as psychology, sociology, journalism and computer science, as well as cyber-security and digital marketing (which are not yet well explored in the field of dis/mis/malinformation but are relevant for future research). Moreover, Ismailov et al. (2020) analyzed the main causes that could be responsible for the efficiency gap between laboratory results and real-world implementations.

Reviewing the state of the art of online deception is outside the scope of this paper. However, we think it is crucial to note that fake news, misinformation and disinformation are indeed parts of the larger landscape of online deception (Hage et al. 2021).

Fake news, the modern-day problem

Fake news has existed for a very long time, long before its wide circulation was facilitated by the invention of the printing press. 25 For instance, Socrates was condemned to death nearly twenty-five hundred years ago under the fake news that he was guilty of impiety against the pantheon of Athens and of corrupting the youth. 26 A Google Trends analysis of the term “fake news” reveals an explosion in popularity around the time of the 2016 US presidential election. 27 Fake news detection is a problem that has recently been addressed by numerous organizations, including the European Union 28 and NATO. 29

In this section, we first overview the fake news definitions as they were provided in the literature. We identify the terms and features used in the definitions, and we classify the latter based on them. Then, we provide a fake news typology based on distinct categorizations that we propose, and we define and compare the most cited forms of one specific fake news category (i.e., the intent-based fake news category).

Definitions of fake news

“Fake news” is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, 30 yet the term has evolved over time and has become synonymous with the spread of false information (Cooke 2017 ).

The first definition of the term fake news was provided by Allcott and Gentzkow (2017): news articles that are intentionally and verifiably false and could mislead readers. Other definitions have since been provided in the literature, and they all agree that fake news is, by definition, not authentic (i.e., non-factual). However, they disagree on the inclusion or exclusion of some related concepts, such as satire, rumors, conspiracy theories, misinformation and hoaxes, in the definition. More recently, Nakov (2020) reported that the term fake news has started to mean different things to different people and that, for some politicians, it even means “news that I do not like.”

Hence, there is still no agreed definition of the term “fake news.” Moreover, we can find many terms and concepts in the literature that refer to fake news (Van der Linden et al. 2020 ; Molina et al. 2021 ) (Abu Arqoub et al. 2022 ; Allen et al. 2020 ; Allcott and Gentzkow 2017 ; Shu et al. 2017 ; Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Conroy et al. 2015 ; Celliers and Hattingh 2020 ; Nakov 2020 ; Shu et al. 2020c ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ; Egelhofer and Lecheler 2019 ; Mustafaraj and Metaxas 2017 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Lazer et al. 2018 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ), disinformation (Kapantai et al. 2021 ; Shu et al. 2020a , c ; Kumar et al. 2016 ; Bhattacharjee et al. 2020 ; Marsden et al. 2020 ; Jungherr and Schroeder 2021 ; Starbird et al. 2019 ; Ireton and Posetti 2018 ), misinformation (Wu et al. 2019 ; Shu et al. 2020c ; Shao et al. 2016 , 2018b ; Pennycook and Rand 2019 ; Micallef et al. 2020 ), malinformation (Dame Adjin-Tettey 2022 ) (Carmi et al. 2020 ; Shu et al. 2020c ), false information (Kumar and Shah 2018 ; Guo et al. 2020 ; Habib et al. 2019 ), information disorder (Shu et al. 2020c ; Wardle and Derakhshan 2017 ; Wardle 2018 ; Derakhshan and Wardle 2017 ), information warfare (Guadagno and Guttieri 2021 ) and information pollution (Meel and Vishwakarma 2020 ).

There is also a remarkable amount of disagreement over the classification of the term fake news in the research literature, as well as in policy (de Cock Buning 2018 ; ERGA 2018 , 2021 ). Some consider fake news as a type of misinformation (Allen et al. 2020 ; Singh et al. 2021 ; Ha et al. 2021 ; Pennycook and Rand 2019 ; Shao et al. 2018b ; Di Domenico et al. 2021 ; Sharma et al. 2019 ; Celliers and Hattingh 2020 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Islam et al. 2020 ), others consider it as a type of disinformation (de Cock Buning 2018 ) (Bringula et al. 2022 ; Baptista and Gradim 2022 ; Tsang 2020 ; Tandoc Jr et al. 2021 ; Bastick 2021 ; Khan et al. 2019 ; Shu et al. 2017 ; Nakov 2020 ; Shu et al. 2020c ; Egelhofer and Lecheler 2019 ), while others associate the term with both disinformation and misinformation (Wu et al. 2022 ; Dame Adjin-Tettey 2022 ; Hameleers et al. 2022 ; Carmi et al. 2020 ; Allcott and Gentzkow 2017 ; Zhang and Ghorbani 2020 ; Potthast et al. 2017 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ). On the other hand, some prefer to differentiate fake news from both terms (ERGA 2018 ; Molina et al. 2021 ; ERGA 2021 ) (Zhou and Zafarani 2020 ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ).

The existing terms can be separated into two groups. The first group represents the general terms, which are information disorder , false information and fake news , each of which includes a subset of terms from the second group. The second group represents the elementary terms, which are misinformation , disinformation and malinformation . The literature agrees on the definitions of the latter group, but there is still no agreed-upon definition of the first group. In Fig.  2 , we model the relationship between the most used terms in the literature.

Fig. 2: Modeling of the relationship between terms related to fake news

The terms most used in the literature to refer to, categorize and classify fake news can be summarized and defined as shown in Table 3, in which we capture the similarities and show the differences between the terms based on two common key features: the intent and the authenticity of the news content. The intent feature refers to the intention behind the term that is used (i.e., whether or not the purpose is to mislead or cause harm), whereas the authenticity feature refers to its factual aspect (i.e., whether the content is verifiably false or not; we label the latter case as genuine). Some of these terms are explicitly used to refer to fake news (i.e., disinformation, misinformation and false information), while others are not (i.e., malinformation). In the comparison table, an empty dash (–) cell denotes that the classification does not apply.

A comparison between used terms based on intent and authenticity

In Fig. 3, we identify the different features used in the literature to define fake news (i.e., intent, authenticity and knowledge). Some definitions are based on two key features, authenticity and intent (i.e., news articles that are intentionally and verifiably false and could mislead readers). Other definitions are based on either authenticity or intent alone. Still other researchers categorize false information on the web and social media based on its intent and knowledge (i.e., when there is a single ground truth). In Table 4, we classify the existing fake news definitions based on the term and the features used. In the classification, the references in the cells refer to the research studies in which a fake news definition was provided, while the empty dash (–) cells denote that the classification does not apply.

Fig. 3: The features used for fake news definition

Fake news typology

Various categorizations of fake news have been provided in the literature. We can distinguish two major categories of fake news based on the studied perspective (i.e., intention or content), as shown in Fig. 4. Note that our proposed fake news typology is not about detection methods and that its categories are not mutually exclusive. Hence, a given piece of fake news can be described from both perspectives (i.e., intention and content) at the same time. For instance, satire (i.e., intent-based fake news) can contain text and/or multimedia content types of data (e.g., headline, body, image, video) (i.e., content-based fake news), and so on.


Most researchers classify fake news based on intent (Collins et al. 2020; Bondielli and Marcelloni 2019; Zannettou et al. 2019; Kumar et al. 2016; Wardle 2017; Shu et al. 2017; Kumar and Shah 2018) (see Sect. 4.2.2). However, other researchers (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b) focus on the content to categorize types of fake news, distinguishing the different formats and content types of data in the news (e.g., text and/or multimedia).

Recently, another classification was proposed by Zhang and Ghorbani (2020). It is based on the combination of content and intent to categorize fake news. They distinguish between physical and non-physical news content. Physical content consists of the carriers and format of the news, while non-physical content consists of the opinions, emotions, attitudes and sentiments that the news creators want to express.

Content-based fake news category

According to researchers of this category (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b), forms of fake news may include false text, such as hyperlinks or embedded content, and multimedia, such as false videos (Demuyakor and Opata 2022), images (Masciari et al. 2020; Shen et al. 2019), audio (Demuyakor and Opata 2022) and so on. Moreover, we can also find multimodal content (Shu et al. 2020a), that is, fake news articles and posts composed of multiple types of data combined together, for example, a fabricated image along with a related text (Shu et al. 2020a). In this category of fake news forms, we can mention as examples deepfake videos (Yang et al. 2019b) and GAN-generated fake images (Zhang et al. 2019b), which are artificial intelligence-based, machine-generated fake content that is hard for unsophisticated social network users to identify.

The effects of these forms of fake news content vary with respect to credibility assessment and sharing intentions, which in turn influence the spread of fake news on OSNs. For instance, people with little knowledge about an issue are easier to convince that misleading or fake news is real than people who are strongly concerned about that issue, especially when the news is shared as video rather than as text or audio (Demuyakor and Opata 2022).

Intent-based fake news category

The most often mentioned and discussed forms of fake news according to researchers in this category include, but are not restricted to, clickbait, hoax, rumor, satire, propaganda, framing and conspiracy theories. In the following subsections, we explain these types of fake news as they were defined in the literature and undertake a brief comparison between them, as depicted in Table 5. The comparison of these most cited forms of intent-based fake news is based on what we suspect are the most common criteria mentioned by researchers.

A comparison between the different types of intent-based fake news

Clickbait refers to misleading headlines and thumbnails of content on the web (Zannettou et al. 2019) that tend to be fake stories with catchy headlines aimed at enticing the reader to click on a link (Collins et al. 2020). This type of fake news is considered the least severe type of false information because, if a user reads/views the whole content, it is possible to determine whether the headline and/or the thumbnail was misleading (Zannettou et al. 2019). The goal behind using clickbait is to increase traffic to a website (Zannettou et al. 2019).

A hoax is a false (Zubiaga et al. 2018) or inaccurate (Zannettou et al. 2019), intentionally fabricated (Collins et al. 2020) news story used to masquerade as the truth (Zubiaga et al. 2018) and presented as factual (Zannettou et al. 2019) to deceive the public or audiences (Collins et al. 2020). This category is also known as half-truth or factoid stories (Zannettou et al. 2019). Popular examples of hoaxes are stories that report the false death of celebrities (Zannettou et al. 2019) and public figures (Collins et al. 2020). Recently, hoaxes about COVID-19 have been circulating through social media.

The term rumor refers to ambiguous or never-confirmed claims (Zannettou et al. 2019) that are disseminated with a lack of evidence to support them (Sharma et al. 2019). This kind of information is widely propagated on OSNs (Zannettou et al. 2019). However, rumors are not necessarily false and may turn out to be true (Zubiaga et al. 2018): they originate from unverified sources but may be true or false or remain unresolved (Zubiaga et al. 2018).

Satire refers to stories that contain a lot of irony and humor (Zannettou et al. 2019). It presents stories as news that might be factually incorrect, but the intent is not to deceive but rather to call out, ridicule, or expose behavior that is shameful, corrupt, or otherwise “bad” (Golbeck et al. 2018). This is done with a fabricated story or by exaggerating the truth reported in mainstream media in the form of comedy (Collins et al. 2020). The intent behind satire seems legitimate, yet many authors (such as Wardle 2017) do include satire as a type of fake news: although there is no intention to cause harm, it has the potential to mislead or fool people.

Golbeck et al. (2018) also mention that there is a spectrum from fake to satirical news, which they found to be exploited by many fake news sites. These sites used disclaimers at the bottom of their webpages to suggest they were “satirical,” even when there was nothing satirical about their articles, in order to protect themselves from accusations of being fake. What distinguishes the satirical form of fake news is that the authors or the host present themselves as comedians or entertainers rather than as journalists informing the public (Collins et al. 2020). However, most audiences believe the information passed on in this satirical form because the comedian usually takes news from mainstream media and frames it to suit their program (Collins et al. 2020).

Propaganda refers to news stories created by political entities to mislead people. It is a special instance of fabricated stories that aim to harm the interests of a particular party and typically has a political context (Zannettou et al. 2019). Propaganda was widely used during both World Wars (Collins et al. 2020) and during the Cold War (Zannettou et al. 2019). It is a consequential type of false information as it can change the course of human history (e.g., by changing the outcome of an election) (Zannettou et al. 2019). States are the main actors of propaganda. Recently, propaganda has also been used by politicians and media organizations to support a certain position or view (Collins et al. 2020). Online astroturfing is an example of a tool used for the dissemination of propaganda. It is a covert manipulation of public opinion (Peng et al. 2017) that aims to make it seem that many people share the same opinion about something. Depending on the domain of interest, online astroturfing can be divided mainly into political astroturfing, corporate astroturfing and astroturfing in e-commerce or online services (Mahbub et al. 2019). Propaganda types of fake news can be debunked with manual fact-based detection models such as expert-based fact-checkers (Collins et al. 2020).

Framing refers to employing some aspect of reality to make content more visible while the truth is concealed (Collins et al. 2020), in order to deceive and misguide readers. People understand certain concepts based on the way they are coined and presented. An example of framing was provided by Collins et al. (2020): suppose a leader X says “I will neutralize my opponent,” simply meaning that he will beat his opponent in a given election. Such a statement may be framed as “leader X threatens to kill Y,” and this framed statement provides a total misrepresentation of the original meaning.

Conspiracy theories

Conspiracy theories refer to the belief that an event is the result of secret plots generated by powerful conspirators. Conspiracy belief refers to people’s adoption of and belief in conspiracy theories, and it is associated with psychological, political and social factors (Douglas et al. 2019). Conspiracy theories are widespread in contemporary democracies (Sutton and Douglas 2020), and they have major consequences. For instance, during the COVID-19 pandemic, conspiracy theories have been discussed from a public health perspective (Meese et al. 2020; Allington et al. 2020; Freeman et al. 2020).

Comparison between the most popular intent-based types of fake news

Following a review of the most popular intent-based types of fake news, we compare them as shown in Table  5 based on the most common criteria mentioned by researchers in their definitions as listed below.

  • the intent behind the news, which refers to whether a given news type was mainly created to intentionally deceive people or not (e.g., humor, irony, entertainment, etc.);
  • the way the news propagates through OSNs, which determines whether each type of fake news propagates quickly or slowly;
  • the severity of the impact of the news on OSN users, which refers to how strongly the public has been affected by the given type of fake news; the impact considered here is mainly the negative impact;
  • and the goal behind disseminating the news, which can be to gain popularity for a particular entity (e.g., a political party), to make a profit (e.g., a lucrative business), or other reasons, such as humor and irony in the case of satire, spreading panic or anger and manipulating the public in the case of hoaxes, made-up stories about a particular person or entity in the case of rumors, and misguiding readers in the case of framing.

Note, however, that the comparison provided in Table 5 is deduced from the studied research papers; it reflects our point of view and is not based on empirical data.

We suspect that the most dangerous types of fake news are those that combine a strong intention to deceive the public, fast propagation through social media, a highly negative impact on OSN users, and complicated hidden goals and agendas. Although the other types of fake news are less dangerous, they should not be ignored.

Moreover, it is important to highlight that overlap between the types of fake news mentioned above has been proven to exist, so it is possible to observe false information that falls within multiple categories (Zannettou et al. 2019). We provide two examples from Zannettou et al. (2019) to better understand possible overlaps: (1) a rumor may also use clickbait techniques to increase the audience that will read the story; and (2) propaganda stories can be seen as a special instance of framing stories.

Challenges related to fake news detection and mitigation

To alleviate fake news and its threats, it is crucial to first identify and understand the factors involved that continue to challenge researchers. Thus, the main question is to explore and investigate the factors that make it easier to fall for manipulated information. Despite the tremendous progress made in alleviating some of the challenges in fake news detection (Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Shu et al. 2020a ), much more work needs to be accomplished to address the problem effectively.

In this section, we discuss several open issues that make fake news detection in social media a challenging problem. These issues can be summarized as follows: content-based issues (i.e., deceptive content that resembles the truth very closely), contextual issues (i.e., lack of user awareness, social bots that spread fake content, and the dynamic nature of OSNs that leads to fast propagation), as well as the issue of existing datasets (i.e., there is still no one-size-fits-all benchmark dataset for fake news detection). These various aspects have been shown (Shu et al. 2017) to have a great impact on the accuracy of fake news detection approaches.

Content-based issue: deceptive content

Automatic fake news detection remains a huge challenge, primarily because the content is designed in a way that closely resembles the truth. Besides, most deceivers choose their words carefully and use their language strategically to avoid being caught. Therefore, it is often hard for AI to determine the veracity of such content without relying on additional information from third parties such as fact-checkers.

Abdullah-All-Tanvir et al. (2020) reported that fake news tends to have more complicated stories and hardly ever makes any references. It is also more likely to contain a greater number of words that express negative emotions. This makes the content so complicated that it becomes impossible for a human to manually assess its credibility. Therefore, detecting fake news on social media is quite challenging. Moreover, fake news appears in multiple types and forms, which makes it hard to define a single global solution able to capture and deal with all the disseminated content. Consequently, detecting false information is not a straightforward task due to its various types and forms (Zannettou et al. 2019).
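To make these content cues concrete, the following minimal sketch (our own illustration, not the method of the cited authors) extracts a few such features from a news text: the number of hyperlink references, the share of words from a tiny, purely illustrative negative-emotion lexicon, and average sentence length as a crude proxy for story complexity.

```python
# Minimal sketch (not the cited authors' method) of simple content cues.
# NEGATIVE_WORDS is a tiny illustrative lexicon, not a validated resource.
import re

NEGATIVE_WORDS = {"fear", "panic", "outrage", "disaster", "dead", "fraud", "hoax"}

def content_cues(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # crude proxy for "complicated stories": average sentence length
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # fake stories reportedly "hardly ever make any references"
        "n_links": len(re.findall(r"https?://\S+", text)),
        # share of words expressing negative emotions
        "neg_emotion_ratio": sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1),
    }

if __name__ == "__main__":
    print(content_cues("Panic spreads as officials hide the disaster. No sources cited!"))
```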

Contextual issues

Contextual issues are challenges that we suspect are not related to the content of the news but are instead inferred from the context of the online news post (i.e., humans as the weakest factor due to lack of user awareness, social bot spreaders, and the dynamic nature of online social platforms leading to the fast propagation of fake news).

Humans are the weakest factor due to the lack of awareness

Recent statistics 31 show that the percentage of unintentional fake news spreaders (people who share fake news without the intention to mislead) on social media is five times higher than that of intentional spreaders. Moreover, another recent statistic 32 shows that the percentage of people who were confident about their ability to discern fact from fiction is ten times higher than that of people who were not confident about the truthfulness of what they were sharing. From this, we can deduce a lack of human awareness of the ascent of fake news.

Public susceptibility and lack of user awareness (Sharma et al. 2019 ) have always been the most challenging problem when dealing with fake news and misinformation. This is a complex issue because many people believe almost everything on the Internet and the ones who are new to digital technology or have less expertise may be easily fooled (Edgerly et al. 2020 ).

Moreover, it has been widely shown (Metzger et al. 2020; Edgerly et al. 2020) that people are often motivated to support and accept information that matches their preexisting viewpoints and beliefs, and to reject information that does not. Accordingly, Shu et al. (2017) describe an interesting correlation between fake news spread and psychological and cognitive theories. They further suggest that humans are more likely to believe information that confirms their existing views and ideological beliefs. Consequently, they deduce that humans are naturally not very good at differentiating real information from fake information.

Recent research by Giachanou et al. ( 2020 ) studies the role of personality and linguistic patterns in discriminating between fake news spreaders and fact-checkers. They classify a user as a potential fact-checker or a potential fake news spreader based on features that represent users’ personality traits and linguistic patterns used in their tweets. They show that leveraging personality traits and linguistic patterns can improve the performance in differentiating between checkers and spreaders.

Furthermore, several researchers studied the prevalence of fake news on social networks during (Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ) and after (Garrett and Bond 2021 ) the 2016 US presidential election and found that individuals most likely to engage with fake news sources were generally conservative-leaning, older, and highly engaged with political news.

Metzger et al. ( 2020 ) examine how individuals evaluate the credibility of biased news sources and stories. They investigate the role of both cognitive dissonance and credibility perceptions in selective exposure to attitude-consistent news information. They found that online news consumers tend to perceive attitude-consistent news stories as more accurate and more credible than attitude-inconsistent stories.

Similarly, Edgerly et al. ( 2020 ) explore the impact of news headlines on the audience’s intent to verify whether given news is true or false. They concluded that participants exhibit higher intent to verify the news only when they believe the headline to be true, which is predicted by perceived congruence with preexisting ideological tendencies.

Luo et al. ( 2022 ) evaluate the effects of endorsement cues in social media on message credibility and detection accuracy. Results showed that headlines associated with a high number of likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. Consequently, they highlight the urgency of empowering individuals to assess both news veracity and endorsement cues appropriately on social media.

Moreover, misinformed people are a greater problem than uninformed people (Kuklinski et al. 2000), because the former hold inaccurate opinions (which may concern politics, climate change or medicine) that are harder to correct. Indeed, people find it difficult to update their misinformation-based beliefs even after these have been proved to be false (Flynn et al. 2017). Moreover, even when a person has accepted the corrected information, their prior belief may still affect their opinion (Nyhan and Reifler 2015).

Falling for disinformation may also be explained by a lack of critical thinking and of the need for evidence that supports information (Vilmer et al. 2018; Badawy et al. 2019). However, it is also possible that people choose misinformation because they engage in directionally motivated reasoning (Badawy et al. 2019; Flynn et al. 2017). Online users are generally vulnerable and tend to perceive social media as reliable, as reported by Abdullah-All-Tanvir et al. (2019), who propose to automate fake news detection.

It is worth noting that, in addition to bots being responsible for the outpouring of the majority of misrepresentations, specific individuals also contribute a large share of this issue (Abdullah-All-Tanvir et al. 2019). Furthermore, Vosoughi et al. (2018) found that, contrary to conventional wisdom, robots accelerate the spread of real and fake news at the same rate, implying that fake news spreads more than the truth because humans, not robots, are more likely to spread it.

In this case, verified users and those with numerous followers were not necessarily the ones responsible for spreading the misinformation contained in the corrupted posts (Abdullah-All-Tanvir et al. 2019).

Viral fake news can wreak havoc on our society. Therefore, to mitigate the negative impact of fake news, it is important to analyze the factors that lead people to fall for misinformation and to further understand why people spread fake news (Cheng et al. 2020). Measuring the accuracy, credibility, veracity and validity of news content can also be a key countermeasure to consider.

Social bot spreaders

Several authors (Shu et al. 2018b, 2017; Shi et al. 2019; Bessi and Ferrara 2016; Shao et al. 2018a) have also shown that fake news is likely to be created and spread by non-human accounts with similar attributes and structure in the network, such as social bots (Ferrara et al. 2016). Bots (short for software robots) have existed since the early days of computers. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior (Ferrara et al. 2016). Although they are designed to provide a useful service, they can be harmful, for example when they contribute to the spread of unverified information or rumors (Ferrara et al. 2016). However, it is important to note that bots are simply tools created and maintained by humans for specific hidden agendas.

Social bots tend to connect with legitimate users instead of other bots. They try to act like a human with fewer words and fewer followers on social media. This contributes to the forwarding of fake news (Jiang et al. 2019 ). Moreover, there is a difference between bot-generated and human-written clickbait (Le et al. 2019 ).

Many researchers have addressed ways of identifying and analyzing possible sources of fake news spread in social media. Recent research by Shu et al. (2020a) describes social bots’ use of two strategies to spread low-credibility content. First, they amplify interactions with content as soon as it is created to make it look legitimate and to facilitate its spread across social networks. Next, they try to increase public exposure to the created content, and thus boost its perceived credibility, by targeting influential users who are more likely to believe disinformation, in the hope of getting them to “repost” the fabricated content. They further discuss the social bot detection systems taxonomy proposed by Ferrara et al. (2016), which divides bot detection methods into three classes: (1) graph-based, (2) crowdsourcing and (3) feature-based social bot detection methods.

Similarly, Shao et al. ( 2018a ) examine social bots and how they promote the spread of misinformation through millions of Twitter posts during and following the 2016 US presidential campaign. They found that social bots played a disproportionate role in spreading articles from low-credibility sources by amplifying such content in the early spreading moments and targeting users with many followers through replies and mentions to expose them to this content and induce them to share it.

Ismailov et al. ( 2020 ) assert that the techniques used to detect bots depend on the social platform and the objective. They note that a malicious bot designed to make friends with as many accounts as possible will require a different detection approach than a bot designed to repeatedly post links to malicious websites. Therefore, they identify two models for detecting malicious accounts, each using a different set of features. Social context models achieve detection by examining features related to an account’s social presence including features such as relationships to other accounts, similarities to other users’ behaviors, and a variety of graph-based features. User behavior models primarily focus on features related to an individual user’s behavior, such as frequency of activities (e.g., number of tweets or posts per time interval), patterns of activity and clickstream sequences.
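As an illustration of the user behavior model described above, the following hedged sketch trains a feature-based bot classifier. The behavioral features (posts per hour, mean seconds between posts, fraction of posts containing links) and the synthetic training data are our own assumptions for demonstration; a real system would derive such features from platform activity logs.

```python
# Hedged sketch of a "user behavior" bot classifier; feature names and the
# synthetic training data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# columns: posts_per_hour, mean_seconds_between_posts, fraction_of_posts_with_links
humans = np.column_stack([rng.normal(0.5, 0.2, n), rng.normal(4000, 800, n), rng.uniform(0.0, 0.4, n)])
bots = np.column_stack([rng.normal(6.0, 2.0, n), rng.normal(300, 100, n), rng.uniform(0.5, 1.0, n)])
X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```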

Therefore, it is crucial to consider bot detection techniques to distinguish bots from normal users to better leverage user profile features to detect fake news.

However, there is also another “bot-like” strategy that aims to massively promote disinformation and fake content on social platforms: bot farms, also called troll farms. These are not social bots but groups of organized individuals hired to engage in trolling or bot-like promotion of narratives in a coordinated fashion (Wardle 2018) in order to massively spread fake news or any other harmful content. A prominent troll farm example is the Russia-based Internet Research Agency (IRA), which disseminated inflammatory content online to influence the outcome of the 2016 US presidential election. 33 As a result, Twitter suspended accounts connected to the IRA and deleted 200,000 tweets from Russian trolls (Jamieson 2020). Another example in this category is review bombing (Moro and Birt 2022). Review bombing refers to coordinated groups of people massively performing the same negative actions online (e.g., dislikes, negative reviews/comments) on an online video, game, post, product, etc., in order to reduce its aggregate review score. The review bombers can be both humans and bots coordinated to cause harm and mislead people by falsifying facts.

Dynamic nature of online social platforms and fast propagation of fake news

Sharma et al. (2019) affirm that the fast proliferation of fake news through social networks makes it hard to assess the credibility of information on social media. Similarly, Qian et al. (2018) assert that fake news and fabricated content propagate exponentially at the early stage of their creation and can cause significant losses in a short amount of time (Friggeri et al. 2014), including by manipulating the outcome of political events (Liu and Wu 2018; Bessi and Ferrara 2016).

Moreover, while analyzing the way source and promoters of fake news operate over the web through multiple online platforms, Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to real information (11%).

Furthermore, Shu et al. (2020c) recently attempted to understand the propagation of disinformation and fake news in social media and found that such content is produced and disseminated faster and more easily through social media because of the low barriers that prevent doing so. Similarly, Shu et al. (2020b) studied hierarchical propagation networks for fake news detection. They performed a comparative analysis between fake and real news from structural, temporal and linguistic perspectives, demonstrated the potential of using these features to detect fake news, and showed their effectiveness for fake news detection.
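To make the structural and temporal perspective concrete, the sketch below computes a few propagation features (cascade size, depth, maximum breadth, time to last reshare) over a toy retweet tree using networkx. The cascade, timestamps and feature names are illustrative assumptions, not the exact features of the cited studies.

```python
# Illustrative propagation features over a toy retweet cascade (assumed data).
import networkx as nx

# edges: (parent_post, reshare); times: minutes elapsed since the source post
cascade = nx.DiGraph([("src", "a"), ("src", "b"), ("a", "c"), ("c", "d")])
times = {"src": 0, "a": 2, "b": 5, "c": 9, "d": 30}

depths = nx.single_source_shortest_path_length(cascade, "src")
levels = set(depths.values())
features = {
    "size": cascade.number_of_nodes(),            # number of posts in the cascade
    "depth": max(depths.values()),                # longest reshare chain
    "max_breadth": max(sum(1 for d in depths.values() if d == lvl) for lvl in levels),
    "time_to_last_reshare": max(times.values()),  # simple temporal feature (minutes)
}
print(features)
```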

Lastly, Abdullah-All-Tanvir et al. (2020) note that it is almost impossible to manually detect the sources and authenticity of fake news effectively and efficiently, due to its fast circulation within such a short amount of time. Therefore, it is crucial to note that the dynamic nature of the various online social platforms, which results in the continued rapid and exponential propagation of such fake content, remains a major challenge that requires further investigation when defining innovative solutions for fake news detection.

Datasets issue

Existing approaches lack an inclusive dataset with derived multidimensional information on fake news characteristics, which is needed to achieve higher accuracy in machine learning classification models (Nyow and Chua 2019). Such datasets are primarily dedicated to validating the machine learning model and are the ultimate frame of reference for training the model and analyzing its performance. Therefore, if researchers evaluate their model on an unrepresentative dataset, the validity and efficiency of the model become questionable when the fake news detection approach is applied in a real-world scenario.

Moreover, several researchers (Shu et al. 2020d ; Wang et al. 2020 ; Pathak and Srihari 2019 ; Przybyla 2020 ) believe that fake news is diverse and dynamic in terms of content, topics, publishing methods and media platforms, and sophisticated linguistic styles geared to emulate true news. Consequently, training machine learning models on such sophisticated content requires large-scale annotated fake news data that are difficult to obtain (Shu et al. 2020d ).

Therefore, improving datasets is itself a worthwhile research direction: better data quality directly leads to better detection results. Adversarial learning techniques (e.g., GAN, SeqGAN) can be used to provide machine-generated data for training deeper models and building robust systems that distinguish fake examples from real ones. This approach can be used to counter the lack of datasets and the scarcity of data available to train models.
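As a purely illustrative sketch of this augmentation idea (assuming PyTorch and pre-computed dense feature vectors for real articles; the dimensions, batch sizes and training length below are arbitrary, and generating realistic text as SeqGAN does is considerably more involved), a toy GAN can learn to produce synthetic feature vectors that enlarge a scarce training set:

    import torch
    import torch.nn as nn

    FEATURE_DIM, NOISE_DIM = 50, 16

    # Generator: noise vector -> synthetic article feature vector.
    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 64), nn.ReLU(),
        nn.Linear(64, FEATURE_DIM))
    # Discriminator: feature vector -> logit (real vs. generated).
    discriminator = nn.Sequential(
        nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 1))

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    real_features = torch.randn(256, FEATURE_DIM)  # stand-in for real article features

    for step in range(200):
        # Discriminator step: separate real feature vectors from generated ones.
        fake = generator(torch.randn(64, NOISE_DIM)).detach()
        real = real_features[torch.randint(0, len(real_features), (64,))]
        d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
                  bce(discriminator(fake), torch.zeros(64, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: produce vectors the discriminator labels as real.
        g_loss = bce(discriminator(generator(torch.randn(64, NOISE_DIM))), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    # generator(torch.randn(n, NOISE_DIM)) now yields synthetic vectors that can
    # augment a scarce training set for a downstream fake news classifier.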

Fake news detection literature review

Fake news detection in social networks is still at an early stage of development, and many challenging issues need further investigation. It has become an emerging research area that is attracting considerable attention.

There are various research studies on fake news detection in online social networks, including several that focus on the automatic detection of fake news using artificial intelligence techniques. In this section, we review the existing approaches used in automatic fake news detection, as well as the techniques that have been adopted. We then provide a critical discussion built on a primary classification scheme based on a specific set of criteria.

Categories of fake news detection

In this section, we give an overview of most of the existing automatic fake news detection solutions adopted in the literature. A recent classification by Sharma et al. ( 2019 ) distinguishes three categories of fake news identification methods (content-based, feedback-based and intervention-based), each further divided according to the specific methods used. However, a review of the literature on fake news detection in online social networks shows that existing studies can be classified into broader categories based on two major aspects that most authors inspect and use to define an adequate solution. These aspects can be considered the major sources of information extracted for fake news detection and can be summarized as follows: the content-based aspect (i.e., related to the content of the news post) and the contextual aspect (i.e., related to the context of the news post).

Consequently, the studies we reviewed can be classified into three categories based on the two aspects mentioned above (the third category being hybrid). As depicted in Fig.  5 , fake news detection solutions can be categorized as news content-based approaches, social context-based approaches (which can be divided into network-based and user-based approaches), and hybrid approaches. The latter combine both content-based and contextual information to define the solution.

Fig. 5 Classification of fake news detection approaches

News Content-based Category

News content-based approaches are fake news detection approaches that use content information (i.e., information extracted from the content of the news post) and that focus on studying and exploiting the news content in their proposed solutions. Content refers to the body of the news, including its source, headline, text, images and videos, which can reflect subtle differences between fake and real news.

Researchers in this category rely on content-based detection cues (i.e., text- and multimedia-based cues), which are features extracted from the content of the news post. Text-based cues are features extracted from the text of the news, whereas multimedia-based cues are features extracted from the images and videos attached to the news. Figure  6 summarizes the most widely used news content representations (i.e., text and multimedia/images) and detection techniques (i.e., machine learning (ML), deep learning (DL), natural language processing (NLP), fact-checking, crowdsourcing (CDS) and blockchain (BKC)) in the news content-based category of fake news detection approaches. Most of the reviewed research works based on news content for fake news detection rely on text-based cues (Kapusta et al. 2019 ; Kaur et al. 2020 ; Vereshchaka et al. 2020 ; Ozbay and Alatas 2020 ; Wang 2017 ; Nyow and Chua 2019 ; Hosseinimotlagh and Papalexakis 2018 ; Abdullah-All-Tanvir et al. 2019 , 2020 ; Mahabub 2020 ; Bahad et al. 2019 ; Hiriyannaiah et al. 2020 ) extracted from the text of the news content, including the body of the news and its headline. However, a few researchers, such as Vishwakarma et al. ( 2019 ) and Amri et al. ( 2022 ), try to recognize text from the associated image.
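As a small, hedged illustration of that image-to-text step (the file name below is hypothetical; the Pillow and pytesseract packages, plus a locally installed Tesseract OCR engine, are assumed):

    from PIL import Image
    import pytesseract  # requires the Tesseract OCR engine to be installed

    # Recognize any text embedded in a news image so that text-based cues
    # can also be applied to image posts.
    extracted_text = pytesseract.image_to_string(Image.open("news_post.png"))  # hypothetical file
    print(extracted_text)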

Fig. 6 News content-based category: news content representation and detection techniques

Most researchers in this category rely on artificial intelligence (AI) techniques (such as ML, DL and NLP models) to improve performance in terms of prediction accuracy. Others use different techniques such as fact-checking, crowdsourcing and blockchain. Specifically, the AI- and ML-based approaches in this category extract features from the news content, which they later use for content analysis and training tasks. In this particular case, the extracted features are the different types of information considered to be relevant for the analysis. Feature extraction is considered one of the best techniques to reduce data size in automatic fake news detection; it aims to choose a subset of features from the original set to improve classification performance (Yazdi et al. 2020 ).
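A minimal sketch of such feature extraction and selection, assuming scikit-learn and an invented toy corpus, builds TF-IDF features from the news text and keeps only the most informative ones before classification:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    texts = ["breaking miracle cure doctors hate it",
             "city council approves the new transit budget"]
    labels = [1, 0]  # 1 = fake, 0 = real (toy labels)

    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer(stop_words="english")),  # text -> sparse features
        ("select", SelectKBest(chi2, k=3)),                 # keep the 3 most informative terms
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(texts, labels)
    print(pipeline.predict(["miracle transit cure"]))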

Table  6 lists the distinct features and metadata, as well as the used datasets in the news content-based category of fake news detection approaches.

Table 6 The features and datasets used in the news content-based approaches

a https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

b https://mediabiasfactcheck.com/ , last access date: 26-12-2022

c https://github.com/KaiDMML/FakeNewsNet , last access date: 26-12-2022

d https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

e https://www.cs.ucsb.edu/~william/data/liar_dataset.zip , last access date: 26-12-2022

f https://www.kaggle.com/mrisdal/fake-news , last access date: 26-12-2022

g https://github.com/BuzzFeedNews/2016-10-facebook-fact-check , last access date: 26-12-2022

h https://www.politifact.com/subjects/fake-news/ , last access date: 26-12-2022

i https://www.kaggle.com/rchitic17/real-or-fake , last access date: 26-12-2022

j https://www.kaggle.com/jruvika/fake-news-detection , last access date: 26-12-2022

k https://github.com/MKLab-ITI/image-verification-corpus , last access date: 26-12-2022

l https://drive.google.com/file/d/14VQ7EWPiFeGzxp3XC2DeEHi-BEisDINn/view , last access date: 26-12-2022

Social Context-based Category

Unlike news content-based solutions, the social context-based approaches capture the skeptical social context of the online news (Zhang and Ghorbani 2020 ) rather than focusing on the news content. This category contains fake news detection approaches that use contextual aspects (i.e., information related to the context of the news post). These aspects are based on the social context and offer additional information to help detect fake news: they are the surrounding data outside of the fake news article itself, and they can be an essential part of automatic fake news detection. Some useful examples of contextual information may include checking whether the news itself and the source that published it are credible, checking the date of the news and its supporting resources, and checking whether any other online news platforms are reporting the same or similar stories (Zhang and Ghorbani 2020 ).

Social context-based aspects can be classified into two subcategories, user-based and network-based, and they can be used for context analysis and training tasks in the case of AI- and ML-based approaches. User-based aspects refer to information captured from OSN users such as user profile information (Shu et al. 2019b ; Wang et al. 2019c ; Hamdi et al. 2020 ; Nyow and Chua 2019 ; Jiang et al. 2019 ) and user behavior (Cardaioli et al. 2020 ) such as user engagement (Uppada et al. 2022 ; Jiang et al. 2019 ; Shu et al. 2018b ; Nyow and Chua 2019 ) and response (Zhang et al. 2019a ; Qian et al. 2018 ). Meanwhile, network-based aspects refer to information captured from the properties of the social network where the fake content is shared and disseminated such as news propagation path (Liu and Wu 2018 ; Wu and Liu 2018 ) (e.g., propagation times and temporal characteristics of propagation), diffusion patterns (Shu et al. 2019a ) (e.g., number of retweets, shares), as well as user relationships (Mishra 2020 ; Hamdi et al. 2020 ; Jiang et al. 2019 ) (e.g., friendship status among users).
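Purely as an illustration (every field name below is invented rather than taken from the cited works), such user-based and network-based signals can be flattened into a numeric feature vector for a context-based classifier:

    from dataclasses import dataclass

    @dataclass
    class NewsContext:
        account_age_days: int          # user-based: profile information
        follower_count: int            # user-based: profile information
        is_verified: bool              # user-based: profile information
        num_retweets: int              # network-based: diffusion pattern
        num_replies: int               # user-based: engagement / response
        propagation_depth: int         # network-based: propagation path
        minutes_to_first_share: float  # network-based: temporal characteristic

    def to_feature_vector(ctx):
        # Flatten the contextual signals into one row for a downstream classifier.
        return [
            float(ctx.account_age_days),
            float(ctx.follower_count),
            1.0 if ctx.is_verified else 0.0,
            float(ctx.num_retweets),
            float(ctx.num_replies),
            float(ctx.propagation_depth),
            ctx.minutes_to_first_share,
        ]

    example = NewsContext(12, 40, False, 850, 5, 7, 2.5)
    print(to_feature_vector(example))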

Figure  7 summarizes some of the most widely adopted social context representations, as well as the most used detection techniques (i.e., AI, ML, DL, fact-checking and blockchain), in the social context-based category of approaches.

Fig. 7 Social context-based category: social context representation and detection techniques

Table  7 lists the distinct features and metadata, the adopted detection cues, as well as the used datasets, in the context-based category of fake news detection approaches.

Table 7 The features, detection cues and datasets used in the social context-based approaches

a https://www.dropbox.com/s/7ewzdrbelpmrnxu/rumdetect2017.zip , last access date: 26-12-2022

b https://snap.stanford.edu/data/ego-Twitter.html , last access date: 26-12-2022

Hybrid approaches

Most researchers focus on employing a specific method rather than a combination of both content- and context-based methods. One reason is that some of them (Wu and Rao 2020 ) believe that there are still challenging limitations in traditional fusion strategies due to existing feature correlations and semantic conflicts. For this reason, some researchers focus on extracting content-based information, while others capture social context-based information for their proposed approaches.

However, it has proven challenging to successfully automate fake news detection based on just a single type of feature (Ruchansky et al. 2017 ). Therefore, recent work tends to combine both news content-based and social context-based approaches for fake news detection.

Table  8 lists the distinct features and metadata, as well as the used datasets, in the hybrid category of fake news detection approaches.

Table 8 The features and datasets used in the hybrid approaches

Fake news detection techniques

Another way of classifying automatic fake news detection work is to look at the techniques used in the literature. Hence, we classify the detection methods into three groups according to their techniques:

  • Human-based techniques: This category mainly includes the use of crowdsourcing and fact-checking techniques, which rely on human knowledge to check and validate the veracity of news content.
  • Artificial Intelligence-based techniques: This category includes the most used AI approaches for fake news detection in the literature. Specifically, these are the approaches in which researchers use classical ML, deep learning techniques such as convolutional neural network (CNN), recurrent neural network (RNN), as well as natural language processing (NLP).
  • Blockchain-based techniques: This category includes solutions using blockchain technology to detect and mitigate fake news in social media by checking source reliability and establishing the traceability of the news content.

Human-based Techniques

One specific research direction for fake news detection consists of using human-based techniques such as crowdsourcing (Pennycook and Rand 2019 ; Micallef et al. 2020 ) and fact-checking (Vlachos and Riedel 2014 ; Chung and Kim 2021 ; Nyhan et al. 2020 ) techniques.

These approaches can be considered low-computational-requirement techniques, since both rely on human knowledge and expertise for fake news detection. However, fake news identification cannot be addressed solely through human effort, since it demands a lot of time and cost, and it is ineffective at preventing the fast spread of fake content.

Crowdsourcing. Crowdsourcing approaches (Kim et al. 2018 ) are based on the “wisdom of the crowds” (Collins et al. 2020 ) for fake content detection. These approaches rely on the collective contributions and crowd signals (Tschiatschek et al. 2018 ) of a group of people for the aggregation of crowd intelligence to detect fake news (Tchakounté et al. 2020 ) and to reduce the spread of misinformation on social media (Pennycook and Rand 2019 ; Micallef et al. 2020 ).

Micallef et al. ( 2020 ) highlight the role of the crowd in countering misinformation. They suspect that concerned citizens (i.e., the crowd), who use platforms where disinformation appears, can play a crucial role in spreading fact-checking information and in combating the spread of misinformation.

Recently, Tchakounté et al. ( 2020 ) proposed a voting system as a new method for the binary aggregation of the opinions of the crowd and the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party side.
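A minimal sketch of this kind of binary aggregation (not the authors’ exact system; the weights, scores and the 0.5 threshold below are assumptions) combines majority voting over crowd judgments with a weighted average over third-party expert scores:

    def crowd_majority(votes):
        # votes: list of 0/1 crowd judgments (1 = flagged as fake).
        return 1 if sum(votes) > len(votes) / 2 else 0

    def expert_weighted_average(scores, weights):
        # scores: expert fakeness scores in [0, 1]; weights: expert reliabilities.
        return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

    def aggregate(votes, expert_scores, expert_weights, alpha=0.5):
        # Combine the crowd signal and the expert signal into one binary decision.
        crowd_signal = crowd_majority(votes)
        expert_signal = expert_weighted_average(expert_scores, expert_weights)
        combined = alpha * crowd_signal + (1 - alpha) * expert_signal
        return "fake" if combined >= 0.5 else "credible"

    print(aggregate(votes=[1, 1, 0, 1], expert_scores=[0.8, 0.4], expert_weights=[2.0, 1.0]))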

Similarly, Huffaker et al. ( 2020 ) propose crowdsourced detection of emotionally manipulative language. They introduce an approach that transforms the classification problem into a comparison task, allowing the crowd to detect text that uses manipulative emotional language to sway users toward positions or actions. The proposed system leverages anchor comparison to distinguish between intrinsically emotional content and emotionally manipulative language.

La Barbera et al. ( 2020 ) try to understand how people perceive the truthfulness of information presented to them. They collect data from US-based crowd workers, build a dataset of crowdsourced truthfulness judgments for political statements, and compare it with expert annotation data generated by fact-checkers such as PolitiFact.

Coscia and Rossi ( 2020 ) introduce a crowdsourced system for flagging online news. Their bipolar model of news flagging attempts to capture the main ingredients that they observe in empirical research on fake news and disinformation.

Unlike the previously mentioned researchers who focus on news content in their approaches, Pennycook and Rand ( 2019 ) focus on using crowdsourced judgments of the quality of news sources to combat social media disinformation.

Fact-Checking. The fact-checking task is commonly performed manually by journalists to verify the truthfulness of a given claim. Indeed, fact-checking features are being adopted by multiple online social network platforms. For instance, Facebook 34 started addressing false information through independent fact-checkers in 2017, followed by Google 35 the same year. Two years later, Instagram 36 followed suit. However, the usefulness of fact-checking initiatives is questioned by journalists, 37 as well as by researchers such as Andersen and Søe ( 2020 ). On the other hand, work is being conducted to boost the effectiveness of these initiatives to reduce misinformation (Chung and Kim 2021 ; Clayton et al. 2020 ; Nyhan et al. 2020 ).

Most researchers use fact-checking websites (e.g., politifact.com, 38 snopes.com, 39 Reuters, 40 , etc.) as data sources to build their datasets and train their models. Therefore, in the following, we specifically review examples of solutions that use fact-checking (Vlachos and Riedel 2014 ) to help build datasets that can be further used in the automatic detection of fake content.

Yang et al. ( 2019a ) use the PolitiFact fact-checking website as a data source to train, tune and evaluate their model, named XFake, on political data. The XFake system is an explainable fake news detector that assists end users in assessing news credibility. The fakeness of news items is detected and interpreted by considering both content information (e.g., statements) and contextual information (e.g., the speaker).

Based on the idea that fact-checkers cannot clean all data and must select what “matters the most” to clean while checking a claim, Sintos et al. ( 2019 ) propose a solution to help fact-checkers combat problems related to data quality (where inaccurate data lead to incorrect conclusions) and data phishing. The proposed solution combines data cleaning and perturbation analysis to reduce uncertainties and errors in the data and the possibility that the data have been phished.

Tchechmedjiev et al. ( 2019 ) propose a system named “ClaimsKG”, a knowledge graph of fact-checked claims aiming to facilitate structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata. “ClaimsKG” models the relationships between vocabularies; to gather them, a semi-automated pipeline periodically collects data from popular fact-checking websites.

AI-based Techniques

Previous work by Yaqub et al. ( 2020 ) has shown that people lack trust in automated solutions for fake news detection. However, work is already being undertaken to increase this trust, for instance by von der Weth et al. ( 2020 ).

Most researchers consider fake news detection as a classification problem and use artificial intelligence techniques, as shown in Fig.  8 . The adopted AI techniques may include machine learning (ML) (e.g., Naïve Bayes, logistic regression, support vector machines (SVM)), deep learning (DL) (e.g., convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM) networks) and natural language processing (NLP) (e.g., count vectorizer, TF-IDF vectorizer). Most of them combine several AI techniques in their solutions rather than relying on a single approach.

Fig. 8 Examples of the most widely used AI techniques for fake news detection
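As a concrete, hedged illustration of the classical ML and NLP combination shown in Fig. 8 (toy texts and labels are invented; scikit-learn is assumed), a TF-IDF representation of the news text can feed a Naïve Bayes classifier:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_texts = ["shocking secret cure they do not want you to see",
                   "parliament passes the annual budget after debate"]
    train_labels = ["fake", "real"]  # invented toy labels

    # TF-IDF vectorization (NLP) feeding a Naive Bayes classifier (ML).
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)
    print(model.predict(["secret budget cure"]))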

Many researchers are developing machine learning models in their solutions for fake news detection. Recently, deep neural network techniques have also been employed, as they generate promising results (Islam et al. 2020 ). A neural network is a massively parallel distributed processor with simple units that can store important information and make it available for use (Hiriyannaiah et al. 2020 ). Moreover, it has been shown (Cardoso Durier da Silva et al. 2019 ) that the most widely used method for automatic detection of fake news is not simply a classical machine learning technique, but rather a fusion of classical techniques coordinated by a neural network.

Some researchers define purely machine learning models (Del Vicario et al. 2019 ; Elhadad et al. 2019 ; Aswani et al. 2017 ; Hakak et al. 2021 ; Singh et al. 2021 ) in their fake news detection approaches. The more commonly used machine learning algorithms (Abdullah-All-Tanvir et al. 2019 ) for classification problems are Naïve Bayes, logistic regression and SVM.

Other researchers (Wang et al. 2019c ; Wang 2017 ; Liu and Wu 2018 ; Mishra 2020 ; Qian et al. 2018 ; Zhang et al. 2020 ; Goldani et al. 2021 ) prefer to combine different deep learning models, without mixing them with classical machine learning techniques. Some even show that deep learning techniques outperform traditional machine learning techniques (Mishra et al. 2022 ). Deep learning is one of the most popular research topics in machine learning. Unlike traditional machine learning approaches, which are based on manually crafted features, deep learning approaches can learn hidden representations from simpler inputs, both in context and content variations (Bondielli and Marcelloni 2019 ). Moreover, traditional machine learning algorithms almost always require structured data and are designed to “learn” from labeled data and then apply what they learned to new datasets, which requires human intervention to “teach them” when a result is incorrect (Parrish 2018 ). Deep learning networks, in contrast, rely on layers of artificial neural networks (ANN) and do not require such intervention, since the multilevel layers in neural networks place data in a hierarchy of different concepts and ultimately learn from their own mistakes (Parrish 2018 ). The two most widely implemented paradigms in deep neural networks are recurrent neural networks (RNN) and convolutional neural networks (CNN).
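On the deep learning side, a minimal sketch of the RNN paradigm (assuming PyTorch; the vocabulary size, dimensions and toy batch below are arbitrary) maps token ids to embeddings, runs them through an LSTM and classifies the final hidden state as real or fake:

    import torch
    import torch.nn as nn

    class LSTMFakeNewsClassifier(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, 2)  # two classes: real / fake

        def forward(self, token_ids):
            embedded = self.embedding(token_ids)       # (batch, seq_len, embed_dim)
            _, (last_hidden, _) = self.lstm(embedded)  # last_hidden: (1, batch, hidden_dim)
            return self.classifier(last_hidden.squeeze(0))

    model = LSTMFakeNewsClassifier()
    dummy_batch = torch.randint(0, 1000, (4, 20))  # 4 headlines of 20 token ids each
    logits = model(dummy_batch)
    print(logits.shape)  # torch.Size([4, 2])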

Still other researchers (Abdullah-All-Tanvir et al. 2019 ; Kaliyar et al. 2020 ; Zhang et al. 2019a ; Deepak and Chitturi 2020 ; Shu et al. 2018a ; Wang et al. 2019c ) prefer to combine traditional machine learning and deep learning classification models. Others combine machine learning and natural language processing techniques, and a few combine deep learning models with natural language processing (Vereshchaka et al. 2020 ). Some other researchers (Kapusta et al. 2019 ; Ozbay and Alatas 2020 ; Ahmed et al. 2020 ) combine natural language processing with machine learning models. Furthermore, others (Abdullah-All-Tanvir et al. 2019 ; Kaur et al. 2020 ; Kaliyar 2018 ; Abdullah-All-Tanvir et al. 2020 ; Bahad et al. 2019 ) prefer to combine all the previously mentioned techniques (i.e., ML, DL and NLP) in their approaches.

Table  11 , which is relegated to the Appendix (after the bibliography) because of its size, shows a comparison of the fake news detection solutions that we have reviewed based on their main approaches, the methodology that was used and the models.


Blockchain-based Techniques for Source Reliability and Traceability

Another research direction for detecting and mitigating fake news in social media focuses on using blockchain solutions. Blockchain technology is recently attracting researchers’ attention due to the interesting features it offers. Immutability, decentralization, tamperproof, consensus, record keeping and non-repudiation of transactions are some of the key features that make blockchain technology exploitable, not just for cryptocurrencies, but also to prove the authenticity and integrity of digital assets.
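To make this record-keeping idea concrete, here is a minimal, illustrative hash-chain sketch in plain Python (not a production blockchain and not drawn from any of the reviewed works): each news item is appended as a block whose hash covers the previous block, so later tampering becomes detectable:

    import hashlib
    import json
    import time

    def make_block(content, previous_hash):
        # A block records the content, a timestamp and the hash of the previous block.
        block = {"timestamp": time.time(), "content": content, "previous_hash": previous_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def chain_is_valid(chain):
        # Recompute every hash and check the links between consecutive blocks.
        for prev, curr in zip(chain, chain[1:]):
            body = {k: v for k, v in curr.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if curr["previous_hash"] != prev["hash"] or curr["hash"] != recomputed:
                return False
        return True

    chain = [make_block("genesis", "0")]
    chain.append(make_block("News article v1 published by source A", chain[-1]["hash"]))
    chain.append(make_block("Correction issued by source A", chain[-1]["hash"]))

    chain[1]["content"] = "Silently edited article"  # simulate tampering
    print(chain_is_valid(chain))                     # False: the edit is detectable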

However, the proposed blockchain approaches are few in number and remain largely fundamental and theoretical. Specifically, the solutions that are currently available are still in the research, prototype and beta-testing stages (DiCicco and Agarwal 2020 ; Tchechmedjiev et al. 2019 ). Furthermore, most researchers (Ochoa et al. 2019 ; Song et al. 2019 ; Shang et al. 2018 ; Qayyum et al. 2019 ; Jing and Murugesan 2018 ; Buccafurri et al. 2017 ; Chen et al. 2018 ) do not specify which type of fake news they are mitigating in their studies; they mention news content in general, which is not specific enough for innovative solutions. Serious implementations should therefore be provided to prove the usefulness and feasibility of this newly developing research direction.

Table  9 shows a classification of the reviewed blockchain-based approaches. In the classification, we listed the following:

  • The type of fake news that authors are trying to mitigate, which can be multimedia-based or text-based fake news.
  • The techniques used for fake news mitigation, which can be either blockchain only, or blockchain combined with other techniques such as AI, Data mining, Truth-discovery, Preservation metadata, Semantic similarity, Crowdsourcing, Graph theory and SIR model (Susceptible, Infected, Recovered).
  • The feature that is offered as an advantage of the given solution (e.g., Reliability, Authenticity and Traceability). Reliability is the credibility and truthfulness of the news content, which consists of proving the trustworthiness of the content. Traceability aims to trace and archive the contents. Authenticity consists of checking whether the content is real and authentic.

A checkmark ( ✓ ) in Table  9 denotes that the criterion is explicitly addressed in the proposed solution, while a dash (–) denotes that it depends on the case: the criterion was either not explicitly mentioned in the work (e.g., the fake news type) or the classification does not apply (e.g., techniques/other).

Table 9 A classification of popular blockchain-based approaches for fake news detection in social media

After reviewing the most relevant state of the art for automatic fake news detection, we classify them as shown in Table  10 based on the detection aspects (i.e., content-based, contextual, or hybrid aspects) and the techniques used (i.e., AI, crowdsourcing, fact-checking, blockchain or hybrid techniques). Hybrid techniques refer to solutions that simultaneously combine different techniques from previously mentioned categories (i.e., inter-hybrid methods), as well as techniques within the same class of methods (i.e., intra-hybrid methods), in order to define innovative solutions for fake news detection. A hybrid method should bring the best of both worlds. Then, we provide a discussion based on different axes.

Table 10 Fake news detection approaches classification

News content-based methods

Most of the news content-based approaches consider fake news detection as a classification problem and use AI techniques such as classical machine learning (e.g., regression, Bayesian) as well as deep learning (i.e., neural methods such as CNN and RNN). More specifically, classification of social media content is a fundamental task for social media mining, so most existing methods regard it as a text categorization problem and mainly focus on using content features, such as words and hashtags (Wu and Liu 2018 ). The main challenges facing these approaches are how to extract features in a way that reduces the amount of data needed to train the models, and which features are the most suitable for accurate results.

Researchers using such approaches are motivated by the fact that the news content is the main entity in the deception process, and it is a straightforward factor to analyze and use while looking for predictive clues of deception. However, detecting fake news only from the content of the news is not enough because the news is created in a strategic intentional way to mimic the truth (i.e., the content can be intentionally manipulated by the spreader to make it look like real news). Therefore, it is considered to be challenging, if not impossible, to identify useful features (Wu and Liu 2018 ) and consequently tell the nature of such news solely from the content.

Moreover, works that utilize only the news content for fake news detection ignore the rich information and latent user intelligence (Qian et al. 2018 ) stored in user responses toward previously disseminated articles. Therefore, the auxiliary information is deemed crucial for an effective fake news detection approach.

Social context-based methods

The context-based approaches explore the surrounding data outside of the news content, which can be an effective direction and has some advantages in areas where the content approaches based on text classification can run into issues. However, most existing studies implementing contextual methods mainly focus on additional information coming from users and network diffusion patterns. Moreover, from a technical perspective, they are limited to the use of sophisticated machine learning techniques for feature extraction, and they ignore the usefulness of results coming from techniques such as web search and crowdsourcing which may save much time and help in the early detection and identification of fake content.

Hybrid approaches can simultaneously model different aspects of fake news, such as the content-based aspects and the contextual aspects based on both OSN users and OSN network patterns. However, these approaches are deemed more complex in terms of models (Bondielli and Marcelloni 2019 ), data availability and the number of features. Furthermore, it remains difficult to decide which information within each category (i.e., content-based and context-based information) is most suitable and appropriate for achieving accurate and precise results. Therefore, there are still very few studies belonging to this category of hybrid approaches.

Early detection

As fake news usually evolves and spreads very fast on social media, it is critical and urgent to consider early detection directions. Yet this is a challenging task, especially on highly dynamic platforms such as social networks. Both news content-based and social context-based approaches suffer from this challenge of early fake news detection.

Although approaches that detect fake news based on content analysis face this issue less, they are still limited by the lack of information required for verification when the news is in its early stage of spread. Approaches that detect fake news based on contextual analysis are even more likely to suffer from this limitation, since most of them rely on information that only becomes available after the spread of fake content, such as social engagement, user responses and propagation patterns. Therefore, it is crucial to consider both trusted human verification and historical data in an attempt to detect fake content during its early stage of propagation.

Conclusion and future directions

In this paper, we introduced the general context of the fake news problem as one of the major issues of the online deception problem in online social networks. Based on a review of the most relevant state of the art, we summarized and classified existing definitions of fake news, as well as its related terms. We also listed various typologies and existing categorizations of fake news, such as intent-based fake news (including clickbait, hoaxes, rumors, satire, propaganda, conspiracy theories and framing) and content-based fake news (including text-based and multimedia-based fake news, the latter covering deepfake videos and GAN-generated fake images). We discussed the major challenges related to fake news detection and mitigation in social media, including the deceptive nature of fabricated content, the lack of human awareness in the field of fake news, the issue of non-human spreaders (e.g., social bots), the dynamic nature of online platforms, which results in fast propagation of fake content, and the quality of existing datasets, which still limits the efficiency of the proposed solutions. We reviewed existing researchers’ visions regarding the automatic detection of fake news based on the adopted approaches (i.e., news content-based approaches, social context-based approaches, or hybrid approaches) and the techniques that are used (i.e., artificial intelligence-based methods; crowdsourcing, fact-checking, and blockchain-based methods; and hybrid methods), and we presented a comparative study of the reviewed works. We also provided a critical discussion of the reviewed approaches based on different axes, such as the adopted aspect for fake news detection (i.e., content-based, contextual, and hybrid aspects) and the early detection perspective.

To conclude, we present the main issues in combating the fake news problem that need to be further investigated when proposing new detection approaches. We believe that to define an efficient fake news detection approach, we need to consider the following:

  • Our choice of sources of information and search criteria may have introduced biases in our research. If so, it would be desirable to identify those biases and mitigate them.
  • News content is the fundamental source to find clues to distinguish fake from real content. However, contextual information derived from social media users and from the network can provide useful auxiliary information to increase detection accuracy. Specifically, capturing users’ characteristics and users’ behavior toward shared content can be a key task for fake news detection.
  • Moreover, capturing users’ historical behavior, including their emotions and/or opinions toward news content, can help in the early detection and mitigation of fake news.
  • Furthermore, adversarial learning techniques (e.g., GAN, SeqGAN) can be considered as a promising direction for mitigating the lack and scarcity of available datasets by providing machine-generated data that can be used to train and build robust systems to detect the fake examples from the real ones.
  • Lastly, analyzing how sources and promoters of fake news operate over the web through multiple online platforms is crucial; Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to valid information (11%).

Appendix: A Comparison of AI-based fake news detection techniques

This Appendix consists only of the rather long Table  11 . It shows a comparison of the fake news detection solutions based on artificial intelligence that we have reviewed according to their main approaches, the methodology that was used, and the models, as explained in Sect.  6.2.2 .

Author Contributions

The order of authors is alphabetic as is customary in the third author’s field. The lead author was Sabrine Amri, who collected and analyzed the data and wrote a first draft of the paper, all along under the supervision and tight guidance of Esma Aïmeur. Gilles Brassard reviewed, criticized and polished the work into its final form.

This work is supported in part by Canada’s Natural Sciences and Engineering Research Council.

Availability of data and material

Declarations.

On behalf of all authors, the corresponding author states that there is no conflict of interest.

1 https://www.nationalacademies.org/news/2021/07/as-surgeon-general-urges-whole-of-society-effort-to-fight-health-misinformation-the-work-of-the-national-academies-helps-foster-an-evidence-based-information-environment , last access date: 26-12-2022.

2 https://time.com/4897819/elvis-presley-alive-conspiracy-theories/ , last access date: 26-12-2022.

3 https://www.therichest.com/shocking/the-evidence-15-reasons-people-think-the-earth-is-flat/ , last access date: 26-12-2022.

4 https://www.grunge.com/657584/the-truth-about-1952s-alien-invasion-of-washington-dc/ , last access date: 26-12-2022.

5 https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/ , last access date: 26-12-2022.

6 https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/ , last access date: 26-12-2022.

7 https://www.buzzfeednews.com/article/janelytvynenko/coronavirus-fake-news-disinformation-rumors-hoaxes , last access date: 26-12-2022.

8 https://www.factcheck.org/2020/03/viral-social-media-posts-offer-false-coronavirus-tips/ , last access date: 26-12-2022.

9 https://www.factcheck.org/2020/02/fake-coronavirus-cures-part-2-garlic-isnt-a-cure/ , last access date: 26-12-2022.

10 https://www.bbc.com/news/uk-36528256 , last access date: 26-12-2022.

11 https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory , last access date: 26-12-2022.

12 https://www.theguardian.com/world/2017/jan/09/germany-investigating-spread-fake-news-online-russia-election , last access date: 26-12-2022.

13 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2016 , last access date: 26-12-2022.

14 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2018 , last access date: 26-12-2022.

15 https://apnews.com/article/47466c5e260149b1a23641b9e319fda6 , last access date: 26-12-2022.

16 https://blog.collinsdictionary.com/language-lovers/collins-2017-word-of-the-year-shortlist/ , last access date: 26-12-2022.

17 https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/ , last access date: 26-12-2022.

18 https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/ , last access date: 26-12-2022.

19 https://scholar.google.ca/ , last access date: 26-12-2022.

20 https://ieeexplore.ieee.org/ , last access date: 26-12-2022.

21 https://link.springer.com/ , last access date: 26-12-2022.

22 https://www.sciencedirect.com/ , last access date: 26-12-2022.

23 https://www.scopus.com/ , last access date: 26-12-2022.

24 https://www.acm.org/digital-library , last access date: 26-12-2022.

25 https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 , last access date: 26-12-2022.

26 https://en.wikipedia.org/wiki/Trial_of_Socrates , last access date: 26-12-2022.

27 https://trends.google.com/trends/explore?hl=en-US&tz=-180&date=2013-12-06+2018-01-06&geo=US&q=fake+news&sni=3 , last access date: 26-12-2022.

28 https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation , last access date: 26-12-2022.

29 https://www.nato.int/cps/en/natohq/177273.htm , last access date: 26-12-2022.

30 https://www.collinsdictionary.com/dictionary/english/fake-news , last access date: 26-12-2022.

31 https://www.statista.com/statistics/657111/fake-news-sharing-online/ , last access date: 26-12-2022.

32 https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ , last access date: 26-12-2022.

33 https://www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731 , last access date: 26-12-2022.

34 https://www.theguardian.com/technology/2017/mar/22/facebook-fact-checking-tool-fake-news , last access date: 26-12-2022.

35 https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false , last access date: 26-12-2022.

36 https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram , last access date: 26-12-2022.

37 https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/ , last access date: 26-12-2022.

38 https://www.politifact.com/ , last access date: 26-12-2022.

39 https://www.snopes.com/ , last access date: 26-12-2022.

40 https://www.reutersagency.com/en/ , last access date: 26-12-2022.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Esma Aïmeur, Email: aimeur@iro.umontreal.ca.

Sabrine Amri, Email: [email protected] .

Gilles Brassard, Email: brassard@iro.umontreal.ca.

  • Abdullah-All-Tanvir, Mahir EM, Akhter S, Huq MR (2019) Detecting fake news using machine learning and deep learning algorithms. In: 7th international conference on smart computing and communications (ICSCC), IEEE, pp 1–5 10.1109/ICSCC.2019.8843612
  • Abdullah-All-Tanvir, Mahir EM, Huda SMA, Barua S (2020) A hybrid approach for identifying authentic news using deep learning methods on popular Twitter threads. In: International conference on artificial intelligence and signal processing (AISP), IEEE, pp 1–6 10.1109/AISP48273.2020.9073583
  • Abu Arqoub O, Abdulateef Elega A, Efe Özad B, Dwikat H, Adedamola Oloyede F. Mapping the scholarship of fake news research: a systematic review. J Pract. 2022; 16 (1):56–86. doi: 10.1080/17512786.2020.1805791. [ CrossRef ] [ Google Scholar ]
  • Ahmed S, Hinkelmann K, Corradini F. Development of fake news model using machine learning through natural language processing. Int J Comput Inf Eng. 2020; 14 (12):454–460. [ Google Scholar ]
  • Aïmeur E, Brassard G, Rioux J. Data privacy: an end-user perspective. Int J Comput Netw Commun Secur. 2013; 1 (6):237–250. [ Google Scholar ]
  • Aïmeur E, Hage H, Amri S (2018) The scourge of online deception in social networks. In: 2018 international conference on computational science and computational intelligence (CSCI), IEEE, pp 1266–1271 10.1109/CSCI46756.2018.00244
  • Alemanno A. How to counter fake news? A taxonomy of anti-fake news approaches. Eur J Risk Regul. 2018; 9 (1):1–5. doi: 10.1017/err.2018.12. [ CrossRef ] [ Google Scholar ]
  • Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect. 2017; 31 (2):211–36. doi: 10.1257/jep.31.2.211. [ CrossRef ] [ Google Scholar ]
  • Allen J, Howland B, Mobius M, Rothschild D, Watts DJ. Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv. 2020 doi: 10.1126/sciadv.aay3539. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Allington D, Duffy B, Wessely S, Dhavan N, Rubin J. Health-protective behaviour, social media usage and conspiracy belief during the Covid-19 public health emergency. Psychol Med. 2020 doi: 10.1017/S003329172000224X. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Alonso-Galbán P, Alemañy-Castilla C (2022) Curbing misinformation and disinformation in the Covid-19 era: a view from cuba. MEDICC Rev 22:45–46 10.37757/MR2020.V22.N2.12 [ PubMed ] [ CrossRef ]
  • Altay S, Hacquin AS, Mercier H. Why do so few people share fake news? It hurts their reputation. New Media Soc. 2022; 24 (6):1303–1324. doi: 10.1177/1461444820969893. [ CrossRef ] [ Google Scholar ]
  • Amri S, Sallami D, Aïmeur E (2022) Exmulf: an explainable multimodal content-based fake news detection system. In: International symposium on foundations and practice of security. Springer, Berlin, pp 177–187. 10.1109/IJCNN48605.2020.9206973
  • Andersen J, Søe SO. Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news-the case of Facebook. Eur J Commun. 2020; 35 (2):126–139. doi: 10.1177/0267323119894489. [ CrossRef ] [ Google Scholar ]
  • Apuke OD, Omar B. Fake news and Covid-19: modelling the predictors of fake news sharing among social media users. Telematics Inform. 2021; 56 :101475. doi: 10.1016/j.tele.2020.101475. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Apuke OD, Omar B, Tunca EA, Gever CV. The effect of visual multimedia instructions against fake news spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. 2022 doi: 10.1177/09610006221096477. [ CrossRef ] [ Google Scholar ]
  • Aswani R, Ghrera S, Kar AK, Chandra S. Identifying buzz in social media: a hybrid approach using artificial bee colony and k-nearest neighbors for outlier detection. Soc Netw Anal Min. 2017; 7 (1):1–10. doi: 10.1007/s13278-017-0461-2. [ CrossRef ] [ Google Scholar ]
  • Avram M, Micallef N, Patil S, Menczer F (2020) Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arxiv:2005.04682 , 10.37016/mr-2020-033
  • Badawy A, Lerman K, Ferrara E (2019) Who falls for online political manipulation? In: Companion proceedings of the 2019 world wide web conference, pp 162–168 10.1145/3308560.3316494
  • Bahad P, Saxena P, Kamal R. Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput Sci. 2019; 165 :74–82. doi: 10.1016/j.procs.2020.01.072. [ CrossRef ] [ Google Scholar ]
  • Bakdash J, Sample C, Rankin M, Kantarcioglu M, Holmes J, Kase S, Zaroukian E, Szymanski B (2018) The future of deception: machine-generated and manipulated images, video, and audio? In: 2018 international workshop on social sensing (SocialSens), IEEE, pp 2–2 10.1109/SocialSens.2018.00009
  • Balmas M. When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun Res. 2014; 41 (3):430–454. doi: 10.1177/0093650212453600. [ CrossRef ] [ Google Scholar ]
  • Baptista JP, Gradim A. Understanding fake news consumption: a review. Soc Sci. 2020 doi: 10.3390/socsci9100185. [ CrossRef ] [ Google Scholar ]
  • Baptista JP, Gradim A. A working definition of fake news. Encyclopedia. 2022; 2 (1):632–645. doi: 10.3390/encyclopedia2010043. [ CrossRef ] [ Google Scholar ]
  • Bastick Z. Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Comput Hum Behav. 2021; 116 :106633. doi: 10.1016/j.chb.2020.106633. [ CrossRef ] [ Google Scholar ]
  • Batailler C, Brannon SM, Teas PE, Gawronski B. A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci. 2022; 17 (1):78–98. doi: 10.1177/1745691620986135. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bessi A, Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7). 10.5210/fm.v21i11.7090
  • Bhattacharjee A, Shu K, Gao M, Liu H (2020) Disinformation in the online information ecosystem: detection, mitigation and challenges. arXiv preprint arXiv:2010.09113
  • Bhuiyan MM, Zhang AX, Sehat CM, Mitra T. Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria. Proc ACM Hum Comput Interact. 2020; 4 (CSCW2):1–26. doi: 10.1145/3415164. [ CrossRef ] [ Google Scholar ]
  • Bode L, Vraga EK. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun. 2015; 65 (4):619–638. doi: 10.1111/jcom.12166. [ CrossRef ] [ Google Scholar ]
  • Bondielli A, Marcelloni F. A survey on fake news and rumour detection techniques. Inf Sci. 2019; 497 :38–55. doi: 10.1016/j.ins.2019.05.035. [ CrossRef ] [ Google Scholar ]
  • Bovet A, Makse HA. Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun. 2019; 10 (1):1–14. doi: 10.1038/s41467-018-07761-2. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci. 2021 doi: 10.1073/pnas.2020043118. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brewer PR, Young DG, Morreale M. The impact of real news about “fake news”: intertextual processes and political satire. Int J Public Opin Res. 2013; 25 (3):323–343. doi: 10.1093/ijpor/edt015. [ CrossRef ] [ Google Scholar ]
  • Bringula RP, Catacutan-Bangit AE, Garcia MB, Gonzales JPS, Valderama AMC. “Who is gullible to political disinformation?” Predicting susceptibility of university students to fake news. J Inf Technol Polit. 2022; 19 (2):165–179. doi: 10.1080/19331681.2021.1945988. [ CrossRef ] [ Google Scholar ]
  • Buccafurri F, Lax G, Nicolazzo S, Nocera A (2017) Tweetchain: an alternative to blockchain for crowd-based applications. In: International conference on web engineering, Springer, Berlin, pp 386–393. 10.1007/978-3-319-60131-1_24
  • Burshtein S. The true story on fake news. Intell Prop J. 2017; 29 (3):397–446. [ Google Scholar ]
  • Cardaioli M, Cecconello S, Conti M, Pajola L, Turrin F (2020) Fake news spreaders profiling through behavioural analysis. In: CLEF (working notes)
  • Cardoso Durier da Silva F, Vieira R, Garcia AC (2019) Can machines learn to detect fake news? A survey focused on social media. In: Proceedings of the 52nd Hawaii international conference on system sciences. 10.24251/HICSS.2019.332
  • Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Intern Policy Rev 9(2):1–22 10.14763/2020.2.1481
  • Celliers M, Hattingh M (2020) A systematic review on fake news themes reported in literature. In: Conference on e-Business, e-Services and e-Society. Springer, Berlin, pp 223–234. 10.1007/978-3-030-45002-1_19
  • Chen Y, Li Q, Wang H (2018) Towards trusted social networks with blockchain technology. arXiv preprint arXiv:1801.02796
  • Cheng L, Guo R, Shu K, Liu H (2020) Towards causal understanding of fake news dissemination. arXiv preprint arXiv:2010.10580
  • Chiu MM, Oh YW. How fake news differs from personal lies. Am Behav Sci. 2021; 65 (2):243–258. doi: 10.1177/0002764220910243. [ CrossRef ] [ Google Scholar ]
  • Chung M, Kim N. When I learn the news is false: how fact-checking information stems the spread of fake news via third-person perception. Hum Commun Res. 2021; 47 (1):1–24. doi: 10.1093/hcr/hqaa010. [ CrossRef ] [ Google Scholar ]
  • Clarke J, Chen H, Du D, Hu YJ. Fake news, investor attention, and market reaction. Inf Syst Res. 2020 doi: 10.1287/isre.2019.0910. [ CrossRef ] [ Google Scholar ]
  • Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E, et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav. 2020; 42 (4):1073–1095. doi: 10.1007/s11109-019-09533-0. [ CrossRef ] [ Google Scholar ]
  • Collins B, Hoang DT, Nguyen NT, Hwang D (2020) Fake news types and detection models on social media a state-of-the-art survey. In: Asian conference on intelligent information and database systems. Springer, Berlin, pp 562–573 10.1007/978-981-15-3380-8_49
  • Conroy NK, Rubin VL, Chen Y. Automatic deception detection: methods for finding fake news. Proc Assoc Inf Sci Technol. 2015; 52 (1):1–4. doi: 10.1002/pra2.2015.145052010082. [ CrossRef ] [ Google Scholar ]
  • Cooke NA. Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. Libr Q. 2017; 87 (3):211–221. doi: 10.1086/692298. [ CrossRef ] [ Google Scholar ]
  • Coscia M, Rossi L. Distortions of political bias in crowdsourced misinformation flagging. J R Soc Interface. 2020; 17 (167):20200020. doi: 10.1098/rsif.2020.0020. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dame Adjin-Tettey T. Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human. 2022; 9 (1):2037229. doi: 10.1080/23311983.2022.2037229. [ CrossRef ] [ Google Scholar ]
  • Deepak S, Chitturi B. Deep neural approach to fake-news identification. Procedia Comput Sci. 2020; 167 :2236–2243. doi: 10.1016/j.procs.2020.03.276. [ CrossRef ] [ Google Scholar ]
  • de Cock Buning M (2018) A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Publications Office of the European Union
  • Del Vicario M, Quattrociocchi W, Scala A, Zollo F. Polarization and fake news: early warning of potential misinformation targets. ACM Trans Web (TWEB) 2019; 13 (2):1–22. doi: 10.1145/3316809. [ CrossRef ] [ Google Scholar ]
  • Demuyakor J, Opata EM. Fake news on social media: predicting which media format influences fake news most on facebook. J Intell Commun. 2022 doi: 10.54963/jic.v2i1.56. [ CrossRef ] [ Google Scholar ]
  • Derakhshan H, Wardle C (2017) Information disorder: definitions. In: Understanding and addressing the disinformation ecosystem, pp 5–12
  • Desai AN, Ruidera D, Steinbrink JM, Granwehr B, Lee DH. Misinformation and disinformation: the potential disadvantages of social media in infectious disease and how to combat them. Clin Infect Dis. 2022; 74 (Supplement–3):e34–e39. doi: 10.1093/cid/ciac109. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Di Domenico G, Sit J, Ishizaka A, Nunan D. Fake news, social media and marketing: a systematic review. J Bus Res. 2021; 124 :329–341. doi: 10.1016/j.jbusres.2020.11.037. [ CrossRef ] [ Google Scholar ]
  • Dias N, Pennycook G, Rand DG. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv Kennedy School Misinform Rev. 2020 doi: 10.37016/mr-2020-001. [ CrossRef ] [ Google Scholar ]
  • DiCicco KW, Agarwal N (2020) Blockchain technology-based solutions to fight misinformation: a survey. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 267–281, 10.1007/978-3-030-42699-6_14
  • Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, Deravi F. Understanding conspiracy theories. Polit Psychol. 2019; 40 :3–35. doi: 10.1111/pops.12568. [ CrossRef ] [ Google Scholar ]
  • Edgerly S, Mourão RR, Thorson E, Tham SM. When do audiences verify? How perceptions about message and source influence audience verification of news headlines. J Mass Commun Q. 2020; 97 (1):52–71. doi: 10.1177/1077699019864680. [ CrossRef ] [ Google Scholar ]
  • Egelhofer JL, Lecheler S. Fake news as a two-dimensional phenomenon: a framework and research agenda. Ann Int Commun Assoc. 2019; 43 (2):97–116. doi: 10.1080/23808985.2019.1602782. [ CrossRef ] [ Google Scholar ]
  • Elhadad MK, Li KF, Gebali F (2019) A novel approach for selecting hybrid features from online news textual metadata for fake news detection. In: International conference on p2p, parallel, grid, cloud and internet computing. Springer, Berlin, pp 914–925, 10.1007/978-3-030-33509-0_86
  • ERGA (2018) Fake news, and the information disorder. European Broadcasting Union (EBU)
  • ERGA (2021) Notions of disinformation and related concepts. European Regulators Group for Audiovisual Media Services (ERGA)
  • Escolà-Gascón Á. New techniques to measure lie detection using Covid-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2) Comput Hum Behav Rep. 2021; 3 :100049. doi: 10.1016/j.chbr.2020.100049. [ CrossRef ] [ Google Scholar ]
  • Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv Kennedy School Misinformation Rev. 2020 doi: 10.37016/mr-2020-009. [ CrossRef ] [ Google Scholar ]
  • Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016; 59 (7):96–104. doi: 10.1145/2818717. [ CrossRef ] [ Google Scholar ]
  • Flynn D, Nyhan B, Reifler J. The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol. 2017; 38 :127–150. doi: 10.1111/pops.12394. [ CrossRef ] [ Google Scholar ]
  • Fraga-Lamas P, Fernández-Caramés TM. Fake news, disinformation, and deepfakes: leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020; 22 (2):53–59. doi: 10.1109/MITP.2020.2977589. [ CrossRef ] [ Google Scholar ]
  • Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, Jenner L, Teale AL, Carr L, Mulhall S, et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. 2020 doi: 10.1017/S0033291720001890. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Friggeri A, Adamic L, Eckles D, Cheng J (2014) Rumor cascades. In: Proceedings of the international AAAI conference on web and social media
  • García SA, García GG, Prieto MS, Moreno Guerrero AJ, Rodríguez Jiménez C. The impact of term fake news on the scientific community. Scientific performance and mapping in web of science. Soc Sci. 2020 doi: 10.3390/socsci9050073. [ CrossRef ] [ Google Scholar ]
  • Garrett RK, Bond RM. Conservatives’ susceptibility to political misperceptions. Sci Adv. 2021 doi: 10.1126/sciadv.abf1234. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Giachanou A, Ríssola EA, Ghanem B, Crestani F, Rosso P (2020) The role of personality and linguistic patterns in discriminating between fake news spreaders and fact checkers. In: International conference on applications of natural language to information systems. Springer, Berlin, pp 181–192 10.1007/978-3-030-51310-8_17
  • Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, Buntain C, Chanduka R, Cheakalos P, Everett JB et al (2018) Fake news vs satire: a dataset and analysis. In: Proceedings of the 10th ACM conference on web science, pp 17–21, 10.1145/3201064.3201100
  • Goldani MH, Momtazi S, Safabakhsh R. Detecting fake news with capsule neural networks. Appl Soft Comput. 2021; 101 :106991. doi: 10.1016/j.asoc.2020.106991. [ CrossRef ] [ Google Scholar ]
  • Goldstein I, Yang L. Good disclosure, bad disclosure. J Financ Econ. 2019; 131 (1):118–138. doi: 10.1016/j.jfineco.2018.08.004. [ CrossRef ] [ Google Scholar ]
  • Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 US presidential election. Science. 2019; 363 (6425):374–378. doi: 10.1126/science.aau2706. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Guadagno RE, Guttieri K (2021) Fake news and information warfare: an examination of the political and psychological processes from the digital sphere to the real world. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 218–242 10.4018/978-1-7998-7291-7.ch013
  • Guess A, Nagler J, Tucker J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. 2019 doi: 10.1126/sciadv.aau4586. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Guo C, Cao J, Zhang X, Shu K, Yu M (2019) Exploiting emotions for fake news detection on social media. arXiv preprint arXiv:1903.01728
  • Guo B, Ding Y, Yao L, Liang Y, Yu Z. The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR) 2020; 53 (4):1–36. doi: 10.1145/3393880. [ CrossRef ] [ Google Scholar ]
  • Gupta A, Li H, Farnoush A, Jiang W. Understanding patterns of covid infodemic: a systematic and pragmatic approach to curb fake news. J Bus Res. 2022; 140 :670–683. doi: 10.1016/j.jbusres.2021.11.032. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ha L, Andreu Perez L, Ray R. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am Behav Sci. 2021; 65 (2):290–315. doi: 10.1177/0002764219869402. [ CrossRef ] [ Google Scholar ]
  • Habib A, Asghar MZ, Khan A, Habib A, Khan A. False information detection in online content and its role in decision making: a systematic literature review. Soc Netw Anal Min. 2019; 9 (1):1–20. doi: 10.1007/s13278-019-0595-5. [ CrossRef ] [ Google Scholar ]
  • Hage H, Aïmeur E, Guedidi A (2021) Understanding the landscape of online deception. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 39–66. 10.4018/978-1-7998-2543-2.ch014
  • Hakak S, Alazab M, Khan S, Gadekallu TR, Maddikunta PKR, Khan WZ. An ensemble machine learning approach through effective feature extraction to classify fake news. Futur Gener Comput Syst. 2021; 117 :47–58. doi: 10.1016/j.future.2020.11.022. [ CrossRef ] [ Google Scholar ]
  • Hamdi T, Slimi H, Bounhas I, Slimani Y (2020) A hybrid approach for fake news detection in Twitter based on user features and graph embedding. In: International conference on distributed computing and internet technology. Springer, Berlin, pp 266–280. 10.1007/978-3-030-36987-3_17
  • Hameleers M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the us and netherlands. Inf Commun Soc. 2022; 25 (1):110–126. doi: 10.1080/1369118X.2020.1764603. [ CrossRef ] [ Google Scholar ]
  • Hameleers M, Powell TE, Van Der Meer TG, Bos L. A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun. 2020; 37 (2):281–301. doi: 10.1080/10584609.2019.1674979. [ CrossRef ] [ Google Scholar ]
  • Hameleers M, Brosius A, de Vreese CH. Whom to trust? media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. Eur J Commun. 2022 doi: 10.1177/02673231211072667. [ CrossRef ] [ Google Scholar ]
  • Hartley K, Vu MK. Fighting fake news in the Covid-19 era: policy insights from an equilibrium model. Policy Sci. 2020; 53 (4):735–758. doi: 10.1007/s11077-020-09405-z. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hasan HR, Salah K. Combating deepfake videos using blockchain and smart contracts. IEEE Access. 2019; 7 :41596–41606. doi: 10.1109/ACCESS.2019.2905689. [ CrossRef ] [ Google Scholar ]
  • Hiriyannaiah S, Srinivas A, Shetty GK, Siddesh G, Srinivasa K (2020) A computationally intelligent agent for detecting fake news using generative adversarial networks. Hybrid computational intelligence: challenges and applications. pp 69–96 10.1016/B978-0-12-818699-2.00004-4
  • Hosseinimotlagh S, Papalexakis EE (2018) Unsupervised content-based identification of fake news articles with tensor decomposition ensembles. In: Proceedings of the workshop on misinformation and misbehavior mining on the web (MIS2)
  • Huckle S, White M. Fake news: a technological approach to proving the origins of content, using blockchains. Big Data. 2017; 5 (4):356–371. doi: 10.1089/big.2017.0071. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Huffaker JS, Kummerfeld JK, Lasecki WS, Ackerman MS (2020) Crowdsourced detection of emotionally manipulative language. In: Proceedings of the 2020 CHI conference on human factors in computing systems. pp 1–14 10.1145/3313831.3376375
  • Ireton C, Posetti J. Journalism, fake news & disinformation: handbook for journalism education and training. Paris: UNESCO Publishing; 2018. [ Google Scholar ]
  • Islam MR, Liu S, Wang X, Xu G. Deep learning for misinformation detection on online social networks: a survey and new perspectives. Soc Netw Anal Min. 2020; 10 (1):1–20. doi: 10.1007/s13278-020-00696-x. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ismailov M, Tsikerdekis M, Zeadally S. Vulnerabilities to online social network identity deception detection research and recommendations for mitigation. Fut Internet. 2020; 12 (9):148. doi: 10.3390/fi12090148. [ CrossRef ] [ Google Scholar ]
  • Jakesch M, Koren M, Evtushenko A, Naaman M (2019) The role of source and expressive responding in political news evaluation. In: Computation and journalism symposium
  • Jamieson KH. Cyberwar: how Russian hackers and trolls helped elect a president: what we don’t, can’t, and do know. Oxford: Oxford University Press; 2020. [ Google Scholar ]
  • Jiang S, Chen X, Zhang L, Chen S, Liu H (2019) User-characteristic enhanced model for fake news detection in social media. In: CCF International conference on natural language processing and Chinese computing, Springer, Berlin, pp 634–646. 10.1007/978-3-030-32233-5_49
  • Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: Proceedings of the AAAI conference on artificial intelligence
  • Jing TW, Murugesan RK (2018) A theoretical framework to build trust and prevent fake news in social media using blockchain. In: International conference of reliable information and communication technology. Springer, Berlin, pp 955–962, 10.1007/978-3-319-99007-1_88
  • Jones-Jang SM, Mortensen T, Liu J. Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci. 2021; 65 (2):371–388. doi: 10.1177/0002764219869406. [ CrossRef ] [ Google Scholar ]
  • Jungherr A, Schroeder R. Disinformation and the structural transformations of the public arena: addressing the actual challenges to democracy. Soc Media Soc. 2021 doi: 10.1177/2056305121988928. [ CrossRef ] [ Google Scholar ]
  • Kaliyar RK (2018) Fake news detection using a deep neural network. In: 2018 4th international conference on computing communication and automation (ICCCA), IEEE, pp 1–7 10.1109/CCAA.2018.8777343
  • Kaliyar RK, Goswami A, Narang P, Sinha S. Fndnet—a deep convolutional neural network for fake news detection. Cogn Syst Res. 2020; 61 :32–44. doi: 10.1016/j.cogsys.2019.12.005. [ CrossRef ] [ Google Scholar ]
  • Kapantai E, Christopoulou A, Berberidis C, Peristeras V. A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc. 2021; 23 (5):1301–1326. doi: 10.1177/1461444820959296. [ CrossRef ] [ Google Scholar ]
  • Kapusta J, Benko L, Munk M (2019) Fake news identification based on sentiment and frequency analysis. In: International conference Europe middle east and North Africa information systems and technologies to support learning. Springer, Berlin, pp 400–409, 10.1007/978-3-030-36778-7_44
  • Kaur S, Kumar P, Kumaraguru P. Automating fake news detection system using multi-level voting model. Soft Comput. 2020; 24 (12):9049–9069. doi: 10.1007/s00500-019-04436-y. [ CrossRef ] [ Google Scholar ]
  • Khan SA, Alkawaz MH, Zangana HM (2019) The use and abuse of social media for spreading fake news. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), IEEE, pp 145–148. 10.1109/I2CACIS.2019.8825029
  • Kim J, Tabibian B, Oh A, Schölkopf B, Gomez-Rodriguez M (2018) Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 324–332. 10.1145/3159652.3159734
  • Klein D, Wueller J. Fake news: a legal perspective. J Internet Law. 2017; 20 (10):5–13. [ Google Scholar ]
  • Kogan S, Moskowitz TJ, Niessner M (2019) Fake news: evidence from financial markets. Available at SSRN 3237763
  • Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF. Misinformation and the currency of democratic citizenship. J Polit. 2000; 62 (3):790–816. doi: 10.1111/0022-3816.00033. [ CrossRef ] [ Google Scholar ]
  • Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559
  • Kumar S, West R, Leskovec J (2016) Disinformation on the web: impact, characteristics, and detection of Wikipedia hoaxes. In: Proceedings of the 25th international conference on world wide web, pp 591–602. 10.1145/2872427.2883085
  • La Barbera D, Roitero K, Demartini G, Mizzaro S, Spina D (2020) Crowdsourcing truthfulness: the impact of judgment scale and assessor bias. In: European conference on information retrieval. Springer, Berlin, pp 207–214. 10.1007/978-3-030-45442-5_26
  • Lanius C, Weber R, MacKenzie WI. Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min. 2021; 11 (1):1–15. doi: 10.1007/s13278-021-00739-x. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, et al. The science of fake news. Science. 2018; 359 (6380):1094–1096. doi: 10.1126/science.aao2998. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Le T, Shu K, Molina MD, Lee D, Sundar SS, Liu H (2019) 5 sources of clickbaits you should know! Using synthetic clickbaits to improve prediction and distinguish between bot-generated and human-written headlines. In: 2019 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 33–40. 10.1145/3341161.3342875
  • Lewandowsky S (2020) Climate change, disinformation, and how to combat it. In: Annual Review of Public Health 42. 10.1146/annurev-publhealth-090419-102409 [ PubMed ]
  • Liu Y, Wu YF (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, pp 354–361
  • Luo M, Hancock JT, Markowitz DM. Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues. Commun Res. 2022; 49 (2):171–195. doi: 10.1177/0093650220921321. [ CrossRef ] [ Google Scholar ]
  • Lutzke L, Drummond C, Slovic P, Árvai J. Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook. Glob Environ Chang. 2019; 58 :101964. doi: 10.1016/j.gloenvcha.2019.101964. [ CrossRef ] [ Google Scholar ]
  • Maertens R, Anseel F, van der Linden S. Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J Environ Psychol. 2020; 70 :101455. doi: 10.1016/j.jenvp.2020.101455. [ CrossRef ] [ Google Scholar ]
  • Mahabub A. A robust technique of fake news detection using ensemble voting classifier and comparison with other classifiers. SN Applied Sciences. 2020; 2 (4):1–9. doi: 10.1007/s42452-020-2326-y. [ CrossRef ] [ Google Scholar ]
  • Mahbub S, Pardede E, Kayes A, Rahayu W. Controlling astroturfing on the internet: a survey on detection techniques and research challenges. Int J Web Grid Serv. 2019; 15 (2):139–158. doi: 10.1504/IJWGS.2019.099561. [ CrossRef ] [ Google Scholar ]
  • Marsden C, Meyer T, Brown I. Platform values and democratic elections: how can the law regulate digital disinformation? Comput Law Secur Rev. 2020; 36 :105373. doi: 10.1016/j.clsr.2019.105373. [ CrossRef ] [ Google Scholar ]
  • Masciari E, Moscato V, Picariello A, Sperlí G (2020) Detecting fake news by image analysis. In: Proceedings of the 24th symposium on international database engineering and applications, pp 1–5. 10.1145/3410566.3410599
  • Mazzeo V, Rapisarda A. Investigating fake and reliable news sources using complex networks analysis. Front Phys. 2022; 10 :886544. doi: 10.3389/fphy.2022.886544. [ CrossRef ] [ Google Scholar ]
  • McGrew S. Learning to evaluate: an intervention in civic online reasoning. Comput Educ. 2020; 145 :103711. doi: 10.1016/j.compedu.2019.103711. [ CrossRef ] [ Google Scholar ]
  • McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S. Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ. 2018; 46 (2):165–193. doi: 10.1080/00933104.2017.1416320. [ CrossRef ] [ Google Scholar ]
  • Meel P, Vishwakarma DK. Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl. 2020; 153 :112986. doi: 10.1016/j.eswa.2019.112986. [ CrossRef ] [ Google Scholar ]
  • Meese J, Frith J, Wilken R. Covid-19, 5G conspiracies and infrastructural futures. Media Int Aust. 2020; 177 (1):30–46. doi: 10.1177/1329878X20952165. [ CrossRef ] [ Google Scholar ]
  • Metzger MJ, Hartsell EH, Flanagin AJ. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res. 2020; 47 (1):3–28. doi: 10.1177/0093650215613136. [ CrossRef ] [ Google Scholar ]
  • Micallef N, He B, Kumar S, Ahamad M, Memon N (2020) The role of the crowd in countering misinformation: a case study of the Covid-19 infodemic. arXiv preprint arXiv:2011.05773
  • Mihailidis P, Viotty S. Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact society. Am Behav Sci. 2017; 61 (4):441–454. doi: 10.1177/0002764217701217. [ CrossRef ] [ Google Scholar ]
  • Mishra R (2020) Fake news detection using higher-order user to user mutual-attention progression in propagation paths. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 652–653
  • Mishra S, Shukla P, Agarwal R. Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wirel Commun Mobile Comput. 2022 doi: 10.1155/2022/1575365. [ CrossRef ] [ Google Scholar ]
  • Molina MD, Sundar SS, Le T, Lee D. “Fake news” is not simply false information: a concept explication and taxonomy of online content. Am Behav Sci. 2021; 65 (2):180–212. doi: 10.1177/0002764219878224. [ CrossRef ] [ Google Scholar ]
  • Moro C, Birt JR (2022) Review bombing is a dirty practice, but research shows games do benefit from online feedback. Conversation. https://research.bond.edu.au/en/publications/review-bombing-is-a-dirty-practice-but-research-shows-games-do-be
  • Mustafaraj E, Metaxas PT (2017) The fake news spreading plague: was it preventable? In: Proceedings of the 2017 ACM on web science conference, pp 235–239. 10.1145/3091478.3091523
  • Nagel TW. Measuring fake news acumen using a news media literacy instrument. J Media Liter Educ. 2022; 14 (1):29–42. doi: 10.23860/JMLE-2022-14-1-3. [ CrossRef ] [ Google Scholar ]
  • Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374
  • Nekmat E. Nudge effect of fact-check alerts: source influence and media skepticism on sharing of news misinformation in social media. Soc Media Soc. 2020 doi: 10.1177/2056305119897322. [ CrossRef ] [ Google Scholar ]
  • Nygren T, Brounéus F, Svensson G. Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ. 2019; 18 (2):87–109. doi: 10.4119/jsse-917. [ CrossRef ] [ Google Scholar ]
  • Nyhan B, Reifler J. Displacing misinformation about events: an experimental test of causal corrections. J Exp Polit Sci. 2015; 2 (1):81–93. doi: 10.1017/XPS.2014.22. [ CrossRef ] [ Google Scholar ]
  • Nyhan B, Porter E, Reifler J, Wood TJ. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Polit Behav. 2020; 42 (3):939–960. doi: 10.1007/s11109-019-09528-x. [ CrossRef ] [ Google Scholar ]
  • Nyow NX, Chua HN (2019) Detecting fake news with tweets’ properties. In: 2019 IEEE conference on application, information and network security (AINS), IEEE, pp 24–29. 10.1109/AINS47559.2019.8968706
  • Ochoa IS, de Mello G, Silva LA, Gomes AJ, Fernandes AM, Leithardt VRQ (2019) Fakechain: a blockchain architecture to ensure trust in social media networks. In: International conference on the quality of information and communications technology. Springer, Berlin, pp 105–118. 10.1007/978-3-030-29238-6_8
  • Ozbay FA, Alatas B. Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A. 2020; 540 :123174. doi: 10.1016/j.physa.2019.123174. [ CrossRef ] [ Google Scholar ]
  • Ozturk P, Li H, Sakamoto Y (2015) Combating rumor spread on social media: the effectiveness of refutation and warning. In: 2015 48th Hawaii international conference on system sciences, IEEE, pp 2406–2414. 10.1109/HICSS.2015.288
  • Parikh SB, Atrey PK (2018) Media-rich fake news detection: a survey. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 436–441.10.1109/MIPR.2018.00093
  • Parrish K (2018) Deep learning & machine learning: what’s the difference? Online: https://parsers.me/deep-learning-machine-learning-whats-the-difference/ . Accessed 20 May 2020
  • Paschen J. Investigating the emotional appeal of fake news using artificial intelligence and human contributions. J Prod Brand Manag. 2019; 29 (2):223–233. doi: 10.1108/JPBM-12-2018-2179. [ CrossRef ] [ Google Scholar ]
  • Pathak A, Srihari RK (2019) Breaking! Presenting fake news corpus for automated fact checking. In: Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pp 357–362
  • Peng J, Detchon S, Choo KKR, Ashman H. Astroturfing detection in social media: a binary n-gram-based approach. Concurr Comput: Pract Exp. 2017; 29 (17):e4013. doi: 10.1002/cpe.4013. [ CrossRef ] [ Google Scholar ]
  • Pennycook G, Rand DG. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc Natl Acad Sci. 2019; 116 (7):2521–2526. doi: 10.1073/pnas.1806781116. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers. 2020; 88 (2):185–200. doi: 10.1111/jopy.12476. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci. 2020; 66 (11):4944–4957. doi: 10.1287/mnsc.2019.3478. [ CrossRef ] [ Google Scholar ]
  • Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting Covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020; 31 (7):770–780. doi: 10.1177/0956797620939054. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638
  • Previti M, Rodriguez-Fernandez V, Camacho D, Carchiolo V, Malgeri M (2020) Fake news detection using time series and user features classification. In: International conference on the applications of evolutionary computation (Part of EvoStar), Springer, Berlin, pp 339–353. 10.1007/978-3-030-43722-0_22
  • Przybyla P (2020) Capturing the style of fake news. In: Proceedings of the AAAI conference on artificial intelligence, pp 490–497. 10.1609/aaai.v34i01.5386
  • Qayyum A, Qadir J, Janjua MU, Sher F. Using blockchain to rein in the new post-truth world and check the spread of fake news. IT Prof. 2019; 21 (4):16–24. doi: 10.1109/MITP.2019.2910503. [ CrossRef ] [ Google Scholar ]
  • Qian F, Gong C, Sharma K, Liu Y (2018) Neural user response generator: fake news detection with collective user intelligence. In: IJCAI, vol 18, pp 3834–3840. 10.24963/ijcai.2018/533
  • Raza S, Ding C. Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal. 2022; 13 (4):335–362. doi: 10.1007/s41060-021-00302-z. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ricard J, Medeiros J (2020) Using misinformation as a political weapon: Covid-19 and Bolsonaro in Brazil. Harv Kennedy School misinformation Rev 1(3). https://misinforeview.hks.harvard.edu/article/using-misinformation-as-a-political-weapon-covid-19-and-bolsonaro-in-brazil/
  • Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019; 5 (1):1–10. doi: 10.1057/s41599-019-0279-9. [ CrossRef ] [ Google Scholar ]
  • Roozenbeek J, van der Linden S, Nygren T. Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harv Kennedy School Misinformation Rev. 2020 doi: 10.37016//mr-2020-008. [ CrossRef ] [ Google Scholar ]
  • Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, Van Der Bles AM, Van Der Linden S. Susceptibility to misinformation about Covid-19 around the world. R Soc Open Sci. 2020; 7 (10):201199. doi: 10.1098/rsos.201199. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rubin VL, Conroy N, Chen Y, Cornwell S (2016) Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of the second workshop on computational approaches to deception detection, pp 7–17
  • Ruchansky N, Seo S, Liu Y (2017) Csi: a hybrid deep model for fake news detection. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 797–806. 10.1145/3132847.3132877
  • Schuyler AJ (2019) Regulating facts: a procedural framework for identifying, excluding, and deterring the intentional or knowing proliferation of fake news online. Univ Ill JL Technol Pol’y, vol 2019, pp 211–240
  • Shae Z, Tsai J (2019) AI blockchain platform for trusting news. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS), IEEE, pp 1610–1619. 10.1109/ICDCS.2019.00160
  • Shang W, Liu M, Lin W, Jia M (2018) Tracing the source of news based on blockchain. In: 2018 IEEE/ACIS 17th international conference on computer and information science (ICIS), IEEE, pp 377–381. 10.1109/ICIS.2018.8466516
  • Shao C, Ciampaglia GL, Flammini A, Menczer F (2016) Hoaxy: A platform for tracking online misinformation. In: Proceedings of the 25th international conference companion on world wide web, pp 745–750. 10.1145/2872518.2890098
  • Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F. The spread of low-credibility content by social bots. Nat Commun. 2018; 9 (1):1–9. doi: 10.1038/s41467-018-06930-7. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL. Anatomy of an online misinformation network. PLoS ONE. 2018; 13 (4):e0196087. doi: 10.1371/journal.pone.0196087. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y. Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST) 2019; 10 (3):1–42. doi: 10.1145/3305260. [ CrossRef ] [ Google Scholar ]
  • Sharma K, Seo S, Meng C, Rambhatla S, Liu Y (2020) Covid-19 on social media: analyzing misinformation in Twitter conversations. arXiv preprint arXiv:2003.12309
  • Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF. Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc. 2019; 21 (2):438–463. doi: 10.1177/1461444818799526. [ CrossRef ] [ Google Scholar ]
  • Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544
  • Shi P, Zhang Z, Choo KKR. Detecting malicious social bots based on clickstream sequences. IEEE Access. 2019; 7 :28855–28862. doi: 10.1109/ACCESS.2019.2901864. [ CrossRef ] [ Google Scholar ]
  • Shu K, Sliva A, Wang S, Tang J, Liu H. Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl. 2017; 19 (1):22–36. doi: 10.1145/3137597.3137600. [ CrossRef ] [ Google Scholar ]
  • Shu K, Mahudeswaran D, Wang S, Lee D, Liu H (2018a) Fakenewsnet: a data repository with news content, social context and spatialtemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286 , 10.1089/big.2020.0062 [ PubMed ]
  • Shu K, Wang S, Liu H (2018b) Understanding user profiles on social media for fake news detection. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 430–435. 10.1109/MIPR.2018.00092
  • Shu K, Wang S, Liu H (2019a) Beyond news contents: the role of social context for fake news detection. In: Proceedings of the twelfth ACM international conference on web search and data mining, pp 312–320. 10.1145/3289600.3290994
  • Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019b) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439. 10.1145/3341161.3342927
  • Shu K, Bhattacharjee A, Alatawi F, Nazer TH, Ding K, Karami M, Liu H. Combating disinformation in a social media age. Wiley Interdiscip Rev: Data Min Knowl Discov. 2020; 10 (6):e1385. doi: 10.1002/widm.1385. [ CrossRef ] [ Google Scholar ]
  • Shu K, Mahudeswaran D, Wang S, Liu H. Hierarchical propagation networks for fake news detection: investigation and exploitation. Proc Int AAAI Conf Web Soc Media AAAI Press. 2020; 14 :626–637. [ Google Scholar ]
  • Shu K, Wang S, Lee D, Liu H (2020c) Mining disinformation and fake news: concepts, methods, and recent advancements. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 1–19 10.1007/978-3-030-42699-6_1
  • Shu K, Zheng G, Li Y, Mukherjee S, Awadallah AH, Ruston S, Liu H (2020d) Early detection of fake news with multi-source weak social supervision. In: ECML/PKDD (3), pp 650–666
  • Singh VK, Ghosh I, Sonagara D. Detecting fake news stories via multimodal analysis. J Am Soc Inf Sci. 2021; 72 (1):3–17. doi: 10.1002/asi.24359. [ CrossRef ] [ Google Scholar ]
  • Sintos S, Agarwal PK, Yang J (2019) Selecting data to clean for fact checking: minimizing uncertainty vs. maximizing surprise. Proc VLDB Endowm 12(13), 2408–2421. 10.14778/3358701.3358708 [ CrossRef ]
  • Snow J (2017) Can AI win the war against fake news? MIT Technology Review Online: https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/ . Accessed 3 Oct. 2020
  • Song G, Kim S, Hwang H, Lee K (2019) Blockchain-based notarization for social media. In: 2019 IEEE international conference on consumer clectronics (ICCE), IEEE, pp 1–2 10.1109/ICCE.2019.8661978
  • Starbird K, Arif A, Wilson T (2019) Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations. In: Proceedings of the ACM on human–computer interaction, vol 3(CSCW), pp 1–26 10.1145/3359229
  • Sterret D, Malato D, Benz J, Kantor L, Tompson T, Rosenstiel T, Sonderman J, Loker K, Swanson E (2018) Who shared it? How Americans decide what news to trust on social media. Technical report, Norc Working Paper Series, WP-2018-001, pp 1–24
  • Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020; 34 :118–122. doi: 10.1016/j.cobeha.2020.02.015. [ CrossRef ] [ Google Scholar ]
  • Tandoc EC, Jr, Thomas RJ, Bishop L. What is (fake) news? Analyzing news values (and more) in fake stories. Media Commun. 2021; 9 (1):110–119. doi: 10.17645/mac.v9i1.3331. [ CrossRef ] [ Google Scholar ]
  • Tchakounté F, Faissal A, Atemkeng M, Ntyam A. A reliable weighting scheme for the aggregation of crowd intelligence to detect fake news. Information. 2020; 11 (6):319. doi: 10.3390/info11060319. [ CrossRef ] [ Google Scholar ]
  • Tchechmedjiev A, Fafalios P, Boland K, Gasquet M, Zloch M, Zapilko B, Dietze S, Todorov K (2019) Claimskg: a knowledge graph of fact-checked claims. In: International semantic web conference. Springer, Berlin, pp 309–324 10.1007/978-3-030-30796-7_20
  • Treen KMd, Williams HT, O’Neill SJ. Online misinformation about climate change. Wiley Interdiscip Rev Clim Change. 2020; 11 (5):e665. doi: 10.1002/wcc.665. [ CrossRef ] [ Google Scholar ]
  • Tsang SJ. Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. 2020 doi: 10.1177/1077699020952129. [ CrossRef ] [ Google Scholar ]
  • Tschiatschek S, Singla A, Gomez Rodriguez M, Merchant A, Krause A (2018) Fake news detection in social networks via crowd signals. In: Companion proceedings of the the web conference 2018, pp 517–524. 10.1145/3184558.3188722
  • Uppada SK, Manasa K, Vidhathri B, Harini R, Sivaselvan B. Novel approaches to fake news and fake account detection in OSNS: user social engagement and visual content centric model. Soc Netw Anal Min. 2022; 12 (1):1–19. doi: 10.1007/s13278-022-00878-9. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Van der Linden S, Roozenbeek J (2020) Psychological inoculation against fake news. In: Accepting, sharing, and correcting misinformation, the psychology of fake news. 10.4324/9780429295379-11
  • Van der Linden S, Panagopoulos C, Roozenbeek J. You are fake news: political bias in perceptions of fake news. Media Cult Soc. 2020; 42 (3):460–470. doi: 10.1177/0163443720906992. [ CrossRef ] [ Google Scholar ]
  • Valenzuela S, Muñiz C, Santos M. Social media and belief in misinformation in mexico: a case of maximal panic, minimal effects? Int J Press Polit. 2022 doi: 10.1177/19401612221088988. [ CrossRef ] [ Google Scholar ]
  • Vasu N, Ang B, Teo TA, Jayakumar S, Raizal M, Ahuja J (2018) Fake news: national security in the post-truth era. RSIS
  • Vereshchaka A, Cosimini S, Dong W (2020) Analyzing and distinguishing fake and real news to mitigate the problem of disinformation. In: Computational and mathematical organization theory, pp 1–15. 10.1007/s10588-020-09307-8
  • Verstraete M, Bambauer DE, Bambauer JR (2017) Identifying and countering fake news. Arizona legal studies discussion paper 73(17-15). 10.2139/ssrn.3007971
  • Vilmer J, Escorcia A, Guillaume M, Herrera J (2018) Information manipulation: a challenge for our democracies. In: Report by the Policy Planning Staff (CAPS) of the ministry for europe and foreign affairs, and the institute for strategic research (RSEM) of the Ministry for the Armed Forces
  • Vishwakarma DK, Varshney D, Yadav A. Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res. 2019; 58 :217–229. doi: 10.1016/j.cogsys.2019.07.004. [ CrossRef ] [ Google Scholar ]
  • Vlachos A, Riedel S (2014) Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, pp 18–22. 10.3115/v1/W14-2508
  • von der Weth C, Abdul A, Fan S, Kankanhalli M (2020) Helping users tackle algorithmic threats on social media: a multimedia research agenda. In: Proceedings of the 28th ACM international conference on multimedia, pp 4425–4434. 10.1145/3394171.3414692
  • Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018; 359 (6380):1146–1151. doi: 10.1126/science.aap9559. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Sci Commun. 2017; 39 (5):621–645. doi: 10.1177/1075547017731776. [ CrossRef ] [ Google Scholar ]
  • Waldman AE. The marketplace of fake news. Univ Pa J Const Law. 2017; 20 :845. [ Google Scholar ]
  • Wang WY (2017) “Liar, liar pants on fire”: a new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648
  • Wang L, Wang Y, de Melo G, Weikum G. Understanding archetypes of fake news via fine-grained classification. Soc Netw Anal Min. 2019; 9 (1):1–17. doi: 10.1007/s13278-019-0580-z. [ CrossRef ] [ Google Scholar ]
  • Wang Y, Han H, Ding Y, Wang X, Liao Q (2019b) Learning contextual features with multi-head self-attention for fake news detection. In: International conference on cognitive computing. Springer, Berlin, pp 132–142. 10.1007/978-3-030-23407-2_11
  • Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019; 240 :112552. doi: 10.1016/j.socscimed.2019.112552. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang Y, Yang W, Ma F, Xu J, Zhong B, Deng Q, Gao J (2020) Weak supervision for fake news detection via reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 516–523. 10.1609/aaai.v34i01.5389
  • Wardle C (2017) Fake news. It’s complicated. Online: https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 3 Oct 2020
  • Wardle C. The need for smarter definitions and practical, timely empirical research on information disorder. Digit J. 2018; 6 (8):951–963. doi: 10.1080/21670811.2018.1502047. [ CrossRef ] [ Google Scholar ]
  • Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council Eur Rep. 2017; 27 :1–107. [ Google Scholar ]
  • Weiss AP, Alwan A, Garcia EP, Garcia J. Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr. 2020; 16 (1):1–30. doi: 10.1007/s40979-019-0049-x. [ CrossRef ] [ Google Scholar ]
  • Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645. 10.1145/3159652.3159677
  • Wu L, Rao Y (2020) Adaptive interaction fusion networks for fake news detection. arXiv preprint arXiv:2004.10009
  • Wu L, Morstatter F, Carley KM, Liu H. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor Newsl. 2019; 21 (2):80–90. doi: 10.1145/3373464.3373475. [ CrossRef ] [ Google Scholar ]
  • Wu Y, Ngai EW, Wu P, Wu C. Fake news on the internet: a literature review, synthesis and directions for future research. Intern Res. 2022 doi: 10.1108/INTR-05-2021-0294. [ CrossRef ] [ Google Scholar ]
  • Xu K, Wang F, Wang H, Yang B. Detecting fake news over online social media via domain reputations and content understanding. Tsinghua Sci Technol. 2019; 25 (1):20–27. doi: 10.26599/TST.2018.9010139. [ CrossRef ] [ Google Scholar ]
  • Yang F, Pentyala SK, Mohseni S, Du M, Yuan H, Linder R, Ragan ED, Ji S, Hu X (2019a) Xfake: explainable fake news detector with visualizations. In: The world wide web conference, pp 3600–3604. 10.1145/3308558.3314119
  • Yang X, Li Y, Lyu S (2019b) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 8261–8265. 10.1109/ICASSP.2019.8683164
  • Yaqub W, Kakhidze O, Brockman ML, Memon N, Patil S (2020) Effects of credibility indicators on social media news sharing intent. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. 10.1145/3313831.3376213
  • Yavary A, Sajedi H, Abadeh MS. Information verification in social networks based on user feedback and news agencies. Soc Netw Anal Min. 2020; 10 (1):1–8. doi: 10.1007/s13278-019-0616-4. [ CrossRef ] [ Google Scholar ]
  • Yazdi KM, Yazdi AM, Khodayi S, Hou J, Zhou W, Saedy S. Improving fake news detection using k-means and support vector machine approaches. Int J Electron Commun Eng. 2020; 14 (2):38–42. doi: 10.5281/zenodo.3669287. [ CrossRef ] [ Google Scholar ]
  • Zannettou S, Sirivianos M, Blackburn J, Kourtellis N. The web of false information: rumors, fake news, hoaxes, clickbait, and various other shenanigans. J Data Inf Qual (JDIQ) 2019; 11 (3):1–37. doi: 10.1145/3309699. [ CrossRef ] [ Google Scholar ]
  • Zellers R, Holtzman A, Rashkin H, Bisk Y, Farhadi A, Roesner F, Choi Y (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616
  • Zhang X, Ghorbani AA. An overview of online fake news: characterization, detection, and discussion. Inf Process Manag. 2020; 57 (2):102025. doi: 10.1016/j.ipm.2019.03.004. [ CrossRef ] [ Google Scholar ]
  • Zhang J, Dong B, Philip SY (2020) Fakedetector: effective fake news detection with deep diffusive neural network. In: 2020 IEEE 36th international conference on data engineering (ICDE), IEEE, pp 1826–1829. 10.1109/ICDE48307.2020.00180
  • Zhang Q, Lipani A, Liang S, Yilmaz E (2019a) Reply-aided detection of misinformation via Bayesian deep learning. In: The world wide web conference, pp 2333–2343. 10.1145/3308558.3313718
  • Zhang X, Karaman S, Chang SF (2019b) Detecting and simulating artifacts in GAN fake images. In: 2019 IEEE international workshop on information forensics and security (WIFS), IEEE, pp 1–6 10.1109/WIFS47025.2019.9035107
  • Zhou X, Zafarani R. A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR) 2020; 53 (5):1–40. doi: 10.1145/3395046. [ CrossRef ] [ Google Scholar ]
  • Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R. Detection and resolution of rumours in social media: a survey. ACM Comput Surv (CSUR) 2018; 51 (2):1–36. doi: 10.1145/3161603. [ CrossRef ] [ Google Scholar ]

How misinformation spreads on social media—And what to do about it

Chris Meserole, Fellow, Foreign Policy, Strobe Talbott Center for Security, Strategy, and Technology (@chrismeserole)

May 9, 2018

As widespread as misinformation online is, opportunities to glimpse it in action are fairly rare. Yet shortly after the recent attack in Toronto, a journalist unwittingly carried out a kind of natural experiment on Twitter. This piece originally appeared on Lawfare.

“We take misinformation seriously,” Facebook CEO Mark Zuckerberg  wrote  just weeks after the 2016 election. In the year since, the question of how to counteract the damage done by “fake news” has become a pressing issue both for technology companies and governments across the globe.

Yet as widespread as the problem is, opportunities to glimpse misinformation in action are fairly rare. Most users who generate misinformation do not also share accurate information, so it can be difficult to tease out the effect of misinformation itself. For example, when President Trump shares misinformation on Twitter, his tweets tend to go viral. But they may not be going viral because of the misinformation: All those retweets may instead owe to the popularity of Trump’s account, or the fact that he writes about politically charged subjects. Without a corresponding set of accurate tweets from Trump, there’s no way of knowing what role misinformation is playing.

For researchers, isolating the effect of misinformation is thus extremely challenging. It’s not often that a user will share both accurate and inaccurate information about the same event, and at nearly the same time.

Yet shortly after the recent attack in Toronto, that is exactly what a CBC journalist did. In the chaotic aftermath of the attack, Natasha Fatah published two competing eyewitness accounts: one (wrongly, as it turned out) identifying the attacker as “angry” and “Middle Eastern,” and another correctly identifying him as “white.”

Fatah’s tweets are by no means definitive, but they do represent a natural experiment of sorts. And the results show just how fast misinformation can travel. As the graphic below illustrates, the initial tweet—which wrongly identified the attacker as Middle Eastern—received far more engagement than the accurate one in the roughly five hours after the attack:

[Figure: engagement with the inaccurate versus the accurate tweet over the roughly five hours after the attack]

Worse, the tweet containing correct information did not perform much better over a longer time horizon, up to 24 hours after the attack:

[Figure: engagement with the two tweets over the 24 hours after the attack]

(Data and code for the graphics above are available here.)

Taken together, Fatah’s tweets suggest that misinformation on social media genuinely is a problem. As such, they raise two questions: First, why did the incorrect tweet spread so much faster than the correct one? And second, what can be done to prevent the similar spread of misinformation in the future?

The Speed of Misinformation on Twitter

For most of Twitter’s history, its newsfeed was straightforward: The app showed tweets in reverse chronological order. That changed in 2015 with the introduction of Twitter’s algorithmic newsfeed, which displayed tweets based on a calculation of “relevance” rather than recency.

Last year, the company’s engineering team revealed how its current algorithm works. As with Facebook and YouTube, Twitter now relies on a deep learning algorithm that has learned to prioritize content with greater prior engagement. By combing through Twitter’s data, the algorithm has taught itself that Twitter users are more likely to stick around if they see content that has already gotten a lot of retweets and mentions, compared with content that has fewer.

The flow of misinformation on Twitter is thus a function of both human and technical factors. Human biases play an important role: Since we’re more likely to react to content that taps into our existing grievances and beliefs, inflammatory tweets will generate quick engagement. It’s only after that engagement happens that the technical side kicks in: If a tweet is retweeted, favorited, or replied to by enough of its first viewers, the newsfeed algorithm will show it to more users, at which point it will tap into the biases of those users too—prompting even more engagement, and so on. At its worst, this cycle can turn social media into a kind of confirmation bias machine, one perfectly tailored for the spread of misinformation.
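
To make that cycle concrete, here is a minimal, purely illustrative simulation of engagement-driven amplification. The per-view engagement rates, the audience size, and the proportional-allocation rule are invented for the example; they are not Twitter’s actual parameters or ranking model.

```python
# Toy simulation of an engagement-driven feed (illustrative only; the real
# ranking system is a proprietary deep learning model with many more signals).
import random

random.seed(42)

tweets = {
    "inflammatory": {"engagements": 1, "appeal": 0.12},  # assumed per-view engagement rate
    "accurate":     {"engagements": 1, "appeal": 0.04},
}

for hour in range(10):
    total = sum(t["engagements"] for t in tweets.values())
    for name, t in tweets.items():
        # The feed shows each tweet to an audience proportional to its prior engagement.
        impressions = int(1000 * t["engagements"] / total)
        # Some fraction of viewers engage, feeding the next round of ranking.
        t["engagements"] += sum(random.random() < t["appeal"] for _ in range(impressions))

for name, t in tweets.items():
    print(name, t["engagements"])
```

Run repeatedly, the tweet with the higher per-view appeal ends up with many times the engagement of the other, mirroring the exponential-versus-modest pattern in the graphs above.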

If you look at Fatah’s tweets, the process above plays out almost to a tee. A small subset of Fatah’s followers immediately engaged with the tweet reporting a bystander’s account of the attacker as “angry” and “Middle Eastern,” which set off a cycle in which greater engagement begat greater viewership and vice versa. By contrast, the tweet that accurately identified the attacker received little initial engagement, was flagged less by the newsfeed algorithm, and thus never really caught on. The result is the graph above, which shows an exponential increase in engagement for the inaccurate tweet, but only a modest increase for the accurate one.

What To Do About It

Just as the problem has both a human and technical side, so too does any potential solution.

Where Twitter’s algorithms are concerned, there is no shortage of low-hanging fruit. During an attack itself, Twitter could promote police or government accounts so that accurate information is disseminated as quickly as possible. Alternatively, it could display a warning at the top of its search and trending feeds about the unreliability of initial eyewitness accounts.

Going further, Twitter could update its “While You Were Away” and search features. In the case of the Toronto attack, Twitter could not have been expected to identify the truth faster than the Toronto police. But once the police had identified the attacker, Twitter should have had systems in place to restrict the visibility of Fatah’s tweet and other trending misinformation. For example, more than ten days after the attack, the top two results for a search of the attacker were these:

Tweet reading: "#AlekMinassian So a Muslim terrorist killed 9 people using a van. What else is new. Still wondering why the news was quick to mention it was a Ryder rental van but not the religion or this evil POS"

(I conducted the above search while logged into my own Twitter account, but a search while logged out produced the same results.)

Unfortunately, these were not isolated tweets. Anyone using Twitter to follow and learn about the attack has been greeted with a wealth of misinformation and invective. This is something Twitter can combat: Either it can hire an editorial team to track and remove blatant misinformation from trending searches, or it can introduce a new reporting feature for users to flag misinformation as they come across it. Neither option is perfect, and the latter would not be trivial to implement. But the status quo is worse. How many Twitter users continue to think the Toronto attack was the work of Middle Eastern jihadists, and that Prime Minister Justin Trudeau’s immigration policies are to blame?
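
As a rough illustration of how such a reporting feature might behave, the sketch below hides an item from trending and search results once enough distinct users have flagged it. The thresholds, the ratio, and the class names are invented for the example; this is not a description of any existing Twitter system.

```python
# Hypothetical sketch: suppress items from trending/search once enough distinct
# users have flagged them as misinformation. Thresholds and weighting are invented.
from dataclasses import dataclass, field

@dataclass
class TrendingItem:
    tweet_id: str
    engagement: int
    reporters: set = field(default_factory=set)

    def report(self, user_id: str) -> None:
        self.reporters.add(user_id)

    def visible(self, min_reports: int = 50, report_ratio: float = 0.01) -> bool:
        # Hide once reports are both numerous and non-trivial relative to reach.
        reports = len(self.reporters)
        return reports < min_reports or reports < report_ratio * self.engagement

def trending(items: list[TrendingItem]) -> list[TrendingItem]:
    # Rank remaining items by engagement, excluding anything hidden by reports.
    return sorted((i for i in items if i.visible()), key=lambda i: i.engagement, reverse=True)
```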

Ultimately, however, the solution to misinformation will also need to involve the users themselves. Not only do Twitter’s users need to better understand their own biases, but journalists in particular need to better understand how their mistakes can be exploited. In this case, the biggest errors were human ones: Fatah tweeted out an account without corroborating it, even though the eyewitness in question, a man named David Leonard,  himself noted  that “I can’t confirm or deny whether my observation is correct.”

To counter misinformation online, we can and should demand that newsfeed algorithms not amplify our worst instincts. But we can’t expect them to save us from ourselves.


Biases Make People Vulnerable to Misinformation Spread by Social Media

Researchers have developed tools to study the cognitive, societal and algorithmic biases that help fake news spread

By Giovanni Luca Ciampaglia, Filippo Menczer & The Conversation US


The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

Social media are among the primary sources of news in the U.S. and across the world. Yet users are exposed to content of questionable accuracy, including conspiracy theories, clickbait, hyperpartisan content, pseudoscience and even fabricated “fake news” reports.

It’s not surprising that there’s so much disinformation published: Spam and online fraud are lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.


[Embedded media: Explaining the tools developed at the Observatory on Social Media.]

Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our  Observatory on Social Media  at Indiana University is building  tools  to help people become aware of these biases and protect themselves from outside influences designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause information overload. That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that some ideas go viral despite their low quality—even when people prefer to share high-quality content.*

To avoid getting overwhelmed, the brain uses a number of tricks. These methods are usually effective, but may also become biases when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story that appears on their social media feed. People are very affected by the emotional connotations of a headline, even though that’s not a good indicator of an article’s accuracy. Much more important is who wrote the piece.

To counter this bias, and help people pay more attention to the source of a claim before sharing it, we developed Fakey, a mobile news literacy game (free on Android and iOS) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.
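
The scoring logic behind such a game can be sketched in a few lines. The point values and the source lists below are placeholders chosen for illustration, not Fakey’s actual rules.

```python
# Minimal scoring sketch for a Fakey-style news-literacy game.
# Point values and source lists are assumptions, not the game's real configuration.
RELIABLE = {"apnews.com", "reuters.com"}
LOW_CREDIBILITY = {"example-hyperpartisan.com"}

def score_action(source: str, action: str) -> int:
    if action == "share":
        return 10 if source in RELIABLE else -10   # reward sharing reliable sources
    if action == "fact_check":
        return 10 if source in LOW_CREDIBILITY else -5  # reward flagging dubious ones
    return 0  # skipping an article neither helps nor hurts

print(score_action("reuters.com", "share"))                 # rewarded
print(score_action("example-hyperpartisan.com", "share"))   # penalized
```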

Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to determine the political leanings of a Twitter user by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found that social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society.
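
The intuition behind that finding can be shown with a toy estimator that simply averages the known leanings of a user’s friends. The scoring scale and the example values are invented; the actual study relies on far more sophisticated network analysis.

```python
# Toy illustration: infer a user's political leaning from friends' leanings,
# where scores run from -1 (left) to +1 (right). Not the study's real method.
def infer_leaning(friend_scores: list[float]) -> float:
    return sum(friend_scores) / len(friend_scores) if friend_scores else 0.0

print(infer_leaning([0.8, 0.6, 0.9, 0.7]))    # tightly clustered friends -> confident estimate
print(infer_leaning([-0.2, 0.3, 0.1, -0.1]))  # mixed friends -> near-neutral estimate
```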

The tendency to evaluate information more favorably if it comes from within one’s own social circle creates “echo chambers” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into “us versus them” confrontations.

To study how the structure of online social networks makes users vulnerable to disinformation, we built Hoaxy, a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.
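
One simple way to quantify that kind of isolation is to count how many retweet edges cross between the misinformation-sharing group and the fact-checking group. The sketch below does this with networkx on a tiny made-up graph, which stands in for the much larger networks Hoaxy actually collects.

```python
# Sketch: measure how isolated misinformation-sharing accounts are from fact-checking
# accounts in a retweet network. The graph and labels here are invented for illustration.
import networkx as nx

G = nx.DiGraph()
# (retweeter, retweeted) edges; "m*" = shares low-credibility content, "f*" = shares fact-checks
G.add_edges_from([("m1", "m2"), ("m2", "m1"), ("m3", "m1"), ("f1", "f2"), ("m1", "f1")])
group = {"m1": "misinfo", "m2": "misinfo", "m3": "misinfo", "f1": "factcheck", "f2": "factcheck"}

cross = sum(1 for u, v in G.edges if group[u] != group[v])
print(f"{cross} of {G.number_of_edges()} retweet edges cross between the two groups")
```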

Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.
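
A toy version of that feedback loop looks like this: each click raises a per-user preference weight, and that weight in turn decides what the user is shown next. The update rule and the source names are invented for illustration.

```python
# Toy "filter bubble" dynamic: every click on a source raises how often that source
# is shown to the same user, narrowing what they see over time.
from collections import Counter

clicks = Counter()

def record_click(source: str) -> None:
    clicks[source] += 1

def rank_candidates(candidates: list[str]) -> list[str]:
    # Sources the user clicked before are ranked higher, reinforcing prior exposure.
    return sorted(candidates, key=lambda s: clicks[s], reverse=True)

for _ in range(5):
    record_click("partisan-outlet.example")
print(rank_candidates(["partisan-outlet.example", "public-broadcaster.example"]))
```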

Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the  homogeneity bias .

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias, because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.

All these algorithmic biases can be manipulated by social bots, computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s Big Ben, are harmless. However, some conceal their real nature and are used for malicious purposes, such as boosting disinformation or falsely creating the appearance of a grassroots movement, also called “astroturfing.” We found evidence of this type of manipulation in the run-up to the 2010 U.S. midterm election.

To study these manipulation strategies, we developed a tool to detect social bots called Botometer. Botometer uses machine learning to detect bot accounts by inspecting thousands of different features of Twitter accounts, such as when an account posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15 percent of Twitter accounts show signs of being bots.
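
The general shape of such a detector can be sketched with a standard supervised classifier. The three features and the handful of synthetic accounts below are stand-ins for the thousands of features and the large labeled datasets Botometer actually uses; this is an illustration of the approach, not Botometer itself.

```python
# Rough sketch of supervised bot detection: train a classifier on per-account features.
# Features, labels, and data are synthetic and chosen only to show the general idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: tweets per day, share of tweets that are retweets, followers/friends ratio
X = np.array([
    [250.0, 0.95, 0.02],   # bot-like: extreme volume, almost all retweets
    [180.0, 0.90, 0.05],
    [12.0,  0.30, 1.10],   # human-like
    [5.0,   0.10, 0.80],
])
y = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba([[300.0, 0.97, 0.01]])[0][1])  # estimated bot probability
```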

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting both the cognitive, confirmation and popularity biases of their victims and Twitter’s algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are  many questions  left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will likely not be only technological, though there will probably be some technical aspects to them. But they must take into account the cognitive and social aspects of the problem.

*Editor’s note: This article was updated on Jan. 10, 2019, to remove a link to a study that has been retracted. The text of the article is still accurate, and remains unchanged.

This article was originally published on The Conversation. Read the original article.


USC study reveals the key reason why fake news spreads on social media

The USC-led study of more than 2,400 Facebook users suggests that platforms — more than individual users — have a larger role to play in stopping the spread of misinformation online.

USC researchers may have found the biggest influencer in the spread of fake news: social platforms’ structure of rewarding users for habitually sharing information.

The team’s findings, published Monday in Proceedings of the National Academy of Sciences, upend popular misconceptions that misinformation spreads because users lack the critical thinking skills necessary for discerning truth from falsehood or because their strong political beliefs skew their judgment.

Just 15% of the most habitual news sharers in the research were responsible for spreading about 30% to 40% of the fake news.

The research team from the USC Marshall School of Business and the USC Dornsife College of Letters, Arts and Sciences wondered: What motivates these users? As it turns out, much like any video game, social media has a rewards system that encourages users to stay on their accounts and keep posting and sharing. Users who post and share frequently, especially sensational, eye-catching information, are likely to attract attention.

“Due to the reward-based learning systems on social media, users form habits of sharing information that gets recognition from others,” the researchers wrote. “Once habits form, information sharing is automatically activated by cues on the platform without users considering critical response outcomes, such as spreading misinformation.”

Posting, sharing and engaging with others on social media can, therefore, become a habit.

“Our findings show that misinformation isn’t spread through a deficit of users. It’s really a function of the structure of the social media sites themselves,” said Wendy Wood , an expert on habits and USC emerita Provost Professor of psychology and business.

“The habits of social media users are a bigger driver of misinformation spread than individual attributes. We know from prior research that some people don’t process information critically, and others form opinions based on political biases, which also affects their ability to recognize false stories online,” said Gizem Ceylan, who led the study during her doctorate at USC Marshall and is now a postdoctoral researcher at the Yale School of Management . “However, we show that the reward structure of social media platforms plays a bigger role when it comes to misinformation spread.”

In a novel approach, Ceylan and her co-authors sought to understand how the reward structure of social media sites drives users to develop habits of posting misinformation on social media.

Why fake news spreads: behind the social network

Overall, the study involved 2,476 active Facebook users ranging in age from 18 to 89 who volunteered in response to online advertising to participate. They were compensated to complete a “decision-making” survey approximately seven minutes long.

Surprisingly, the researchers found that users’ social media habits doubled and, in some cases, tripled the amount of fake news they shared. Their habits were more influential in sharing fake news than other factors, including political beliefs and lack of critical reasoning.

Frequent, habitual users forwarded six times more fake news than occasional or new users.

“This type of behavior has been rewarded in the past by algorithms that prioritize engagement when selecting which posts users see in their news feed, and by the structure and design of the sites themselves,” said second author Ian A. Anderson , a behavioral scientist and doctoral candidate at USC Dornsife. “Understanding the dynamics behind misinformation spread is important given its political, health and social consequences.”

Experimenting with different scenarios to see why fake news spreads

In the first experiment, the researchers found that habitual users of social media share both true and fake news.

In another experiment, the researchers found that habitual sharing of misinformation is part of a broader pattern of insensitivity to the information being shared. In fact, habitual users shared politically discordant news — news that challenged their political beliefs — as much as concordant news that they endorsed.

Lastly, the team tested whether social media reward structures could be devised to promote sharing of true over false information. They showed that incentives for accuracy rather than popularity (as is currently the case on social media sites) doubled the amount of accurate news that users share on social platforms.

The study’s conclusions:

  • Habitual sharing of misinformation is not inevitable.
  • Users could be incentivized to build sharing habits that make them more sensitive to sharing truthful content.
  • Effectively reducing misinformation would require restructuring the online environments that promote and support its sharing.

These findings suggest that social media platforms can take a more active step than moderating what information is posted and instead pursue structural changes in their reward structure to limit the spread of misinformation.

About the study:  The research was supported and funded by the USC Dornsife College of Letters, Arts and Sciences Department of Psychology, the USC Marshall School of Business and the Yale University School of Management.

Review Article | Published: 10 March 2022

Misinformation: susceptibility, spread, and interventions to immunize the public

Sander van der Linden (ORCID: orcid.org/0000-0002-0269-1744)

Nature Medicine, volume 28, pages 460–467 (2022)

The spread of misinformation poses a considerable threat to public health and the successful management of a global pandemic. For example, studies find that exposure to misinformation can undermine vaccination uptake and compliance with public-health guidelines. As research on the science of misinformation is rapidly emerging, this conceptual Review summarizes what we know along three key dimensions of the infodemic: susceptibility, spread, and immunization. Extant research is evaluated on the questions of why (some) people are (more) susceptible to misinformation, how misinformation spreads in online social networks, and which interventions can help to boost psychological immunity to misinformation. Implications for managing the infodemic are discussed.

In early 2020, the World Health Organization (WHO) declared a worldwide ‘infodemic’. An infodemic is characterized by an overabundance of information, particularly false and misleading information 1 . Although researchers have debated the effect of fake news on the outcomes of major societal events, such as political elections 2 , 3 , the spread of misinformation has much clearer potential to cause direct and notable harm to public health, especially during a pandemic. For example, research across different countries has shown that the endorsement of COVID-19 misinformation is robustly associated with people being less likely to follow public-health guidance 4 , 5 , 6 , 7 and having reduced intentions to get vaccinated 4 , 5 and to recommend the vaccine to others 4 . Experimental evidence has found that exposure to misinformation about vaccination resulted in about a 6-percentage-point decrease in the intention to get vaccinated among those who said that they would otherwise “definitely accept a vaccine”, undermining the potential for herd immunity 8 . Analyses of social-network data estimate that, without intervention, anti-vaccination content on social platforms such as Facebook will dominate discourse in the next decade 9 . Other research finds that exposure to misinformation about COVID-19 has been linked to the ingestion of harmful substances 10 and an increased propensity to engage in violent behaviors 11 . Of course, misinformation was a threat to public health long before the pandemic. The debunked link between the MMR vaccine and autism was associated with a significant drop in vaccination coverage in the United Kingdom 12 , Listerine manufacturers falsely claimed that their mouthwash cured the common cold for many decades 13 , misinformation about tobacco products has influenced attitudes toward smoking 14 and, in 2014, Ebola clinics were attacked in Liberia because of the false belief that the virus was part of a government conspiracy 15 .

Given the unprecedented scale and pace at which misinformation can now travel online, research has increasingly relied on models from epidemiology to understand the spread of fake news 16 , 17 , 18 . In these models, the key focus is on the reproduction number ( R 0 )—in other words, the number of individuals who will start posting fake news (that is, secondary cases) following contact with someone who is already posting misinformation (the infectious individual). It is therefore helpful to think of misinformation as a viral pathogen that can infect its host, spreading rapidly from one individual to another within a given network, without the need for physical contact. One benefit of this epidemiological approach lies in the fact that early detection systems could be designed to identify, for example, superspreaders, which would allow for the timely deployment of interventions to curb the spread of viral misinformation 18 .
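As a worked illustration, assuming the standard susceptible–infected–recovered (SIR) formulation (the cited studies use related but more elaborate models), the dynamics and the reproduction number can be written as

$$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I, \qquad R_0 = \frac{\beta}{\gamma},$$

where S, I and R are the fractions of users who are susceptible (not yet posting the false claim), infectious (currently posting it) and recovered (no longer posting it), β is the per-contact transmission (sharing) rate and γ is the recovery (stop-posting) rate. When R 0 > 1, each infectious poster recruits more than one new poster on average and the misinformation can grow exponentially; when R 0 < 1, it eventually dies out.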

This Review will provide readers with a conceptual overview of recent literature on misinformation, along with a research agenda (Box 1 ) that covers three major theoretical dimensions aligned with the viral analogy: susceptibility, spread, and immunization. What makes individuals susceptible to misinformation in the first place? Why and how does it spread? And what can we do to boost public immunity?

Before reviewing the extant literature to help answer these questions, it is worth briefly discussing what the term ‘misinformation’ means, because inconsistent definitions affect not only the conceptualization of research designs but also the nature and validity of key outcome measures 19 . Indeed, misinformation has been referred to as an ‘umbrella category of symptoms’ 20 not only because definitions vary, but also because the behavioral consequences for public health might differ depending on the type of misinformation. The term ‘fake news’ is often especially regarded as problematic because it insufficiently describes the full spectrum of misinformation 21 and has become a politicized rhetorical device in itself 22 . Box 2 provides a more detailed discussion of the problems associated with different scholarly definitions of misinformation 23 but for the purpose of this Review, I will simply define misinformation in its broadest possible sense: ‘false or misleading information masquerading as legitimate news,’ regardless of intent 24 . Although disinformation is often differentiated from misinformation insofar as it involves a clear intention to deceive or harm other people, intent can be difficult to establish, so in this Review my treatment of misinformation will cover both intentional and unintentional forms of misinformation.

Box 1 Agenda and recommendations for future research

Research question 1: What factors make people susceptible to misinformation?

Better integrate accuracy-driven with social, political, and cultural motivations to explain people’s susceptibility to misinformation.

Define, develop, and validate standardized instruments for assessing general and domain-specific susceptibility to misinformation.

Research question 2: How does misinformation spread in social networks?

Outline with greater clarity the conditions under which ‘exposure’ is more or less likely to lead to ‘infection,’ including the impact of repeated exposure, the micro-targeting of fake news on social media, contact with superspreaders, the role of echo chambers, and the structure of the social network itself.

Provide more accurate population-level estimates of exposure to misinformation by (1) capturing more diverse types of misinformation and (2) linking exposure to fake news across different kinds of traditional and social-media platforms.

Research question 3: Can we inoculate or immunize people against misinformation?

Focus on evaluating the relative efficacy of different debunking methods in the field, as well as how debunking (therapeutic) and prebunking (prophylactic) interventions could be combined to maximize their protective properties.

Model and evaluate how psychological inoculation methods can spread online and influence real-world sharing behavior on social media.

Box 2 The challenges with defining and operationalizing misinformation

One of the most frequently cited definitions of fake news is “fabricated information that mimics news media content in form but not in organizational process or intent” 119 . This definition implies that what matters in determining whether a story is misinformation or not is the journalistic or editorial process. Other definitions echo similar sentiments insofar as they take the view that misinformation producers do not adhere to editorial norms 120 and that the defining attribution of ‘fake-ness’ happens at the level of the publisher and not at the level of the story 3 . However, others have taken a completely different view by defining misinformation either in terms of the veracity of its content or the presence or absence of common techniques used to produce it 109 .

It could be argued that some definitions are overly narrow because news stories do not need to be completely false in order to be misleading. A highly salient example comes from the Chicago Tribune , a generally credible outlet, which re-published a story in January 2021 with the headline “A healthy doctor died two weeks after getting a COVID-19 vaccine”. This story would not be classified as false on the basis of the source or even the content, as the events were true when considered in isolation. However, it is highly misleading—even considered unethical—to suggest that the doctor died specifically because of the COVID-19 vaccine when there was no evidence to make such a causal connection at the time of publication. This is not an obscure example: it was viewed over 50 million times on Facebook in early 2021 (ref. 121 ).

Another potential challenge with purely content-based definitions is that when expert consensus on a public-health topic is rapidly emerging and subject to uncertainty and change, the definition of what is likely to be true and false can shift over time, making overly simplistic ‘real’ versus ‘fake’ categorizations a potentially unstable property. For example, although news media initially claimed that ibuprofen could worsen coronavirus symptoms, this claim was later retracted as more evidence became available 122 . The problem is that researchers often ask people how accurate or reliable they find a selective series of true and fake headlines that were either created or selected by the researchers on the basis of different definitions of what constitutes misinformation.

There is also variation in outcome measures; sometimes the relevant outcome measure is misinformation susceptibility, and sometimes it is the difference between fake and real news detection, or so-called ‘truth discernment’. The only existing instrument that uses a psychometrically validated set of headlines is the recent Misinformation Susceptibility Test, a standardized measure of news veracity discernment that is normed to the test population 123 . Overall, this means that hundreds of emerging studies on the topic of misinformation have outcome measures that are non-standardized and not always easily comparable.

Susceptibility

Although people use many cognitive heuristics to make judgments about the veracity of a claim (for example, perceived source credibility) 25 , one particularly prominent finding that helps explain why people are susceptible to misinformation is known as the ‘illusory truth’ effect: repeated claims are more likely to be judged as true than non-repeated (or novel) claims 26 . Given the fact that many falsehoods are often repeated by the popular media, politicians, and social-media influencers, the relevance of illusory truth has increased substantially. For example, the conspiracy theory that the coronavirus was bio-engineered in a military laboratory in Wuhan, China, and the false claim that “COVID-19 is no worse than the flu” have been repeated many times in the media 27 . The primary cognitive mechanism responsible for the fact that people are more likely to think that repeated claims are true is known as processing fluency: the more a claim is repeated, the more familiar it becomes and the easier it is to process 28 . In other words, the brain uses fluency as a signal for truth. Importantly, research shows that (1) prior exposure to fake news increases its perceived accuracy 29 ; (2) illusory truth can occur for both plausible and implausible claims 30 ; (3) prior knowledge does not necessarily protect people against illusory truth 31 ; and (4) illusory truth does not appear to be moderated by thinking styles such as analytical versus intuitive reasoning 32 .

Although illusory truth can affect everyone, research has noted that some people are still more susceptible to misinformation than others. For example, some common findings include the observation that older individuals are more susceptible to fake news 33 , 34 , potentially owing to factors such as cognitive decline and greater digital illiteracy 35 , although there are exceptions: in the context of COVID-19, older individuals appear less likely to endorse misinformation 4 . Those with a more extreme and right-wing political orientation have also consistently been shown to be more susceptible to misinformation 3 , 4 , 33 , 36 , 37 , even when the misinformation in question is non-political 38 , 39 . Yet, the link between ideology and misinformation susceptibility is not always consistent across different cultures 4 , 37 . Other factors such as greater numeracy skills 4 and cognitive and analytic thinking styles 36 , 40 , 41 have consistently been revealed to have a negative correlation with misinformation susceptibility—although other scholars have identified partisanship as a potential moderating factor 42 , 43 , 44 . In fact, these individual differences have given rise to two competing overarching theoretical explanations 45 , 46 for why people are susceptible to misinformation. The first theory is often referred to as the classical ‘inattention’ account; the second is often dubbed the ‘identity-protective’ or ‘motivated cognition’ account. I will discuss emerging evidence for both theories in turn.

The inattention account

The inattention or ‘classical reasoning’ account argues that people are committed to sharing accurate content but the context of social media simply distracts people from making news-sharing decisions that are based on a preference for accuracy 45 . For example, consider that people are often bombarded with news content online, much of which is emotionally charged and political, which, coupled with the fact that people have limited time and resources to think about the veracity of a piece of news, might significantly interfere with their ability to accurately reflect on such content. The inattention account is based on a ‘classical’ reasoning perspective insofar as it draws on dual-process theories of human cognition, which suggest that people rely on two qualitatively different processes of reasoning 47 . These processes are often referred to as System 1, which is predominantly automatic, associative, and intuitive, and System 2, which is more reflective, analytical, and deliberate. A canonical example is the Cognitive Reflection Test (CRT), which administers a series of puzzles in which the intuitive or first answer that comes to mind is often wrong and thus a correct answer requires people to pause and reflect more carefully. The basic point is that activating more analytical System 2-type reasoning can override erroneous System 1-type intuitions. Evidence for the inattention account comes from the fact that people who score higher on the CRT 36 , 41 , who deliberate more 48 , who have greater numeracy skills 4 , and who have higher knowledge and education 37 , 49 are consistently better able to discern between true and false news—regardless of whether the content is politically congruent 36 . In addition, experimental interventions that ‘prime’ people to think more analytically or consider the accuracy of news content 50 , 51 have been shown to improve the quality of people’s news-sharing decisions and decrease acceptance of conspiracy theories 52 .

The motivated reasoning account

In stark contrast to the inattention account stands the theory of (politically) motivated reasoning 53 , 54 , 55 , which posits that information deficits or lack of reflective reasoning are not the primary driver of susceptibility to misinformation. Motivated reasoning occurs when someone starts out their reasoning process with a pre-determined goal (for example, someone might want to believe that vaccines are unsafe because that belief is shared by their family members), so individuals interpret new (mis)information in service of reaching that goal 53 . The motivated account therefore argues that the types of commitments that people have to their affinity groups is what leads them to selectively endorse media content that reinforces deeply held political, religious, or social identities 56 , 57 . There are several variants of the politically motivated reasoning account, but the basic premise is that people pay attention to not just the accuracy of a piece of news content, but also the goals that such information may serve. For example, a fake news story could be viewed as much more plausible when it happens to offer positive information about someone’s political group, or equally when it offers negative information about a political opponent 42 , 57 , 58 . A more extreme and scientifically contentious version of this model, also known as the ‘motivated numeracy’ 59 account, suggests that more reflective and analytical System 2 reasoning abilities do not help people make more accurate assessments but in fact are frequently hijacked in service of identity-based reasoning. Evidence for this claim comes from the fact that partisans with the highest numeracy and education levels tend to be the most polarized on contested scientific issues, such as climate change 60 or stem-cell research 61 . Experimental work has also shown that when people are asked to make causal inferences about a data problem, such as the benefits of a new skin rash treatment, people with greater numeracy skills performed better when the problem was non-political. By contrast, people became more polarized and less accurate when the same data were presented as results from a new study on gun control 59 . These patterns were more pronounced among those with higher numeracy skills. Other research has found that politically conservative individuals are much more likely to (mistakenly) judge misinformation as true when the information is presented as coming from a conservative source than when that same information is presented as coming from a liberal source, and vice versa for politically liberal individuals—highlighting the key role of politics in truth discernment 62 .

Susceptibility: limitations and future research

It is worth mentioning that both accounts face significant critiques and limitations. For example, independent replications of interventions designed to nudge accuracy have revealed mixed findings 63 , and questions have been raised about the conceptualization of partisan bias in these studies 43 , including the possibility that the intervention effects are moderated by people’s political identities 44 . In turn, the motivated numeracy account has faced several failed and mixed replications 64 , 65 , 66 . For example, one large nationally representative study in the United States showed that, although polarization on global warming was indeed greatest among the highest educated partisans at baseline, this effect was neutralized and even reversed by an experimental intervention that induced accuracy motivations by highlighting the scientific consensus on global warming 66 . These findings have led to the discovery of a much larger confound in the motivated-reasoning literature, in that partisan bias could simply be due to selective exposure rather than motivated reasoning 66 , 67 , 68 . This is so because the role of politics is confounded with people’s prior beliefs 66 . Although people are polarized on many issues, this does not mean that they are unwilling to update their (misinformed) beliefs in line with the evidence. Moreover, people might refuse to update their beliefs not because of a motivation to reject the information (because it is incongruent with their political worldview) but simply because they find the information not credible, either because they discount the source or the veracity of the content itself for what appear to be legitimate reasons to those individuals. This ‘equivalence paradox’ 69 makes it difficult to causally disentangle accuracy from motivation-based preferences.

Future research should therefore not only carefully manipulate people’s motivations in the processing of (mis)information that is politically (dis)concordant, but also offer a more integrated theoretical account of susceptibility to misinformation. For example, it is likely that for political fake news, identity-motivations are going to be more salient; however, for misinformation that tackles non-politicized issues (such as falsehoods about cures for the common cold), knowledge deficits, inattention, or confusion might be more likely to play a role. Of course, it is possible for public-health issues—such as COVID-19—to become politicized relatively quickly, in which case the prominence of motivational goals in driving susceptibility to misinformation might increase. Accuracy and motivational goals are also frequently in conflict. For example, people might understand that a news story is unlikely to be true, but if the misinformation promotes the goals of their social group, they might be more inclined to forgo their desire for accuracy in favor of a motivation to conform with the norms of their community 56 , 57 . In other words, in any given context, the importance people assign to accuracy versus social goals is going to determine how and when they are going to update their beliefs in light of misinformation. There is much to be gained by advancing more contextual theories that focus on the interplay between accuracy and socio-political goals in explaining why people are susceptible to misinformation.

Measuring the infodemic

To return to the viral analogy, researchers have adopted models from epidemiology, such as the susceptible–infected–recovered (SIR) model, to measure and quantify the spread of misinformation in online social networks 17 , 70 . In this context, R 0 often represents the number of individuals who will start posting fake news following contact with someone who is already ‘infected’. An infodemic becomes possible when R 0 exceeds 1, which signals the potential for exponential growth; when R 0 is lower than 1, the infodemic will eventually sizzle out. Analyses of social-media platforms have shown that all have the potential to drive infodemic-like spread, but some are more capable than others 17 . For example, research on Twitter has found that false news is about 70% more likely to be shared than true news, and it takes true news 6 times longer than false stories to reach 1,500 people 71 . Although fake news can thus spread faster and deeper than true news, it is important to emphasize that these findings are based on a relatively narrow definition of fact-checked news (see Box 2 and ref. 70 ), and more recent research has pointed out that these estimates are likely platform-dependent 72 . Importantly, several studies have now shown that fake news typically represents a small part of people’s overall media diet and that the spread of misinformation on social media is highly skewed so that a small number of accounts are responsible for the majority of the content that is shared and consumed, also known as ‘supersharers’ and ‘superconsumers’ 3 , 24 , 73 . Although much of this work has come from the political domain, very similar findings have been found in the context of the COVID-19 pandemic, during which ‘superspreaders’ on Twitter and Facebook were exerting a majority of the influence on the platform 74 . A major issue is the existence of echo chambers, in which the flow of information is often systematically biased toward like-minded others 72 , 75 , 76 . Although the prevalence of echo chambers is debated 77 , the existence of such polarized clusters has been shown to aid the virality of misinformation 75 , 78 , 79 and impede the spread of corrections 76 .
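To make the R 0 threshold concrete, the following toy simulation is a minimal sketch (not taken from the studies cited above, and with illustrative parameter values) of a discrete-time SIR-style model of posting behavior, run for reproduction numbers above and below 1:

```python
# Minimal sketch: discrete-time SIR-style spread of a false claim through a user population.
# Parameter values are illustrative only, not estimates from the studies cited above.
def simulate_infodemic(r0, recovery_rate=0.2, seed_fraction=0.001, steps=200):
    beta = r0 * recovery_rate               # per-step transmission rate implied by R0 = beta / gamma
    s, i, r = 1.0 - seed_fraction, seed_fraction, 0.0
    peak = i
    for _ in range(steps):
        new_posters = beta * s * i           # susceptible users who start posting the claim
        stopped = recovery_rate * i          # active posters who stop posting it
        s, i, r = s - new_posters, i + new_posters - stopped, r + stopped
        peak = max(peak, i)
    return peak, r                           # peak share of active posters, share ever 'infected'

for r0 in (0.8, 2.0):
    peak, total = simulate_infodemic(r0)
    print(f"R0 = {r0}: peak active posters = {peak:.3f}, ever posted = {total:.3f}")
```

With R 0 = 0.8 the claim fizzles out after reaching only a tiny fraction of users, whereas with R 0 = 2.0 it peaks with a substantial share of users actively posting it and eventually reaches most of the population, which is the qualitative pattern the epidemiological framing is meant to capture.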

Exposure does not equal infection

Importantly, exposure estimates based on social-media data often do not seem to line up with people’s self-reported experiences. Different polls show that over a third of people self-report frequent, if not daily, exposure to misinformation 80 . Of course, the validity of people’s self-reported experiences can be variable, but it raises questions about the accuracy of exposure estimates, which are often based on limited public data and can be sensitive to model assumptions. Moreover, a crucial factor to consider here is that exposure does not equal persuasion (or ‘infection’). For example, research in the context of COVID-19 headlines shows that people’s judgments of headline veracity had little impact on their sharing intentions 45 . People may thus choose to share misinformation for reasons other than accuracy. For example, one recent study 81 found that people often share content that appears ‘interesting if true’. The study indicated that although people rate fake news as less accurate, they also rate it as ‘more interesting if true’ than real news and are thus willing to share it.

Spread: limitations and future research

More generally, the body of research on ‘spreading’ has faced significant limitations, including critical gaps in knowledge. There is skepticism about the rate at which people exposed to misinformation begin to actually believe it because research on media and persuasion effects has shown that it is difficult to persuade people using traditional advertisements 82 . But existing research has often used contrived laboratory designs that may not sufficiently represent the environment in which people make news-sharing decisions. For example, studies often test one-off exposures to a single message rather than persuasion as a function of repeated exposure to misinformation from diverse social and traditional media sources. Accordingly, we need a better understanding of the frequency and intensity with which exposure to misinformation ultimately leads to persuasion. Most studies also rely on publicly available data that people have shared or clicked on, but people may be exposed and influenced by much more information while scrolling on their social-media feed 45 . Moreover, fake news is often conceptualized as a list of URLs that were fact-checked as true or false, but this type of fake news represents only a small segment of misinformation; people may be much more likely to encounter content that is misleading or manipulative without being overtly false (see Box 2 ). Finally, micro-targeting efforts have significantly enhanced the ability for misinformation producers to identify and target subpopulations of individuals who are most susceptible to persuasion 83 . In short, more research is needed before precise and valid conclusions can be made about either population-level exposure or the probability that exposure to misinformation leads to infection (that is, persuasion).

Immunization

A rapidly emerging body of research has started to evaluate the possibility of ‘immunizing’ the public against misinformation at a cognitive level. I will categorize these efforts by whether their application is primarily prophylactic (preventative) or therapeutic (post-exposure), also known as ‘prebunking’ and ‘debunking,’ respectively.

Therapeutic treatments: fact-checking and debunking

The traditional, standard approach to countering misinformation generally involves the correction of a myth or falsehood after people have already been exposed to or persuaded by a piece of misinformation. For example, debunking misinformation about autism interventions has been shown to be effective in reducing support for non-empirically supported treatments, such as dieting 84 . Exposure to court-ordered corrective advertisements from the tobacco industry on the link between smoking and disease can increase knowledge and reduce misperceptions about smoking 85 . In one randomized controlled trial, a video debunking several myths about vaccination effectively reduced influential misperceptions, such as the false belief that vaccines cause autism or that they reduce the strength of the natural immune system 86 . Meta-analyses have consistently found that fact-checking and debunking interventions can be effective 87 , 88 , including in the context of countering health misinformation on social media 89 . However, not all medical misperceptions are equally amenable to corrections 90 . In fact, these same analyses note that the effectiveness of interventions is significantly attenuated by (1) the quality of the debunk, (2) the passing of time, and (3) prior beliefs and ideologies. For example, the aforementioned studies on autism 84 and corrective smoking advertisements 85 showed no remaining effect after a 1-week and 6-week follow-up, respectively. When designing corrections, simply labeling information as false or incorrect is generally not sufficient because correcting a myth by means of a simple retraction leaves a gap in people’s understanding of why the information is false and what is true instead. Accordingly, the recommendation for practitioners is often to craft much more detailed debunking materials 88 . Reviews of the literature 91 , 92 have indicated that best practice in designing debunking messages involves (1) leading with the truth, (2) appealing to scientific consensus and authoritative expert sources, (3) ensuring that the correction is easily accessible and not more complex than the initial misinformation, (4) a clear explanation of why the misinformation is wrong, and (5) the provision of a coherent alternative causal explanation (Fig. 1 ). Although there is generally a lack of comparative research, some recent studies have shown that optimizing debunking messages according to these guidelines enhances their efficacy when compared with alternative or business-as-usual debunking methods 84 .

Fig. 1: An effective debunking message should open with the facts and present them in a simple and memorable fashion. The audience should then be warned about the myth (do not repeat the myth more than once). The manipulation technique used to mislead people should subsequently be identified and exposed. End by repeating the facts and emphasizing the correct explanation.

Debunking: limitations and future research

Despite these advances, significant concerns have been raised about the application of such post hoc ‘therapeutic’ corrections, most notably the risk of a correction backfiring so that people end up believing more in the myth as a result of the correction. This backfire effect can occur along two potential dimensions 92 , 93 , one of which concerns psychological reactance against the correction itself (the ‘worldview’ backfire effect) whereas the other is concerned with the repetition of false information (the ‘familiarity’ backfire effect). Although early research was supportive of the fact that, for example, corrections about myths surrounding the flu and MMR vaccine can cause already concerned individuals to become even more hesitant about vaccination decisions 94 , 95 , more recent studies have failed to find evidence for such worldview backfire effects 93 , 96 . In fact, while evidence of backfire remains widely cited, recent replications have failed to reproduce such effects when correcting misinformation about vaccinations specifically 97 . Thus, although the effect likely exists, it is less frequent and less intense than previously thought. Worldview backfire concerns can also be minimized by designing debunking messages in a way that coheres rather than conflicts with the recipients’ worldviews 92 . Nonetheless, because debunking forces a rhetorical frame in which the misinformation needs to be repeated in order to correct it (that is, rebutting someone else’s claim), there is a risk that such repetition enhances familiarity with the myth while people subsequently fail to encode the correction in long-term memory. Although research clearly shows that people are more likely to believe repeated (mis)information than non-repeated (mis)information 26 , recent work has found that the risk of ironically strengthening a myth as part of a debunking effort is relatively low 93 , especially when the debunking messages feature the correction prominently relative to the misinformation. The consensus is therefore that, although practitioners should be aware of these backfire concerns, they should not prevent the issuing of corrections given the infrequent nature of these side effects 91 , 93 .

Having said this, there are two other notable problems with therapeutic approaches that limit their efficacy. The first is that retrospective corrections do not reach the same amount of people as the original misinformation. For example, estimates reveal that only about 40% of smokers were exposed to the tobacco industry’s court-ordered corrections 98 . A related concern is that, after being exposed, people continue to make inferences on the basis of falsehoods, even when they acknowledge a correction. This phenomenon is known as the ‘continued influence of misinformation’ 92 , and meta-analyses have found robust evidence of continued influence effects in a wide range of contexts 88 , 99 .

Prophylactic treatments: inoculation theory

Accordingly, researchers have recently begun to explore prophylactic or pre-emptive approaches to countering misinformation, that is, before an individual has been exposed to or has reached ‘infectious’ status. Although prebunking is a more general term used for interventions that pre-emptively remind people to ‘think before they post’ 51 , such reminders in and of themselves do not equip people with any new skills to identify and resist misinformation. The most common framework for preventing unwanted persuasion is psychological inoculation theory 100 , 101 (Fig. 2 ). The theory of psychological inoculation follows the biomedical analogy and posits that, just as vaccines trigger the production of antibodies to help confer immunity against future infection, the same can be achieved with information. By pre-emptively forewarning and exposing people to severely weakened doses of misinformation (coupled with strong refutations), people can cultivate cognitive resistance against future misinformation 102 . Inoculation theory operates via two mechanisms, namely (1) motivational threat (a desire to defend oneself from manipulation attacks) and (2) refutational pre-emption or prebunking (pre-exposure to a weakened example of the attack). For example, research has found that inoculating people against conspiratorial arguments about vaccination before (but not after) exposure to a conspiracy theory effectively raised vaccination intentions 103 . Several recent reviews 102 , 104 and meta-analyses 105 have pointed to the efficacy of psychological inoculation as a robust strategy for conferring immunity to persuasion by misinformation, including many applications in the health domain 106 , such as inoculating people against misinformation about the use of mammography in breast-cancer screening 107 .

Fig. 2: Psychological inoculation consists of two core components: (1) forewarning people that they may be misled by misinformation (to activate the psychological ‘immune system’), and (2) prebunking the misinformation (tactic) by exposing people to a severely weakened dose of it coupled with strong counters and refutations (to generate the cognitive ‘antibodies’). Once people have gained ‘immunity’ they can then vicariously spread the inoculation to others via offline and online interactions.

Several recent advances, in particular, are worth noting. The first is that the field has moved from ‘narrow-spectrum’ or ‘fact-based’ inoculation to ‘broad-spectrum’ or ‘technique-based’ immunization 102 , 108 . The reasoning behind this shift is that, although it is possible to synthesize a severely weakened dose from existing misinformation (and to subsequently refute that weakened example with strong counterarguments), it is difficult to scale the vaccine if this process has to be repeated anew for every piece of misinformation. Instead, scholars have started to identify the common building blocks of misinformation more generally 38 , 109 , including techniques such as impersonating fake experts and doctors, manipulating people’s emotions with fear appeals, and the use of conspiracy theories. Research has found that people can be inoculated against these underlying strategies and, as a result, become relatively more immune to a whole range of misinformation that makes use of these tactics 38 , 102 . This process is sometimes referred to as cross-protection insofar as inoculating people against one strain offers protection against related and different strains of the same misinformation tactic.

A second advance surrounds the application of active versus passive inoculation. Whereas the traditional inoculation process is passive insofar as people pre-emptively receive the specific refutations from the experimenter, the process of active inoculation encourages people to generate their own ‘antibodies’. Perhaps the best-known examples of active inoculation are popular gamified inoculation interventions such as Bad News 38 and GoViral! 110 , where players step into the shoes of a misinformation producer and are exposed—in a simulated social-media environment—to weakened doses of common strategies used to spread misinformation. As part of this process, players actively generate their own media content and unveil the techniques of manipulation. Research has found that resistance to deception occurs when people (1) recognize their own vulnerability to being persuaded and (2) perceive undue intent to manipulate their opinion 111 , 112 . These games therefore aim to expose people’s vulnerability, motivating individuals’ desire to protect themselves against misinformation through pre-exposure to weakened doses. Randomized controlled trials have found that active inoculation games help people identify misinformation 38 , 110 , 113 , 114 , boost confidence in people’s truth-discernment abilities 110 , 113 , and reduce self-reported sharing of misinformation 110 , 115 . Yet, like many biological vaccines, research has found that psychological immunity also wanes over time but can be maintained for several months with regular ‘booster’ shots that re-engage people with the inoculation process 114 . A benefit of this line of research is that these gamified interventions have been evaluated and scaled across millions of people as part of the WHO’s ‘Stop The Spread’ campaign and the United Nations’ ‘Verified’ initiative in collaboration with the UK government 110 , 116 .

Prebunking: limitations and future research

A potential limitation is that, although misinformation tropes are often repeated throughout history (consider similarities in the myths that the cowpox vaccine would turn people into human–cow hybrids and the conspiracy theory that COVID-19 vaccines alter human DNA), inoculation does require at least some advance knowledge of what misinformation (tactic) people might be exposed to in the future 91 . In addition, as healthcare workers are being trained to combat misinformation 117 , it is important to avoid confusion in terminology when using psychological inoculation to counter vaccine hesitancy. For example, the approach can be implemented without making explicit reference to the vaccination analogy and instead can focus on the value of ‘prebunking’ and helping people unveil the techniques of manipulation.

Several other important open questions remain. For example, analogous to recent advances in experimental medicine on the application of therapeutic vaccines—which can still boost immune responses after infection—research has found that inoculation can still protect people from misinformation even when they have already been exposed to misinformation 108 , 112 , 118 . This makes conceptual sense insofar as it may take repeated exposure or a significant amount of time for misinformation to fully persuade people or become integrated with prior attitudes. Yet it remains conceptually unclear at which point therapeutic inoculation transitions into traditional debunking. Moreover, although some comparisons of active versus passive inoculation approaches exist 105 , 110 , the evidence base for active forms of inoculation remains relatively small. Similarly, whereas head-to-head studies that compared prebunking to debunking suggest that prevention is indeed better than cure 103 , more comparative research is needed. Research also finds that it is possible for people to vicariously pass on the inoculation interpersonally or on social media, a process known as ‘post-inoculation talk’ 104 , which alludes to the possibility of herd immunity in online communities 110 , yet no social-network simulations currently exist that evaluate the potential of inoculative approaches. Current studies are also based on self-reported sharing of misinformation. Future research will need to evaluate the extent to which inoculation can scale across the population and influence objective news-sharing behavior on social media.

The spread of misinformation has undermined public-health efforts, from vaccination uptake to public compliance with health-protective behaviors. Research finds that although people are sometimes duped by misinformation because they are distracted on social media and are not paying sufficient attention to accuracy cues, the politicized nature of many public-health challenges suggests that people also believe in and share misinformation because doing so reinforces important socio-political beliefs and identity structures. A more integrated framework is needed that is sensitive to context and can account for varying susceptibility to misinformation on the basis of how people prioritize accuracy and social motives when forming judgments about the veracity of news media. Although ‘exposure’ does not equal ‘infection,’ misinformation can spread fast in online networks, and its virality is often aided by the existence of political echo chambers. Importantly, however, the bulk of misinformation on social media often originates from influential accounts and superspreaders. Therapeutic and prophylactic approaches to countering misinformation have both demonstrated some success, but given the continued influence of misinformation after exposure, there is much value in preventative approaches, and more research is needed on how to best combine debunking and prebunking efforts. Further research is also encouraged to outline the benefits and potential challenges to applying the epidemiological model to understand the psychology behind the spread of misinformation. A major challenge for the field moving forward will be clearly defining how misinformation is measured and conceptualized, as well as the need for standardized psychometric instruments that allow for better comparisons of outcomes across studies.

Zarocostas, J. How to fight an infodemic. Lancet 395 , 676 (2020).

Allcott, H. & Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 31 , 211–236 (2017).

Grinberg, N. et al. Fake news on Twitter during the 2016 US presidential election. Science 363 , 374–378 (2019).

Roozenbeek, J. et al. Susceptibility to misinformation about COVID-19 around the world. R. Soc. Open Sci. 7 , 201199 (2020).

Romer, D. & Jamieson, K. H. Conspiracy theories as barriers to controlling the spread of COVID-19 in the US. Soc. Sci. Med. 263 , 113356 (2020).

Imhoff, R. & Lamberty, P. A bioweapon or a hoax? The link between distinct conspiracy beliefs about the coronavirus disease (COVID-19) outbreak and pandemic behavior. Soc. Psychol. Personal. Sci. 11 , 1110–1118 (2020).

Freeman, D. et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol. Med. https://doi.org/10.1017/S0033291720001890 (2020).

Loomba, S. et al. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat. Hum. Behav. 5 , 337–348 (2021).

Johnson, N. et al. The online competition between pro- and anti-vaccination views. Nature 582 , 230–233 (2020).

Aghababaeian, H. et al. Alcohol intake in an attempt to fight COVID-19: a medical myth in Iran. Alcohol 88 , 29–32 (2020).

Jolley, D. & Paterson, J. L. Pylons ablaze: examining the role of 5G COVID‐19 conspiracy beliefs and support for violence. Br. J. Soc. Psychol. 59 , 628–640 (2020).

Dubé, E. et al. Vaccine hesitancy, vaccine refusal and the anti-vaccine movement: influence, impact and implications. Expert Rev. Vaccines 14 , 99–117 (2015).

Armstrong, G. M. et al. A longitudinal evaluation of the Listerine corrective advertising campaign. J. Public Policy Mark. 2 , 16–28 (1983).

Albarracin, D. et al. Misleading claims about tobacco products in YouTube videos: experimental effects of misinformation on unhealthy attitudes. J. Medical Internet Res . 20 , e9959 (2018).

Krishna, A. & Thompson, T. L. Misinformation about health: a review of health communication and misinformation scholarship. Am. Behav. Sci. 65 , 316–332 (2021).

Kucharski, A. Study epidemiology of fake news. Nature 540 , 525–525 (2016).

Cinelli, M. et al. The COVID-19 social media infodemic. Sci. Rep. 10 , 1–10 (2020).

Scales, D. et al. The COVID-19 infodemic—applying the epidemiologic model to counter misinformation. N. Engl. J. Med 385 , 678–681 (2021).

Vraga, E. K. & Bode, L. Defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation. Polit. Commun. 37 , 136–144 (2020).

Southwell et al. Misinformation as a misunderstood challenge to public health. Am. J. Prev. Med. 57 , 282–285 (2019).

Wardle, C. & Derakhshan, H. Information Disorder: toward an Interdisciplinary Framework for Research and Policymaking . Council of Europe report DGI (2017)09 (Council of Europe, 2017).

van der Linden, S. et al. You are fake news: political bias in perceptions of fake news. Media Cult. Soc. 42 , 460–470 (2020).

Tandoc, E. C. Jr et al. Defining ‘fake news’ a typology of scholarly definitions. Digit. J. 6 , 137–153 (2018).

Allen, J. et al. Evaluating the fake news problem at the scale of the information ecosystem. Sci. Adv. 6 , eaay3539 (2020).

Marsh, E. J. & Yang, B. W. in Misinformation and Mass Audiences (eds Southwell, B. G., Thorson, E. A., & Sheble, L) 15–34 (University of Texas Press, 2018).

Dechêne, A. et al. The truth about the truth: a meta-analytic review of the truth effect. Pers. Soc. Psychol. Rev. 14 , 238–257 (2010).

Lewis, T. Eight persistent COVID-19 myths and why people believe them. Scientific American . https://www.scientificamerican.com/article/eight-persistent-covid-19-myths-and-why-people-believe-them/ (2020).

Wang, W. C. et al. On known unknowns: fluency and the neural mechanisms of illusory truth. J. Cogn. Neurosci. 28 , 739–746 (2016).

Pennycook, G. et al. Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen. 147 , 1865–1880 (2018).

Fazio, L. K. et al. Repetition increases perceived truth equally for plausible and implausible statements. Psychon. Bull. Rev. 26 , 1705–1710 (2019).

Fazio, L. K. et al. Knowledge does not protect against illusory truth. J. Exp. Psychol. Gen. 144 , 993–1002 (2015).

De Keersmaecker, J. et al. Investigating the robustness of the illusory truth effect across individual differences in cognitive ability, need for cognitive closure, and cognitive style. Pers. Soc. Psychol. Bull. 46 , 204–215 (2020).

Guess, A. et al. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci. Adv. 5 , eaau4586 (2019).

Saunders, J. & Jess, A. The effects of age on remembering and knowing misinformation. Memory 18 , 1–11 (2010).

Brashier, N. M. & Schacter, D. L. Aging in an era of fake news. Curr. Dir. Psychol. Sci. 29 , 316–323 (2020).

Pennycook, G. & Rand, D. G. Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188 , 39–50 (2019).

Imhoff, R. et al. Conspiracy mentality and political orientation across 26 countries. Nat. Hum. Behav. https://doi.org/10.1038/s41562-021-01258-7 (2022).

Roozenbeek, J. & van der Linden, S. Fake news game confers psychological resistance against online misinformation. Humanit. Soc. Sci. Commun. 5 , 1–10 (2019).

Van der Linden, S. et al. The paranoid style in American politics revisited: an ideological asymmetry in conspiratorial thinking. Polit. Psychol. 42 , 23–51 (2021).

De keersmaecker, J. & Roets, A. ‘Fake news’: incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions. Intelligence 65 , 107–110 (2017).

Bronstein, M. V. et al. Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. J. Appl. Res. Mem. 8 , 108–117 (2019).

Greene, C. M. et al. Misremembering Brexit: partisan bias and individual predictors of false memories for fake news stories among Brexit voters. Memory 29 , 587–604 (2021).

Gawronski, B. Partisan bias in the identification of fake news. Trends Cogn. Sci. 25 , 723–724 (2021).

Rathje, S et al. Meta-analysis reveals that accuracy nudges have little to no effect for US conservatives: Regarding Pennycook et al. (2020). Psychol. Sci. https://doi.org/10.25384/SAGE.12594110.v2 (2021).

Pennycook, G. & Rand, D. G. The psychology of fake news. Trends Cogn. Sci. 22 , 388–402 (2021).

van der Linden, S. et al. How can psychological science help counter the spread of fake news? Span. J. Psychol. 24 , e25 (2021).

Evans, J. S. B. In two minds: dual-process accounts of reasoning. Trends Cogn. Sci. 7 , 454–459 (2003).

Bago, B. et al. Fake news, fast and slow: deliberation reduces belief in false (but not true) news headlines. J. Exp. Psychol. Gen. 149 , 1608–1613 (2020).

Scherer, L. D. et al. Who is susceptible to online health misinformation? A test of four psychosocial hypotheses. Health Psychol. 40 , 274–284 (2021).

Pennycook, G. et al. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol. Sci. 31 , 770–780 (2020).

Pennycook, G. et al. Shifting attention to accuracy can reduce misinformation online. Nature 592 , 590–595 (2021).

Swami, V. et al. Analytic thinking reduces belief in conspiracy theories. Cognition 133 , 572–585 (2014).

Kunda, Z. The case for motivated reasoning. Psychol. Bull. 108 , 480–498 (1990).

Kahan, D. M. in Emerging Trends in the Social and Behavioral sciences (eds Scott, R. & Kosslyn, S.) 1–16 (John Wiley & Sons, 2016).

Bolsen, T. et al. The influence of partisan motivated reasoning on public opinion. Polit. Behav. 36 , 235–262 (2014).

Osmundsen, M. et al. Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. Am. Polit. Sci. Rev. 115 , 999–1015 (2021).

Van Bavel, J. J. et al. Political psychology in the digital (mis) information age: a model of news belief and sharing. Soc. Issues Policy Rev. 15 , 84–113 (2020).

Rathje, S. et al. Out-group animosity drives engagement on social media. Proc. Natl Acad. Sci. USA 118 , e2024292118 (2021).

Kahan, D. M. et al. Motivated numeracy and enlightened self-government. Behav. Public Policy 1 , 54–86 (2017).

Kahan, D. M. et al. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nat. Clim. Chang. 2 , 732–735 (2012).

Drummond, C. & Fischhoff, B. Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proc. Natl Acad. Sci. USA 114 , 9587–9592 (2017).

Traberg, C. S. & van der Linden, S. Birds of a feather are persuaded together: perceived source credibility mediates the effect of political bias on misinformation susceptibility. Pers. Individ. Differ. 185 , 111269 (2022).

Roozenbeek, J. et al. How accurate are accuracy-nudge interventions? A preregistered direct replication of Pennycook et al. (2020). Psychol. Sci. 32 , 1169–1178 (2021).

Persson, E. et al. A preregistered replication of motivated numeracy. Cognition 214 , 104768 (2021).

Connor, P. et al. Motivated numeracy and active reasoning in a Western European sample. Behav. Public Policy 1–23 (2020).

van der Linden, S. et al. Scientific agreement can neutralize politicization of facts. Nat. Hum. Behav. 2 , 2–3 (2018).

Tappin, B. M. et al. Rethinking the link between cognitive sophistication and politically motivated reasoning. J. Exp. Psychol. Gen. 150 , 1095–1114 (2021).

Tappin, B. M. et al. Thinking clearly about causal inferences of politically motivated reasoning: why paradigmatic study designs often undermine causal inference. Curr. Opin. Behav. Sci. 34 , 81–87 (2020).

Druckman, J. N. & McGrath, M. C. The evidence for motivated reasoning in climate change preference formation. Nat. Clim. Chang. 9 , 111–119 (2019).

Juul, J. L. & Ugander, J. Comparing information diffusion mechanisms by matching on cascade size. Proc. Natl. Acad. Sci. USA 118 , e210078611 (2021).

Vosoughi, S. et al. The spread of true and false news online. Science 359 , 1146–1151 (2018).

Cinelli, M. et al. The echo chamber effect on social media. Proc. Natl Acad. Sci. USA 118 , e2023301118 (2021).

Guess, A. et al. Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav. 4 , 472–480 (2020).

Yang, K. C. et al. The COVID-19 infodemic: Twitter versus Facebook. Big Data Soc. 8 , 20539517211013861 (2021).

Del Vicario, M. et al. The spreading of misinformation online. Proc. Natl Acad. Sci. USA 113 , 554–559 (2016).

Zollo, F. et al. Debunking in a world of tribes. PloS ONE 12 , e0181821 (2017).

Guess, A. M. (Almost) everything in moderation: new evidence on Americans’ online media diets. Am. J. Pol. Sci. 65 , 1007–1022 (2021).

Törnberg, P. Echo chambers and viral misinformation: modeling fake news as complex contagion. PLoS ONE 13 , e0203958 (2018).

Choi, D. et al. Rumor propagation is amplified by echo chambers in social media. Sci. Rep. 10 , 1–10 (2020).

Eurobarometer on Fake News and Online Disinformation. European Commission https://ec.europa.eu/digital-single-market/en/news/final-results-eurobarometer-fake-news-and-online-disinformation (2018).

Altay, S. et al. ‘If this account is true, it is most enormously wonderful’: interestingness-if-true and the sharing of true and false news. Digit. Journal. https://doi.org/10.1080/21670811.2021.1941163 (2021).

Kalla, J. L. & Broockman, D. E. The minimal persuasive effects of campaign contact in general elections: evidence from 49 field experiments. Am. Political Sci. Rev. 112 , 148–166 (2018).

Matz, S. C. et al. Psychological targeting as an effective approach to digital mass persuasion. Proc. Natl Acad. Sci. USA 114 , 12714–12719 (2017).

Paynter, J. et al. Evaluation of a template for countering misinformation—real-world autism treatment myth debunking. PloS ONE 14 , e0210746 (2019).

Smith, P. et al. Correcting over 50 years of tobacco industry misinformation. Am. J. Prev. Med 40 , 690–698 (2011).

Yousuf, H. et al. A media intervention applying debunking versus non-debunking content to combat vaccine misinformation in elderly in the Netherlands: a digital randomised trial. EClinicalMedicine 35 , 100881 (2021).

Walter, N. & Murphy, S. T. How to unring the bell: a meta-analytic approach to correction of misinformation. Commun. Monogr. 85 , 423–441 (2018).

Chan, M. P. S. et al. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. 28 , 1531–1546 (2017).

Walter, N. et al. Evaluating the impact of attempts to correct health misinformation on social media: a meta-analysis. Health Commun. 36 , 1776–1784 (2021).

Aikin, K. J. et al. Correction of overstatement and omission in direct-to-consumer prescription drug advertising. J. Commun. 65 , 596–618 (2015).

Lewandowsky, S. et al. The Debunking Handbook 2020 https://www.climatechangecommunication.org/wp-content/uploads/2020/10/DebunkingHandbook2020.pdf (2020).

Lewandowsky, S. et al. Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Publ. Int 13 , 106–131 (2012).

Swire-Thompson, B. et al. Searching for the backfire effect: measurement and design considerations. J. Appl. Res. Mem. Cogn. 9 , 286–299 (2020).

Nyhan, B. et al. Effective messages in vaccine promotion: a randomized trial. Pediatrics 133 , e835–e842 (2014).

Nyhan, B. & Reifler, J. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine 33 , 459–464 (2015).

Wood, T. & Porter, E. The elusive backfire effect: mass attitudes’ steadfast factual adherence. Polit. Behav. 41 , 135–163 (2019).

Haglin, K. The limitations of the backfire effect. Res. Politics https://doi.org/10.1177/2053168017716547 (2017).

Chido-Amajuoyi et al. Exposure to court-ordered tobacco industry antismoking advertisements among US adults. JAMA Netw. Open 2 , e196935 (2019).

Walter, N. & Tukachinsky, R. A meta-analytic examination of the continued influence of misinformation in the face of correction: how powerful is it, why does it happen, and how to stop it? Commun. Res 47 , 155–177 (2020).

Papageorgis, D. & McGuire, W. J. The generality of immunity to persuasion produced by pre-exposure to weakened counterarguments. J. Abnorm. Psychol. 62 , 475–481 (1961).

McGuire, W. J. in Advances in Experimental Social Psychology (ed Berkowitz, L.) 191–229 (Academic Press, 1964).

Lewandowsky, S. & van der Linden, S. Countering misinformation and fake news through inoculation and prebunking. Eur. Rev. Soc. Psychol. 32 , 348–384 (2021).

Jolley, D. & Douglas, K. M. Prevention is better than cure: addressing anti vaccine conspiracy theories. J. Appl. Soc. Psychol. 47 , 459–469 (2017).

Compton, J. et al. Inoculation theory in the post‐truth era: extant findings and new frontiers for contested science, misinformation, and conspiracy theories. Soc. Personal. Psychol. 15 , e12602 (2021).

Banas, J. A. & Rains, S. A. A meta-analysis of research on inoculation theory. Commun. Monogr. 77 , 281–311 (2010).

Compton, J. et al. Persuading others to avoid persuasion: Inoculation theory and resistant health attitudes. Front. Psychol. 7 , 122 (2016).

Iles, I. A. et al. Investigating the potential of inoculation messages and self-affirmation in reducing the effects of health misinformation. Sci. Commun. 43 , 768–804 (2021).

Cook et al. Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PloS ONE 12 , e0175799 (2017).

van der Linden, S., & Roozenbeek, J. in The Psychology of Fake News: Accepting, Sharing, and Correcting Misinformation (eds Greifeneder, R., Jaffe, M., Newman, R., & Schwarz, N.) 147–169 (Psychology Press, 2020).

Basol, M. et al. Towards psychological herd immunity: cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data Soc. 8 , 20539517211013868 (2021).

Sagarin, B. J. et al. Dispelling the illusion of invulnerability: the motivations and mechanisms of resistance to persuasion. J. Pers. Soc. Psychol. 83 , 526–541 (2002).

van der Linden, S. et al. Inoculating the public against misinformation about climate change. Glob. Chall. 1 , 1600008 (2017).

Basol, M. et al. Good news about bad news: gamified inoculation boosts confidence and cognitive immunity against fake news. J. Cogn. 3 , 2 (2020).

Maertens, R. et al. Long-term effectiveness of inoculation against misinformation: three longitudinal experiments. J. Exp. Psychol. Appl 27 , 1–16 (2021).

Roozenbeek, J., & van der Linden, S. Breaking Harmony Square: a game that ‘inoculates’ against political misinformation. The Harvard Kennedy School Misinformation Review https://doi.org/10.37016/mr-2020-47 (2020).

What is Go Viral? World Health Organization https://www.who.int/news/item/23-09-2021-what-is-go-viral (WHO, 2021).

Abbasi, J. COVID-19 conspiracies and beyond: how physicians can deal with patients’ misinformation. JAMA 325 , 208–210 (2021).

Compton, J. Prophylactic versus therapeutic inoculation treatments for resistance to influence. Commun. Theory 30 , 330–343 (2020).

Lazer, D. M. et al. The science of fake news. Science 359 , 1094–1096 (2018).

Pennycook, G. & Rand, D. G. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J. Pers. 88 , 185–200 (2020).

Benton, J. Facebook sent a ton of traffic to a Chicago Tribune story. So why is everyone mad at them? NiemanLab https://www.niemanlab.org/2021/08/facebook-sent-a-ton-of-traffic-to-a-chicago-tribune-story-so-why-is-everyone-mad-at-them/ (2021).

Poutoglidou, F. et al. Ibuprofen and COVID-19 disease: separating the myths from facts. Expert Rev. Respir. Med 15 , 979–983 (2021).

Maertens, R. et al. The Misinformation Susceptibility Test (MIST): a psychometrically validated measure of news veracity discernment. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/gk68h (2021).

Acknowledgements

I am grateful for support from the IRIS Infodemic Coalition (UK Government, no. SCH-00001-3391) and JITSUVAX (EU 2020 Horizon no. 964728). I thank the Cambridge Social Decision-Making Lab and credit R. Maertens in particular for his help with designing Fig. 2 .

Author information

Authors and Affiliations

Department of Psychology, School of Biology, University of Cambridge, Cambridge, United Kingdom

Sander van der Linden

Corresponding author

Correspondence to Sander van der Linden .

Ethics declarations

Competing interests.

S.V.D.L. co-designed several interventions in collaboration with the UK Government, DROG, and the WHO, reviewed in this paper, namely GoViral! and Bad News . He does not receive nor hold any financial interests in these interventions. He has received research funding from or consulted for the UK government, the US Government, the European Commission, Facebook, Google, WhatsApp, Edelman, the United Nations, and the WHO on misinformation and infodemic management.

Peer review

Peer review information.

Nature Medicine thanks Brian Southwell and Alison Buttenheim for their contribution to the peer review of this work. Karen O’Leary was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article.

van der Linden, S. Misinformation: susceptibility, spread, and interventions to immunize the public. Nat Med 28 , 460–467 (2022). https://doi.org/10.1038/s41591-022-01713-6

Received: 21 November 2021

Accepted: 24 January 2022

Published: 10 March 2022

Issue Date: March 2022

DOI: https://doi.org/10.1038/s41591-022-01713-6

This article is cited by

The adaptive community-response (ACR) method for collecting misinformation on social media

  • Julian Kauk
  • Helene Kreysa
  • Stefan R. Schweinberger

Journal of Big Data (2024)

Belief-consistent information is most shared despite being the least surprising

  • Jacob T. Goebel
  • Mark W. Susmann
  • Duane T. Wegener

Scientific Reports (2024)

Correcting vaccine misinformation on social media: the inadvertent effects of repeating misinformation within such corrections on COVID-19 vaccine misperceptions

  • Jiyoung Lee
  • Kim Bissell

Current Psychology (2024)

Entwicklungen in der Digitalisierung von Public Health seit 2020

  • Benjamin Schüz
  • Iris Pigeot

Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz (2024)

The Society of Information and the European Citizens’ Perception of Climate Change: Natural or Anthropological Causes

  • Fernando Mata
  • Maria Dos-Santos
  • Manuela Vaz-Velho

Environmental Management (2024)

How Social Media Rewards Misinformation

A majority of false stories are spread by a small number of frequent users, suggests a new study co-authored by Yale SOM’s Gizem Ceylan. But they can be taught to change their ways.

  • Gizem Ceylan, Postdoctoral Researcher, Yale School of Management

In the early months of the COVID-19 pandemic, posts and videos promoting natural remedies for the virus—everything from steam inhalation to ginger—proliferated online. What explains the storm of likes and shares that propels online misinformation through a social media network?

Some scholars suggest that people share falsehoods out of bias, while others argue that failures of critical thinking or media literacy are to blame. But new research from Gizem Ceylan, a postdoctoral scholar at Yale SOM, suggests that these explanations miss something important. In a new study co-authored by Ian Anderson and Wendy Wood of the University of Southern California, Ceylan found that the reward systems of social media platforms are inadvertently encouraging users to spread misinformation.

By constantly reinforcing sharing—any sharing—with likes and comments, platforms have created habitual users who are largely unconcerned with the content they post. And these habitual users, the research shows, spread a disproportionate share of misinformation.

“It’s not that people are lazy or don’t want to know the truth,” Ceylan says. “The platforms’ reward systems are wrong.”

The researchers conducted several experiments to test how different kinds of social media users interact with true and false headlines. In the first, they asked participants to review eight true and eight false headlines and decide whether to share them to a simulated Facebook feed. They also asked several questions designed to assess how habitually participants used Facebook—that is, how much time they spent on the platform and how automatically they shared content.

The results showed that, overall, participants shared many more true headlines than false ones. However, the patterns were markedly different for the most habitual Facebook users. These participants not only shared more headlines overall, but also shared a roughly equal percentage of true (43%) and false (38%) ones. Less frequent users, by contrast, shared 15% of true and just 6% of false headlines.

Ceylan and her co-authors calculated that the 15% most habitual Facebook users were responsible for 37% of the false headlines shared in the study, suggesting that a relatively small number of people can have an outsized impact on the information ecosystem.

But why do the heaviest users share the most misinformation? Is it because the misinformation is in line with their beliefs? Not so, a subsequent experiment found. Habitual users will even share misinformation that goes against their political beliefs, the researchers discovered.

In this experiment, Ceylan and her colleagues asked participants to consider headlines that reflected partisan bias. Then, they examined concordant sharing (that is, liberals sharing liberal headlines or conservatives sharing conservative headlines) and discordant sharing (liberals sharing conservative headlines and vice versa) among both habitual Facebook users and less frequent ones.

Interestingly, while less frequent users showed a stark preference for sharing concordant as compared to discordant headlines (that is, reflecting the partisan bias found in prior research), the pattern was less marked among the heaviest users. These habitual users shared more overall and also showed a less strong preference for concordant as compared to discordant information—another indication that habit was driving their behavior.

To Ceylan, this suggests that heavy Facebook users are not ideologically motivated or lazy in their processing when they spread misinformation. Instead, when users’ sharing habits are activated with cues from the social platform, the content they are sharing—its accuracy and partisan slant—is largely irrelevant to them. They post simply because the platform rewards posting with engagement in the form of likes, comments, and re-shares.

“This was kind of a shocker for the misinformation research community,” she says. “What we showed is that, if people are habitual sharers, they’ll share any type of information, because they don’t care [about the content]. All they care about is likes and attention. The content that gets attention becomes part of habitual users’ mental representations. Over time, they just share content that fits this mental representation. Thus, rewards on a social platform are critical in shaping people’s habits and what they are attuned to share with others.”

The study also suggests a potential way out of this trap. In their last experiment, the researchers simulated a new structure that rewarded participants for accuracy: when they shared accurate information, participants were awarded points redeemable for an Amazon gift card.

“In this simulation, everyone shared lots of true headlines and few false headlines. Their previous social media habits did not matter,” Ceylan explains. “What this means is that, when you reward people for accuracy, people learn the type of content that gets rewards and build habits for sharing that content (i.e., accuracy in this case).” What’s more, surprisingly, the users continued to share accurate headlines even after the researchers removed the reward for accuracy. These results show that users can be incentivized to develop a new habit: accuracy.

To Ceylan, the research demonstrates how powerfully social media has shaped our habits. Platforms aim for more profit and engagement—i.e., more users spending longer hours using them. By rewarding and amplifying any type of engagement regardless of its quality or accuracy, platforms have created users who will share indiscriminately. “It’s a system issue, not an individual issue. We need to create better environments on social media platforms to help people make better decisions. We cannot keep on blaming users for showing political biases or being lazy for the misinformation problem. We have to change the reward structure on these platforms.”

What can be done to reduce the spread of fake news? MIT Sloan research finds that shifting people’s attention toward accuracy can decrease online misinformation sharing

MIT Sloan Office of Communications

Mar 17, 2021

Findings have implications for how social media companies stem the flow of false news 

Cambridge, Mass., March 17, 2021—Simple interventions to reduce the spread of misinformation can shift people’s attention toward accuracy and help them become more discerning about the veracity of the information they share on social media, according to new research led by David Rand, Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences, at the MIT Sloan School of Management.

Rand conducted the research with his colleagues, Gordon Pennycook of the Hill/Levene Schools of Business at the University of Regina, Ziv Epstein, a doctoral student at the MIT Media Lab, Mohsen Mosleh of the University of Exeter Business School, Antonio Arechar, a research associate at MIT Sloan, and Dean Eckles, the Mitsubishi Career Development Professor and an Associate Professor of Marketing at MIT Sloan. The team’s findings are published in a forthcoming issue of the journal Nature. 

The study arrives at a time when the sharing of misinformation on social media—including both patently false political “fake news” and misleading hyperpartisan content—has become a key focus of public debate around the world. The topic gained prominence in 2016 in the aftermath of the U.S. presidential election and the referendum on Britain’s exit from the European Union, known as Brexit, during which fabricated stories, presented as legitimate news, received wide distribution on social media. The proliferation of false news during the COVID-19 pandemic, and this January’s violent insurrection in the nation’s Capitol, illustrate that disinformation on platforms including Facebook and Twitter remains a pervasive problem.

The study comprises a series of surveys and field experiments. In the first survey, which involved roughly 850 social media users, the researchers found a disconnect between how people judge a news article’s accuracy and their decision of whether or not to share it. Even though people rated true headlines as much more accurate than false headlines, headline veracity had little impact on sharing. Although this may seem to indicate that people share inaccurate content because, for example, they care more about furthering their political agenda than they care about truth, Prof. Rand and his team propose an alternative explanation: Most people do not want to spread misinformation, but the social media context focuses their attention on factors other than truth and accuracy. Indeed, when directly asked, most participants said it was important to only share news that is accurate – even when they had just indicated they would share numerous false headlines only minutes before.

“The problem is not so much that people don’t care about the truth or want to purposely spread fake news; it’s that social media makes us share things that we would think better of if we stopped to think,” says Prof. Rand. “It’s understandable: scrolling through Twitter and Facebook is distracting. You’re moving at top speed, and reading the news while also being bombarded with pictures of cute babies and funny cat videos. You forget to think about what’s true or not. When it comes to retweeting a headline—even one you would realize was inaccurate if you thought about it—you fail to carefully consider its truthfulness because your attention is elsewhere.”

Subsequent survey experiments with thousands of Americans found that subtly prompting people to think about accuracy increases the quality of the news they share. In fact, when participants had to consider accuracy before making their decisions the sharing of misinformation was cut in half.

Finally, the team conducted a digital field experiment involving over 5,000 Twitter users who had previously shared news from websites known for publishing misleading content. The researchers used bot accounts to send the users a message asking them to evaluate the accuracy of a random non-political headline – and found that this simple accuracy prompt significantly improved the quality of the news the users subsequently retweeted. “Our message made the idea of accuracy more top-of-mind,” says Prof. Pennycook, who was the co-lead author on the paper with Mosleh and Epstein. “So, when they went back to their newsfeeds, they were more likely to ask themselves if posts they saw were accurate before deciding whether to share them.”

The research team’s findings have implications for how social media companies can stem the flow of misinformation. Platforms could, for instance, implement simple accuracy prompts to shift users’ attention towards the reliability of the content they read before they share it online. “By leveraging people’s existing but latent capabilities for discerning what is true, this approach has the advantage of preserving user autonomy. Therefore, it doesn’t require social media platforms to be the arbiters of truth, but instead enables the users of those platforms,” says Epstein. The team has been working with researchers at Google to develop applications based on this idea, and hope that social media companies like Facebook and Twitter will follow suit.

“Our research shows that people are actually often fairly good at discerning falsehoods from facts, but in the social media context they’re distracted and lack the time and inclination to consider it,” says Prof. Mosleh. “But if the social media platforms reminded users to think about accuracy—maybe when they log on or as they’re scrolling through their feeds—it could be just the subtle prod people need to get in a mindset where they think twice before they retweet,” concludes Prof. Rand.

What is misinformation?

Learn about fake news and its impact on children.

With so many sources of information online, some children might struggle to make sense of what is true.

In this guide, learn about misinformation, what it looks like and how it impacts children’s wellbeing and safety online.

Fake news can be found embedded in traditional news, on social media or on fake news sites. It has no basis in fact but is presented as being factually accurate, which has allowed everyone from hackers to politicians to use the internet to spread disinformation online.

Our children can struggle to separate fact from fiction thanks to the spread of fake news. Here are some basic strategies to help them develop critical digital literacy:

  • Talk to them: children rely more on their family than on social media for their news, so talk to them about what is going on.
  • Read: many people share stories without actually reading them. Encourage your kids to read beyond the headline.
  • Check: teach children quick and easy ways to check the reliability of information, such as considering the source, doing a search to double-check the author’s credibility, seeing whether the information appears on reputable sites and using credible fact-checking websites.
  • Get involved: digital literacy is about participation. Teach your kids to be honest, vigilant and creative digital citizens.

4 quick things to know about misinformation

Fake news is not the preferred term.

‘Fake news’ refers to false information and news online. However, it’s more appropriate to use ‘misinformation’ and ‘disinformation’.

Misinformation is false information spread by people who think it’s true.

Disinformation is false information spread by people who know it’s false.

Mis/disinformation is an online harm

Misinformation can impact children’s:

  • mental health
  • physical wellbeing
  • future finances
  • views towards other people.

It can also lead to mistrust and confusion related to the information they come across online.

Misinformation comes in different forms

Mis/disinformation and fake news might look like:

  • social media hoaxes
  • phishing emails
  • popular videos
  • sponsored posts

Misinformation is hard to spot for children who might not yet have the skills to fact-check. It can spread on social media, through satire news websites, via parody videos and other spaces.

Learn more about the forms it can take.

Insights from Ofcom

  • 32% of 8-17-year-olds believe that all or most of what they see on social media is true.
  • 70% of 12-17s said they were confident they could judge whether something was real or fake.
  • Nearly a quarter of those children were unable to do so in practice.

This mismatch between confidence and ability could leave these children exposed to harm.

On a more positive note, 48% of those who said they were confident were also able to do so in practice.

See Ofcom’s 2023 research .

Quick guide to tackling misinformation

Help children develop their digital literacy and critical thinking online.

Misinformation is false information that is spread by people who think it's true. This is different from 'fake news' and disinformation.

Fake news refers to websites that share mis or disinformation. This might be via satire sites like The Onion, but it also refers to those pretending to be trustworthy news sources.

Sometimes, people use the term ‘fake news’ to discredit true information. As such, it’s better to use more general terms such as ‘misinformation’ and ‘disinformation’.

Disinformation is false information that someone or a group spreads online while knowing it’s false. Generally, they do this with a specific intention, usually to influence others to believe their point of view.

7 types of mis and disinformation

UNICEF identifies 7 main types of mis and disinformation, all of which can impact children.

Types of misinformation and fake news

Satirical content and parodies can spread misinformation.

This is misleading information that is not intended to harm. Creators of the content know the information is false, but share it for humour. However, if people misunderstand the intent, they might spread it as true.

Clickbait for views can mislead users

This is content where the headline, visuals or captions don’t match the actual content. This is often clickbait to get more views on a video, visits to a page or engagement on social media.

Intentionally misleading content can create anger

People might share information in a misleading way to frame an event, issue or person in a particular light. An example is when an old photo is used on a recent social media post. It might spread outrage or fear until the photo receives the right context.

Giving fake context can cause unnecessary outrage

Fake context is when information is shared with incorrect background information.

A lighthearted example is a popular photo of young director Steven Spielberg posing and smiling with a large dead animal. Many people felt outrage for his hunting of an endangered animal. However, the correct context was that he was on set of Jurassic Park and posing with a prop triceratops.

Usually, someone spreading disinformation will ‘alter’ the context of information. The intention is to convince people of their belief or viewpoint.

Impersonation can cause harm in many ways

This is when a person, group or organisation pretends they are another person or source. Imposter content can trick people into:

  • sending money
  • sharing personal information
  • further spreading misinformation.

Manipulated content

True information that’s altered is hard to notice

Manipulated content is real information, images or videos that are altered or changed in some way to deceive others. Some deepfakes are an example of such content.

Completely false information can lead to harm

Fabricated content is disinformation created without any connection to truth. Its overall intention is to deceive and harm. Fabricated content can quickly become misinformation.

How does misinformation spread online?

From social media to news, misinformation can spread all over the world in an instant.

For children, misinformation and disinformation often looks very convincing. This is especially true with the popularity of generative AI and the ability to create deepfakes.

Learn more about using artificial intelligence tools safely.

Artificial intelligence can help scammers create convincing ads and content that tricks people. Unfortunately, unless reported (and sometimes even when reported), these ads can reach millions of people quickly.

While misinformation is nothing new, the internet means it can spread a lot quicker and reach many more people.

How social media spreads false information

From sock puppet accounts to scam ads, social media can help spread misinformation to thousands if not millions of people at once. Unfortunately, social media algorithms make it so any interaction helps the content reach more people.

Angry reactions on Facebook or comments calling a post out as false only help the poster reach more people. This is because the algorithm only understands whether something is popular or not. It can’t tell whether information is false; that’s why users must report false information rather than engage with it.
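
To make this concrete, here is a minimal, purely illustrative sketch of an engagement-based ranker. It is not any platform's real code; the scoring rule, field names and numbers are all assumptions. It simply shows why angry reactions and debunking comments can lift a false post rather than bury it: the ranker counts interactions and never checks truthfulness, and only reports count against a post.

```python
# Illustrative sketch only: a toy engagement-based ranker, not any platform's real code.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    likes: int = 0
    angry_reactions: int = 0
    comments: list = field(default_factory=list)
    reports: int = 0  # reports are the only negative signal in this toy model

def engagement_score(post: Post) -> float:
    # Every visible interaction is a positive signal; the content itself is never checked.
    interactions = post.likes + post.angry_reactions + len(post.comments)
    return interactions - 5 * post.reports

hoax = Post("Miracle cure! Share before it gets deleted!")
hoax.angry_reactions += 40                    # outrage still counts as engagement
hoax.comments += ["This is false."] * 25      # debunking comments also count

accurate = Post("Health agency issues routine guidance.")
accurate.likes += 10

feed = sorted([hoax, accurate], key=engagement_score, reverse=True)
print([p.text for p in feed])  # the hoax ranks first unless users report it instead of engaging
```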

How echo chambers spread misinformation

‘Echo chambers’ is a term used to describe the experience of only seeing one type of content. Essentially, the more someone engages with the content, the more likely they are to see similar content.

So, if a child interacts with an influencer spreading misogyny, they will see more similar content. If they interact with that content, then they see more, and so on. This continues until all they see is content around misogyny.

When an algorithm creates an echo chamber, the user only sees content that supports their existing view. As such, it becomes really difficult for them to hear other perspectives and widen their worldview. This means that, when challenged, they can become more defensive and are more likely to spread hate.

Learn more about algorithms and echo chambers.
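
The feedback loop behind an echo chamber can also be sketched in a few lines. The snippet below is a hypothetical illustration rather than a real recommender: the topic labels, the weighting rule and the assumption that the user always clicks the first recommendation are invented for the example. It shows how recommending more of whatever was just engaged with gradually narrows a feed down to a handful of topics.

```python
# Illustrative sketch only: a toy recommendation loop, not a real recommender system.
import random
from collections import Counter

CATALOG = ["finance", "cars", "fitness", "misogyny", "cooking", "news", "gaming"]

def recommend(history: Counter, k: int = 5) -> list:
    # Weight each topic by how often the user has already engaged with it.
    weights = [1 + 5 * history[topic] for topic in CATALOG]
    return random.choices(CATALOG, weights=weights, k=k)

random.seed(0)
history = Counter()
for step in range(10):
    feed = recommend(history)
    clicked = feed[0]        # assume the user engages with the first recommendation
    history[clicked] += 1    # that engagement feeds back into the next round

print(history.most_common())  # engagement piles up on a few topics: an echo chamber
```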

How design impacts the way misinformation spreads

In a Risky-by-Design case study from the 5Rights Foundation, the following design features also contributed to misinformation spreading online.

Recommendations favour popular creators.

Content creators who have a large following and spread misinformation have a wider reach. This is largely due to algorithms designed for the platform.

Many platforms are overrun with bots

Bots and fake profiles (or sock puppet accounts) may spread misinformation as their sole purpose. These can also manipulate information or make the source of disinformation harder to trace. It’s also often quite difficult as a user to successfully report fake or hacked accounts.

Recommendations can create echo chambers

Algorithms can create echo chambers or “a narrowing cycle of similar posts to read, videos to watch or groups to join.” Additionally, some content creators that spread misinformation also have interests in less harmful content. So, the algorithm might recommend this harmless content to users like children. Children then watch these new content creators, eventually seeing the misinformation.

For example, self-described misogynist Andrew Tate also shared content relating to finance and flashy cars. This content might appeal to a group of people who don’t agree with misogyny. For instance, our research shows that boys are more likely than girls to see content from Andrew Tate on social media. However, both girls and boys are similarly likely to see content about Andrew Tate on social media.

Not all content labels are clear

Subtle content labels, such as those identifying something as an ad or a joke, are often easy to miss. More obvious labels could help children accurately navigate potential misinformation online.

Autoplay makes accidental viewing easy

When a video or audio clip that a child has chosen finishes, many apps automatically start playing a new one by design. As such, children might accidentally engage with misinformation that then feeds into the algorithm.

Most platforms allow you to turn off this feature.

Apps that hide content can support misinformation

Content that gets shared and then quickly removed is harder to fact-check. It spreads misinformation because it doesn’t give viewers the chance to check if it’s true. Children might engage with this type of content on apps like Snapchat where disappearing messages are the norm.

Algorithms cannot assess trending content

Algorithms can identify which hashtags or topics are most popular, sharing them with more users. However, these algorithms can’t tell if it relates to misinformation. So, it’s up to the user to make this decision, which many children might struggle with.

Misinformation can easily reach many

When sharing content directly, many apps and platforms suggest a ready-made list of people. This makes it easy to share misinformation with a large group of people at once.

What impact can fake news have on young people?

Nearly all children are now online, but many of them do not yet have the skills to assess information online.

Half of the children surveyed by the National Literacy Trust admitted to worrying about fake news. Additionally, teachers in the same survey noted an increase in issues of anxiety, self-esteem and a general skewing of world views.

Misinformation can impact children in a number of ways. These could include:

  • Scams: falling for scams could lead to data breaches, financial loss, impacts on credit score and more.
  • Harmful belief systems: if children watch content that spreads hate, this can become a part of their worldview. This could lead to mistreatment of people different from them or even lead to radicalisation and extremism.
  • Dangerous challenges or hacks: some videos online might promote dangerous challenges or ‘life hacks’ that can cause serious harm. These hacks are common in videos from content farms.
  • Confusion and distrust: if a child becomes a victim of dis- or misinformation, they might struggle with new information. This can lead to distrust, confusion and maybe anxiety, depending on the extent of the misinformation.

Research into misinformation and fake news

Below are some figures into how misinformation can affect children and young people.

According to Ofcom, 79% of 12-15-year-olds feel that news they hear from family is ‘always’ or ‘mostly’ true.

28% of children aged 12-15 use TikTok as a news source (Ofcom).

6 in 10 parents worry about their child ‘being scammed/defrauded/lied to/impersonated’ by someone they didn’t know.

Around 4 in 10 children aged 9-16 said they experienced the feeling of ‘being unsure about whether what I see is true’. This was the second most common experience after ‘spending too much time online’.

NewsWise from the National Literacy Trust helped children develop their media literacy skills. Over the course of the programme, the proportion of children able to accurately assess news as false or true increased from 49.2% to 68%. This demonstrates the importance of teaching media literacy.

Resources to tackle misinformation

Help children become critical thinkers and avoid harm from misinformation with these resources.

How to prevent misinformation

Dealing with misinformation online

Help children identify 'fake news' online

Don’t Believe What They’re Telling You About Misinformation

By Manvir Singh

Millions of people have watched Mike Hughes die. It happened on February 22, 2020, not far from Highway 247 near the Mojave Desert city of Barstow, California. A homemade rocket ship with Hughes strapped in it took off from a launching pad mounted on a truck. A trail of steam billowed behind the rocket as it swerved and then shot upward, a detached parachute unfurling ominously in its wake. In a video recorded by the journalist Justin Chapman, Hughes disappears into the sky, a dark pinpoint in a vast, uncaring blueness. But then the rocket reappears and hurtles toward the ground, crashing, after ten long seconds, in a dusty cloud half a mile away.

Hughes was among the best-known proponents of Flat Earth theory , which insists that our planet is not spherical but a Frisbee-like disk. He had built and flown in two rockets before, one in 2014 and another in 2018, and he planned to construct a “rockoon,” a combination rocket and balloon, that would carry him above the upper atmosphere, where he could see the Earth’s flatness for himself. The 2020 takeoff, staged for the Science Channel series “Homemade Astronauts,” was supposed to take him a mile up—not high enough to see the Earth’s curvature but hypeworthy enough to garner more funding and attention.

Flat Earth theory may sound like one of those deliberately far-fetched satires, akin to Birds Aren’t Real, but it has become a cultic subject for anti-scientific conspiratorialists, growing entangled with movements such as QAnon and COVID -19 skepticism. In “ Off the Edge: Flat Earthers, Conspiracy Culture, and Why People Will Believe Anything ” (Algonquin), the former Daily Beast reporter Kelly Weill writes that the tragedy awakened her to the sincerity of Flat Earthers’ convictions. After investigating the Flat Earth scene and following Hughes, she had figured that, “on some subconscious level,” Hughes knew the Earth wasn’t flat. His death set her straight: “I was wrong. Flat Earthers are as serious as your life.”

Weill isn’t the only one to fear the effects of false information. In January, the World Economic Forum released a report showing that fourteen hundred and ninety international experts rated “misinformation and disinformation” the leading global risk of the next two years, surpassing war, migration, and climatic catastrophe. A stack of new books echoes their concerns. In “ Falsehoods Fly: Why Misinformation Spreads and How to Stop It ” (Columbia), Paul Thagard, a philosopher at the University of Waterloo, writes that “misinformation is threatening medicine, science, politics, social justice, and international relations, affecting problems such as vaccine hesitancy, climate change denial, conspiracy theories, claims of racial inferiority, and the Russian invasion of Ukraine .” In “ Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity ” (Norton), Sander van der Linden, a social-psychology professor at Cambridge, warns that “viruses of the mind” disseminated by false tweets and misleading headlines pose “serious threats to the integrity of elections and democracies worldwide.” Or, as the M.I.T. political scientist Adam J. Berinsky puts it in “ Political Rumors: Why We Accept Misinformation and How to Fight It ” (Princeton), “a democracy where falsehoods run rampant can only result in dysfunction.”

Most Americans seem to agree with these theorists of human credulity. Following the 2020 Presidential race, sixty per cent thought that misinformation had a major impact on the outcome, and, to judge from a recent survey, even more believe that artificial intelligence will exacerbate the problem in this year’s contest. The Trump and the DeSantis campaigns both used deepfakes to sully their rivals. Although they justified the fabrications as transparent parodies, some experts anticipate a “tsunami of misinformation,” in the words of Oren Etzioni, a professor emeritus at the University of Washington and the first C.E.O. of the Allen Institute for Artificial Intelligence. “The ingredients are there, and I am completely terrified,” he told the Associated Press.

The fear of misinformation hinges on assumptions about human suggestibility. “Misinformation, conspiracy theories, and other dangerous ideas, latch on to the brain and insert themselves deep into our consciousness,” van der Linden writes in “Foolproof.” “They infiltrate our thoughts, feelings, and even our memories.” Thagard puts it more plainly: “People have a natural tendency to believe what they hear or read, which amounts to gullibility.”

But do the credulity theorists have the right account of what’s going on? Folks like Mike Hughes aren’t gullible in the sense that they’ll believe anything. They seem to reject scientific consensus, after all. Partisans of other well-known conspiracies (the government is run by lizard people; a cabal of high-level pedophilic Democrats operates out of a neighborhood pizza parlor) are insusceptible to the assurances of the mainstream media. Have we been misinformed about the power of misinformation?

In 2006, more than five hundred skeptics met at an Embassy Suites hotel near O’Hare Airport, in Chicago, to discuss conspiracy. They listened to presentations on mass hypnosis, the melting point of steel, and how to survive the collapse of the existing world order. They called themselves many things, including “truth activists” and “9/11 skeptics,” although the name that would stick, and which observers would use for years afterward, was Truthers.

The Truthers held that the attacks on the Pentagon and the World Trade Center were masterminded by the White House to expand government power and enable military and security industries to profit from the war on terror. According to an explanation posted by 911truth.org, a group that helped sponsor the conference, George W. Bush and his allies gagged and intimidated whistle-blowers, mailed anthrax to opponents in the Senate, and knowingly poisoned the inhabitants of lower Manhattan. On that basis, Truthers concluded, “the administration does consider the lives of American citizens to be expendable on behalf of certain interests.”

The Truthers, in short, maintained that the government had gone to extreme measures, including killing thousands of its own citizens, in order to carry out and cover up a conspiracy. And yet the same Truthers advertised the conference online and met in a place where they could easily be surveilled. Speakers’ names were posted on the Internet along with videos, photographs, and short bios. The organizers created a publicly accessible forum to discuss next steps, and a couple of attendees spoke to a reporter from the Times , despite the mainstream media’s ostensible complicity in the coverup. By the logic of their own theories, the Truthers were setting themselves up for assassination.

Their behavior demonstrates a paradox of belief. Action is supposed to follow belief, and yet beliefs, even fervently espoused ones, sometimes exist in their own cognitive cage, with little influence over behavior. Take the “Pizzagate” story, in which Hillary Clinton and her allies ran a child sex ring from the basement of a D.C. pizzeria. In the months surrounding the 2016 Presidential election, a staggering number of Americans—millions, by some estimates—endorsed the account, and, in December of that year, a North Carolina man charged into the restaurant, carrying an assault rifle. Van der Linden and Berinsky both use the incident as evidence of misinformation’s violent implications. But they’re missing the point: what’s really striking is how anomalous that act was. The pizzeria received menacing phone calls, even death threats, but the most common response from believers, aside from liking posts, seems to have been leaving negative Yelp reviews.

That certain deeply held beliefs seem insulated from other inferences isn’t peculiar to conspiracy theorists; it’s the experience of regular churchgoers. Catholics maintain that the Sacrament is the body of Christ, yet no one expects the bread to taste like raw flesh or accuses fellow-parishioners of cannibalism. In “ How God Becomes Real ” (2020), the Stanford anthropologist T. M. Luhrmann recounts evangelical Christians’ frustrations with their own beliefs. They thought less about God when they were not in church. They confessed to not praying. “I remember a man weeping in front of a church over not having sufficient faith that God would replace the job he had lost,” Luhrmann writes. The paradox of belief is one of Christianity’s “clearest” messages, she observes: “You may think you believe in God, but really you don’t. You don’t take God seriously enough. You don’t act as if he’s there.” It’s right out of Mark 9:24: “Lord, I believe; help my unbelief!”

The paradox of belief has been the subject of scholarly investigation; puzzling it out promises new insights about the human psyche. Some of the most influential work has been by the French philosopher and cognitive scientist Dan Sperber. Born into a Jewish family in France in 1942, during the Nazi Occupation, Sperber was smuggled to Switzerland when he was three months old. His parents returned to France three years later, and raised him as an atheist while imparting a respect for all religious-minded people, including his Hasidic Jewish ancestors.

The exercise of finding rationality in the seemingly irrational became an academic focus for Sperber in the nineteen-seventies. Staying with the Dorze people in southern Ethiopia, he noticed that they made assertions that they seemed both to believe and not to believe. People told him, for example, that “the leopard is a Christian animal who observes the fasts of the Ethiopian Orthodox Church.” Nevertheless, the average Dorze man guarded his livestock on fast days just as much as on other days. “Not because he suspects some leopards of being bad Christians,” Sperber wrote, “but because he takes it as true both that leopards fast and that they are always dangerous.”

Sperber concluded that there are two kinds of beliefs. The first he has called “factual” beliefs. Factual beliefs—such as the belief that chairs exist and that leopards are dangerous—guide behavior and tolerate little inconsistency; you can’t believe that leopards do and do not eat livestock. The second category he has called “symbolic” beliefs. These beliefs might feel genuine, but they’re cordoned off from action and expectation. We are, in turn, much more accepting of inconsistency when it comes to symbolic beliefs; we can believe, say, that God is all-powerful and good while allowing for the existence of evil and suffering.

In a masterly new book, “ Religion as Make-Believe ” (Harvard), Neil Van Leeuwen, a philosopher at Georgia State University, returns to Sperber’s ideas with notable rigor. He analyzes beliefs with a taxonomist’s care, classifying different types and identifying the properties that distinguish them. He proposes that humans represent and use factual beliefs differently from symbolic beliefs, which he terms “credences.” Factual beliefs are for modelling reality and behaving optimally within it. Because of their function in guiding action, they exhibit features like “involuntariness” (you can’t decide to adopt them) and “evidential vulnerability” (they respond to evidence). Symbolic beliefs, meanwhile, largely serve social ends, not epistemic ones, so we can hold them even in the face of contradictory evidence.

One of Van Leeuwen’s insights is that people distinguish between different categories of belief in everyday speech. We say we “believe” symbolic ones but that we “think” factual ones are true. He has run ingenious experiments showing that you can manipulate how people talk about beliefs by changing the environment in which they’re expressed or sustained. Tell participants that a woman named Sheila sets up a shrine to Elvis Presley and plays songs on his birthday, and they will more often say that she “believes” Elvis is alive. But tell them that Sheila went to study penguins in Antarctica in 1977, and missed the news of his death, and they’ll say she “thinks” he’s still around. As the German sociologist Georg Simmel recognized more than a century ago, religious beliefs seem to express commitments—we believe in God the way we believe in a parent or a loved one, rather than the way we believe chairs exist. Perhaps people who traffic in outlandish conspiracies don’t so much believe them as believe in them.

Van Leeuwen’s book complements a 2020 volume by Hugo Mercier, “ Not Born Yesterday .” Mercier, a cognitive scientist at the École Normale Supérieure who studied under Sperber, argues that worries about human gullibility overlook how skilled we are at acquiring factual beliefs. Our understanding of reality matters, he notes. Get it wrong, and the consequences can be disastrous. On top of that, people have a selfish interest in manipulating one another. As a result, human beings have evolved a tool kit of psychological adaptations for evaluating information—what he calls “open vigilance mechanisms.” Where a credulity theorist like Thagard insists that humans tend to believe anything, Mercier shows that we are careful when adopting factual beliefs, and instinctively assess the quality of information, especially by tracking the reliability of sources.

Van Leeuwen and Mercier agree that many beliefs are not best interpreted as factual ones, although they lay out different reasons for why this might be. For Van Leeuwen, a major driver is group identity. Beliefs often function as badges: the stranger and more unsubstantiated the better. Religions, he notes, define membership on the basis of unverifiable or even unintelligible beliefs: that there is one God; that there is reincarnation; that this or that person was a prophet; that the Father, the Son, and the Holy Spirit are separate yet one. Mercier, in his work, has focussed more on justification. He says that we have intuitions—that vaccination is bad, for example, or that certain politicians can’t be trusted—and then collect stories that defend our positions. Still, both authors treat symbolic beliefs as socially strategic expressions.

After Mike Hughes’s death, a small debate broke out over the nature of his belief. His publicist, Darren Shuster, said that Hughes never really believed in a flat Earth. “It was a P.R. stunt,” he told Vice News. “We used the attention to get sponsorships and it kept working over and over again.” Space.com dug up an old interview corroborating Shuster’s statements. “This flat Earth has nothing to do with the steam rocket launches,” Hughes told the site in 2019. “It never did, it never will. I’m a daredevil!”

Perhaps it made sense that it was just a shtick. Hughes did death-defying stunts years before he joined the Flat Earthers. He was born in Oklahoma City in 1956 to an auto-mechanic father who enjoyed racing cars. At the age of twelve, Hughes was racing on his own, and not long afterward he was riding in professional motorcycle competitions. In 1996, he got a job driving limousines, but his dream of becoming the next Evel Knievel persisted; in 2002, he drove a Lincoln Town Car off a ramp and flew a hundred and three feet, landing him in Guinness World Records.

When Hughes first successfully launched a rocket, in 2014, he had never talked about the shape of the planet. In 2015, when he co-ran a Kickstarter campaign to fund the next rocket flight, the stated motivation was stardom, not science: “Mad Mike Hughes always wanted to be famous so much that he just decided one day to build a steam rocket and set the world record.” He got two backers and three hundred and ten dollars. Shortly afterward, he joined the Flat Earth community and tied his crusade to theirs. The community supported his new fund-raising effort, attracting more than eight thousand dollars. From there, his fame grew, earning him features in a documentary (“Rocketman,” from 2019) and that Science Channel series. Aligning with Flat Earthers clearly paid off.

Not everyone believes that he didn’t believe, however. Waldo Stakes, Hughes’s landlord and rocket-construction buddy, wrote on Facebook that “Mike was a real flat earther,” pointing to the “dozens of books on the subject” he owned, and said that Hughes lost money hosting a conference for the community. Another of Hughes’s friends told Kelly Weill that Flat Earth theory “started out as a marketing approach,” but that once it “generated awareness and involvement . . . it became something to him.”

The debate over Hughes’s convictions centers on the premise that a belief is either sincere or strategic, genuine or sham. That’s a false dichotomy. Indeed, the social functions of symbolic beliefs—functions such as signalling group identity—seem best achieved when the beliefs feel earnest. A Mormon who says that Joseph Smith was a prophet but secretly thinks he was a normal guy doesn’t strike us as a real Mormon. In fact, the evolutionary theorist Robert Trivers argued in “ Deceit and Self-Deception ” (2011) that we trick ourselves in order to convince others. Our minds are maintaining two representations of reality: there’s one that feels true and that we publicly advocate, and there’s another that we use to effectively interact with the world.

The idea of self-deception might seem like a stretch; Mercier has expressed skepticism about the theory. But it reconciles what appear to be contradictory findings. On the one hand, some research suggests that people’s beliefs in misinformation are authentic. In “Political Rumors,” for example, Berinsky describes experiments he conducted suggesting that people truly believe that Barack Obama is a Muslim and that the U.S. government allowed the 9/11 attacks to happen. “People by and large say what they mean,” he concludes.

On the other hand, there’s research implying that many false beliefs are little more than cheap talk. Put money on the table, and people suddenly see the light. In an influential paper published in 2015, a team led by the political scientist John Bullock found sizable differences in how Democrats and Republicans thought about politicized topics, like the number of casualties in the Iraq War. Paying respondents to be accurate, which included rewarding “don’t know” responses over wrong ones, cut the differences by eighty per cent. A series of experiments published in 2023 by van der Linden and three colleagues replicated the well-established finding that conservatives deem false headlines to be true more often than liberals—but found that the difference drops by half when people are compensated for accuracy. Some studies have reported smaller or more inconsistent effects, but the central point still stands. There may be people who believe in fake news the way they believe in leopards and chairs, but underlying many genuine-feeling endorsements is an understanding that they’re not exactly factual.

Van der Linden, Berinsky, and Thagard all offer ways to fight fabrication. But, because they treat misinformation as a problem of human gullibility, the remedies they propose tend to focus on minor issues, while scanting the larger social forces that drive the phenomenon. Consider van der Linden’s prescription. He devotes roughly a third of “Foolproof” to his group’s research on “prebunking,” or psychological inoculation. The idea is to present people with bogus information before they come across it in the real world and then expose its falsity—a kind of epistemic vaccination. Such prebunking can target specific untruths, or it can be “broad-spectrum,” as when people are familiarized with an array of misinformation techniques, from emotional appeals to conspiratorial language.

Prebunking has received an extraordinary amount of attention. If you’ve ever read a headline about a vaccine against fake news, it was probably about van der Linden’s work. His team has collaborated with Google, WhatsApp, the Department of Homeland Security, and the British Prime Minister’s office; similar interventions have popped up on Twitter (now X). In “Foolproof,” van der Linden reviews evidence that prebunking makes people better at identifying fake headlines. Yet nothing is mentioned about effects on their actual behavior. Does prebunking affect medical decisions? Does it make someone more willing to accept electoral outcomes? We’re left wondering.

The evidential gap is all the trickier because little research exists in the first place showing that misinformation affects behavior by changing beliefs. Berinsky acknowledges this in “Political Rumors” when he writes that “few scholars have established a direct causal link” between rumors and real-world outcomes. Does the spread of misinformation influence, say, voting decisions? Van der Linden admits, “Contrary to much of the commentary you may find in the popular media, scientists have been extremely skeptical.”

So it’s possible that we’ve been misinformed about how to fight misinformation. What about the social conditions that make us susceptible? Van der Linden tells us that people are more often drawn to conspiracy theories when they feel “uncertain and powerless,” and regard themselves as “marginalized victims.” Berinsky cites scholarship suggesting that conspiratorial rumors flourish among people who experience “a lack of interpersonal trust” and “a sense of alienation.” In his own research, he found that a big predictor of accepting false rumors is agreeing with statements such as “Politicians do not care much about what they say, so long as they get elected.” A recent study found a strong correlation between the prevalence of conspiracy beliefs and levels of governmental corruption; in those beliefs, Americans fell midway between people from Denmark and Sweden and people from middle-income countries such as Mexico and Turkey, reflecting a fraying sense of institutional integrity. More than Russian bots or click-hungry algorithms, a crisis of trust and legitimacy seems to lie behind the proliferation of paranoid falsehoods.

Findings like these require that we rethink what misinformation represents. As Dan Kahan, a legal scholar at Yale, notes, “Misinformation is not something that happens to the mass public but rather something that its members are complicit in producing.” That’s why thoughtful scholars—including the philosopher Daniel Williams and the experimental psychologist Sacha Altay—encourage us to see misinformation more as a symptom than as a disease. Unless we address issues of polarization and institutional trust, they say, we’ll make little headway against an endless supply of alluring fabrications.

From this perspective, railing against social media for manipulating our zombie minds is like cursing the wind for blowing down a house we’ve allowed to go to rack and ruin. It distracts us from our collective failures, from the conditions that degrade confidence and leave much of the citizenry feeling disempowered. By declaring that the problem consists of “irresponsible senders and gullible receivers,” in Thagard’s words, credulity theorists risk ignoring the social pathologies that cause people to become disenchanted and motivate them to rally around strange new creeds.

Mike Hughes was among the disenchanted. Sure, he used Flat Earth theory to become a celebrity, but its anti-institutionalist tone also spoke to him. In 2018, while seeking funding and attention for his next rocket ride, he self-published a book titled “ ‘Mad’ Mike Hughes: The Tell All Tale.” The book brims with outlandish, unsupported assertions—that George H. W. Bush was a pedophile, say—but they’re interspersed with more grounded frustrations. He saw a government commandeered by the greedy few, one that stretched the truth to start a war in Iraq, and that seemed concerned less with spreading freedom and more with funnelling tax dollars into the pockets of defense contractors. “You think about those numbers for a second,” he wrote, of the amount of money spent on the military. “We have homelessness in this country. We could pay off everyone’s mortgages. And we can eliminate sales tax. Everyone would actually be free.”

Hughes wasn’t a chump. He just felt endlessly lied to. As he wrote near the end of his book, “I want my coffee and I don’t want any whipped cream on top of it, you know what I mean? I just want this raw truth.” ♦


In the face of online misinformation, these teens are learning how to sort fact from fiction

Beyond the classroom, these tools can help students understand 'the world around them'.


Sameer Ferdousi, 16, is enthralled by journalism.

A former middle school "news anchor" who reported on events at daily assemblies, Ferdousi is currently on staff at his high school newspaper.

He's also active on social media, like the vast majority of teens, but he's worried about the misinformation he sees on his feed. 

"Anyone can make a post and spread it to millions of people," said the Mississauga, Ont., student. "I've seen a lot more fake news. That kind of sparked my interest for chasing the truth."


Teen fact checkers take on fake TikTok posts

Ferdousi's passion for news has led him to his latest gig: joining the Canadian squad of the international Teen Fact-Checking Network.

More than 91 per cent of young Canadians aged 15-24 are on social media today, according to Statistics Canada, with 62 per cent of this cohort turning to social media first for their news and information.

Yet teens are also the least worried about encountering false information online — a concern given that young Canadians are exposed to online harms more than any other population, StatsCan says.

How teens learn about navigating the online world varies from class to class, but two digital media literacy programs aim to get more Canadian students scrutinizing their socials.


Media literacy group MediaSmarts is spearheading the Teen Fact-Checking Network in Canada, which has partner editions in the U.S., Brazil, India and Spain. The Ottawa-based organizers whittled applicants down to 16 students (English- and French-speaking) for the inaugural lineup.

The students first participated in a boot camp: learning strategies to analyze online content, taking lessons on how to pitch and develop ideas, as well as building technical skills, such as video editing.

"It was a lot at first," Ferdousi admitted. But he said the training — which includes reverse image searching, for instance, or tips on detecting pics doctored via artificial intelligence — has levelled up his critical thinking.

"Whenever I'm seeing a story … or when I'm seeing an image, I check for these little signs: if they're doctored, if they're not telling the whole story, if they're missing some context," he said.

WATCH | A teen from U.S. partner MediaWise digs into unusual Keanu Reeves posts:

MediaSmarts executive director Kathryn Hill believes the amount of information available online is "wonderful," but that abundance also requires verification of what we find. 


Enlisting teens to teach teens

The Teen Fact-Checking Network "is an opportunity to both teach teens directly about how to do that and how to do it well, but also to teach them how to teach others," she said.

"We can all learn from these" videos, Hill added, since the vision for MediaSmarts is to improve the digital media literacy of all Canadians.

WATCH | MediaWise teen fact-checker analyzes politicized posts starring Taylor Swift: 

Evaluating what's online

A high school teacher for more than 20 years, Laura McCarron has first-hand knowledge of why students need to learn to analyze material on the internet. 


"Students have more freedom and flexibility to be able to research online … and when they're looking up anything, it means that they're not necessarily always looking at [reliable sources]," said the Fredericton social studies teacher. 

"It's become so much more important as a teacher to be able to establish rules and guidelines."

In fall 2020, McCarron was among the first teachers to participate in a new digital literacy offering from civic education organization CIVIX.


Called CTRL-F (named for the keyboard shortcut for "find" within a webpage or document), the lessons can be incorporated into the curriculum of various subjects — such as English, social studies, political science, history or media classes. 

The goal is to teach middle and high school students lateral reading techniques , such as digging into who's behind a specific piece of content and checking what other sources say about a particular topic.

Even with straightforward strategies and tips, however, there's a learning curve. McCarron has seen students become frustrated when they realize that a Google search doesn't necessarily generate a trustworthy answer. 

Eventually, though, "they're heavily engaged in conversation about what is reliable and what is not," she said. 

"When they leave high school, they can take these tools, they can take these sources and they can continue to use them in everyday learning and understanding the world around them."


Declan DeWolfe, who is one of McCarron's students, now has a toolbox of strategies — searching out reliable sources, verifying information — that they developed by completing research assignments over the past few grades. 

The approach has become "less of an explicit thing that you practise in class and more a way that you interact with the internet," said the Grade 11 student.  

Though the 16-year-old chooses to avoid political or news content when scrolling TikTok (opting to focus on sketch comedy, movies and video game content instead), DeWolfe feels social media users today are bombarded with conflicting opinions, misinformation and disinformation even when they try to avoid it. 

As a result, "it's extremely important to learn how to discern [what's] credible" from content "trying to grab your attention for clout reasons or just trying to mislead you."


Annie McCaskill, a peer in Grade 12, admits to being frustrated by the extra effort required to evaluate multiple online sources. But she agrees it's important to reinforce the mentality that "not everything is true, we should check that." 

For example, "we've been seeing more articles that are written solely by AI," said the 17-year-old. "I think that's a big problem: to just trust whatever's being said [online], especially now, when there might not even be any human oversight at all."


Back in Ontario, Ferdousi emphasizes that students today are aware that fake news, out-of-context posts and doctored images and videos exist online. But it's easy for impressionable teens to trip up.

It happened to him just a few weeks ago. A massive Drake fan, he fell for a friend's TikTok post about a surprise concert, which turned out to be fake. 

So Ferdousi is eager to get to the bottom of suspicious online content — and inspiring other teens to adopt that mindset, too.

"Maybe if their parents are getting tricked by misleading news articles or sites, videos, pictures and those types of things, kids can be their own voice of reason in their homes."

With files from Deana Sumanac-Johnson and Nazima Walji


Peer Reviewed

Who knowingly shares false political information online?


Some people share misinformation accidentally, but others do so knowingly. To fully understand the spread of misinformation online, it is important to analyze those who purposely share it. Using a 2022 U.S. survey, we found that 14 percent of respondents reported knowingly sharing misinformation, and that these respondents were more likely to also report support for political violence, a desire to run for office, and warm feelings toward extremists. These respondents were also more likely to have elevated levels of a psychological need for chaos, dark tetrad traits, and paranoia. Our findings illuminate one vector through which misinformation is spread.

Department of Political Science, University of Miami, USA

Department of Psychological and Brain Sciences, Indiana University, USA

Department of English, University of Miami, USA

Department of Electrical and Computer Engineering, University of Miami, USA

Department of Interactive Media, University of Miami, USA

Department of Computer Science, University of Miami, USA

Department of Biology, University of Miami, USA


Research Questions

  • What percentage of Americans admit to knowingly sharing political information on social media they believe may be false?
  • How politically engaged, and in what ways, are people who report knowingly sharing false political information online? 
  • Are people who report knowingly sharing false political information online more likely to report extremist views and support for extremist groups?
  • What are the psychological, political, and social characteristics of those who report knowingly sharing false political information online? 

Essay Summary

  • While most people are exposed to small amounts of misinformation online (in comparison to their overall news diet), previous studies have shown that only a small number of people are responsible for sharing most of it. Gaining a better understanding of the motivations of social media users who share information they believe to be false could lead to interventions aimed at limiting the spread of online misinformation.  
  • Using a national survey from the United States ( n  = 2,001; May–June 2022), we asked respondents if they share political information on social media that they believe is false; 14% indicated that they do. 
  • Respondents who reported purposefully sharing false political information online were more likely to harbor (i) a desire to run for political office, (ii) support for political violence, and (iii) positive feelings toward QAnon, Proud Boys, White Nationalists, and Vladimir Putin. Furthermore, these respondents displayed elevated levels of anti-social characteristics, including a psychological need for chaos, “dark” personality traits (narcissism, psychopathy, Machiavellianism, and sadism), paranoia, dogmatism, and argumentativeness. 
  • People who reported sharing political information they believe is false on social media were more likely to use social media platforms known for promoting extremist views and conspiracy theories (e.g., 8Kun, Telegram, Truth Social).

Implications 

A growing body of research shows that online misinformation is both easily accessible (Allcott & Gentzkow, 2017; Del Vicario et al., 2016) and can spread quickly through online social networks (Vosoughi et al., 2018). Though  misinformation —information that is false or misleading according to the best currently established knowledge (Ecker et al., 2021; Vraga & Bode, 2020)—is often spread unintentionally,  disinformation , a subcategory of misinformation, is spread with the deliberate intent to deceive (Starbird, 2019). Critically, the pervasiveness of online mis- and disinformation has made attempts by online news and social media companies to prevent, curtail, or remove it from various platforms difficult (Courchesne et al., 2021; Ha et al., 2022; Sanderson et al., 2021; Vincent et al., 2022). While the causal impact of online misinformation is often difficult to determine (Enders et al., 2022; Uscinski et al., 2022), numerous studies have shown that exposure is (at least) correlated with false beliefs (Bryanov & Vziatysheva, 2021), beliefs in conspiracy theories (Xiao et al., 2021), and nonnormative behaviors, including vaccine refusal (Romer & Jamieson, 2021). 

Numerous studies have investigated the spread of political mis- and disinformation as a “top-down” phenomenon (Garrett, 2017; Lasser et al., 2022; Mosleh & Rand, 2022) emanating from domestic political actors (Berlinski et al., 2021), untrustworthy websites (Guess et al., 2020), and hostile foreign governments (Bail et al., 2019), and flowing through social media and other networks (Johnson et al., 2022). Indeed, studies have found that most online political content, as well as most online misinformation, is produced by a relatively small number of accounts (Grinberg et al., 2019; Hughes, 2019). Other research has focused on how the public interacts with and evaluates misinformation to identify the individual differences related not only to falling for misinformation but also to unintentionally spreading it (Littrell et al., 2021a; Pennycook & Rand, 2021).

However, rather than being unknowingly duped into sharing misinformation, many people (who are not political elites, paid activists, or foreign political actors) knowingly share false information in a deliberate attempt to deceive or mislead others, often in the service of a specific goal (Buchanan & Benson, 2019; Littrell et al., 2021b; MacKenzie & Bhatt, 2020; Metzger et al., 2021). For instance, people who create and spread fake news content and highly partisan disinformation online are often motivated by the desire that such posts will “go viral,” attracting attention that will hopefully provide a reliable stream of advertising revenue (Guess & Lyons, 2020; Pennycook & Rand, 2020; Tucker et al., 2018). Others may do so to discredit political or ideological outgroups, advance their own ideological agenda or that of their partisan ingroup, or simply because they enjoy instigating discord and chaos online (Garrett et al., 2019; Marwick & Lewis, 2017; Petersen et al., 2023).

Though the art of deception is likely as old as communication itself, in the past, a person’s ability to meaningfully communicate with (and perhaps deceive) large groups was arguably limited. In contrast, social media now gives every person the power to rapidly broadcast (false) information to potentially global, mass audiences (DePaulo et al., 1996; Guess & Lyons, 2020). This implicates social media as a critical vector in the spread of misinformation. Whatever the motivations for sharing false information online, a better understanding of the human sources who create it by identifying the psychological, political, and ideological factors common to those who do so intentionally can provide crucial insights to aid in developing interventions that decrease its spread.

 In a national survey of the United States, we asked participants to rate their agreement (“strongly agree” to “strongly disagree”) with the statement, “I share information on social media about politics even though I believe it may be false.” In total, 14% of respondents agreed or strongly agreed with this statement; these findings coincide with those of other studies on similar topics (Buchanan & Kempley, 2021; Halevy et al., 2014; Serota & Levine, 2015). Normatively, it is encouraging that only a small minority of our respondents indicated that they share false information about politics on social media. However, the fact that 14% of the U.S. adult population claims to purposely spread political misinformation online is nonetheless troubling. Rather than being exclusively a top-down phenomenon, the purposeful sharing of false information by members of the public appears to be an important vector of misinformation that deserves more attention from researchers and practitioners.

Of further concern, our findings show that people who claimed to knowingly share information on social media about politics were more politically active in meaningful ways. First and perhaps foremost, such respondents were not only more likely to state a desire to run for political office but were also more likely to feel qualified for office, compared to people that do not claim to knowingly share false information. This finding is troubling from a normative perspective since such people might not be honest with constituents if elected (consider, for example, Representative George Santos of New York), and this could further erode our information environment (e.g., Celse & Chang, 2019). However, this finding may also offer crucial insights to better understand the tendency and motivations of at least some politicians to share misinformation or outright lie to the public (Arendt, 1972; Sunstein, 2021). Beyond aspirations for political office, spreading political misinformation online is positively associated with support for political violence, civil disobedience, and protests. Moreover, though spreading misinformation is also associated with participating in political campaigns, it is only weakly related to attending political meetings, contacting elected representatives, or staying informed about politics. Taken together, these findings paint a somewhat nuanced picture: People who were more likely to self-report having intentionally shared false political information on social media were more likely to be politically active and efficacious in certain aggressive ways, while simultaneously being less likely to participate in more benign or arguably positive ways.

Our findings also revealed that respondents who reported sharing false political information on social media were more likely to express support for extremist groups such as QAnon, Proud Boys, and White Nationalists. These observations coincide with previous studies linking extremist groups to the spread of misinformation, disinformation, and conspiracy theories (Moran et al., 2021; Nguyen & Gokhale, 2022; Stern, 2019). One possible explanation for this association is that supporters of extremist groups recognize their outsider status in comparison to mainstream political groups, leveraging false information to manage public impressions, attract new members, and further their group’s cause. Alternatively, it could be that the beliefs promoted by extremist groups are so disconnected from our shared political reality that these groups may need to rely on falsehoods to manipulate their own followers and prevent attrition of group membership. While the exact nature of these associations remains unclear, future research should further interrogate the connection between sharing false information and support for extremism and extremist groups. In line with previous studies (Lawson & Kakkar, 2021), our findings show that people who reported sharing false political information on social media were more likely to report higher levels of antisocial psychological characteristics. Specifically, they reported higher levels of a “need for chaos,” “dark tetrad” personality traits (a combined measure of narcissism, Machiavellianism, psychopathy, and sadism), paranoia, dogmatism, and argumentativeness when compared to respondents who did not report knowingly sharing false information on social media. Much like the Joker in the movie  The Dark Knight , people who intentionally spread false information online may, at least on some level, simply want to “watch the world burn” (Arceneaux et al., 2021). Indeed, previous studies suggest that much of the toxicity of social media is not due to a “mismatch” between human psychology and online platforms (i.e., that online platforms bring out the worst in otherwise nice people); instead, such toxicity results from a relatively smaller fraction of people with status-seeking antisocial tendencies, who act overtly antisocial online, and are drawn to interactions in which they express elevated levels of aggressiveness toward others with toxic language (Bor & Petersen, 2022; Kim et al., 2021). Such observations are echoed in our own results, which showed that people who knowingly share false information online were also more likely to indicate that posting on social media gives them a greater feeling of power and control and allows them to express themselves more freely. 

While research on the associations between religiosity and lying/dishonesty has shown mixed results (e.g., Desmond & Kraus, 2012; Grant et al., 2019), we found that religiosity positively predicts knowingly sharing false information online. Additionally, despite numerous studies of online activity suggesting that people on the political right are more likely to share misinformation (e.g., DeVerna et al., 2022; Garrett & Bond, 2021), our findings show no significant association between self-reported sharing of false information online and political identity or the strength of one’s partisan or ideological views. 

Our findings offer a broad psychological and ideological blueprint of individuals who reported intentionally spreading false information online, implicating specific personality and attitudinal characteristics as potential motivators of such behavior. Overall, these individuals are more antagonistic and argumentative, have a higher need for chaos, and tend to be more dogmatic and religious. Additionally, they are more politically engaged and active, often in counterproductive and destructive ways, and show higher support for extremist groups. They are also more likely to get their news from fringe social media sources and feel a heightened sense of power and self-expression from their online interactions. Taken together, these findings suggest that interventions which focus on eliminating the perceived social incentives gained from intentionally spreading misinformation online (e.g., heightened feelings of satisfaction, power, and enjoyment associated with discrediting ideological outgroups, instigating chaos, and “trolling”) may be effective at attenuating this type of online behavior. 

Though some research has shown promising results using general interventions such as “accuracy nudges” (Pennycook et al., 2021) and educational video games to inoculate people against misinformation (Roozenbeek et al., 2022), more direct measures may also need to be implemented by social media companies. For example, companies might consider restructuring the online environment to remove overt social incentives that may inadvertently reward pernicious behavior (e.g., reconsidering how “likes” and sharing are implemented) and instead create online ecosystems that reward more positive social media interactions. For practitioners, at the very least, our finding that some people claim to share online misinformation for reasons other than simply being duped, suggests that future interventions aimed at limiting the spread of misinformation should attempt to address users who both unknowingly  and  knowingly share misinformation, as these two groups of users may require different interventions. More specifically, if one does not care about accuracy, then accuracy nudges will do little to prevent them from sharing misinformation. Taken together, our findings further implicate personality and attitudinal characteristics as potentially significant motivators for the spread of misinformation. As such, we join others who have called for greater integration of personality research into the study of online misinformation and the ways in which it spreads (e.g., Lawson & Kakkar, 2021; van der Linden et al., 2021).

Finding 1: Most people do not report intentionally spreading false political information online. 

We asked participants to rate their agreement (“strongly agree” to “strongly disagree”) with the statement, “I share information on social media about politics even though I believe it may be false.” At best, agreement with this statement reflects a carefree disregard for the truth, a key characteristic of certain types of “bullshitting” (Frankfurt, 2009; Littrell et al., 2021a). However, at worst, strong agreement with this statement is admitting to intentional deception (i.e., lying). Though most participants disagreed with this statement, a non-trivial percentage of respondents (14%) indicated that they do intentionally share false political information on social media (Figure 1). These findings are consistent with empirical studies of similar constructs, such as lying (Buchanan & Kempley, 2021; Halevy et al., 2014; Serota & Levine, 2015) and “bullshitting” (Littrell et al., 2021a), which have shown that a small but consistent percentage of people admit to intentionally misleading others. 


Notably, it is possible that the prevalence of knowingly sharing political misinformation online is somewhat underreported in our data, given that some of the spreaders of it in our sample could have denied it when responding to that item (which, ironically, would be another instance of them spreading misinformation). Indeed, some research has found that survey respondents may sometimes hide their true beliefs or express agreement or support for a specific idea they actually oppose either as a joke or to signal their group identity (i.e., Lopez & Hillygus, 2018; Schaffner & Luks, 2018; Smallpage et al., 2022). Further, self-reported measures of behavior are sometimes only weakly correlated with actual behavior (Dang et al., 2020). However, there are good reasons to have confidence in this self-reported measure. First, self-report surveys have high reliability for measuring complex psychological constructs (e.g., beliefs, attitudes, preferences) and are sometimes better at predicting real-world outcomes than behavioral measures of those same constructs (Kaiser & Oswald, 2022). Second, the percentage of respondents in our sample who admitted to spreading false political information online aligns with findings from previous research. For example, Serota and Levine (2015) found that 14.1% of their sample admitted to telling at least one “big lie” per day, while Littrell and colleagues (2021a) found 17.3% of their sample admitted to engaging in “persuasive bullshitting” on a regular basis. These numbers are similar to the 14% of our sample who self-reported knowingly sharing false information. Third, previous studies have found that self-report measures of lying and bullshitting positively correlate with behavioral measures of those same constructs (Halevy et al., 2014; Littrell et al., 2021a; Zettler et al., 2015). Given that our dependent variable captures a conceptually similar construct to those other measures, we are confident that our self-report data reflects real-world behavior, at least to a reasonable degree.

Crucially, we also found that the correlational patterns we reported across multiple variables are highly consistent and make sense with respect to what prior theory would predict of people who share information they believe to be false. Indeed, “need for chaos” has recently been shown to be a strong motivator of sharing hostile, misleading political rumors online (Petersen et al., 2023) and, as Figure 4 illustrates, “need for chaos” was also the strongest positive predictor in our study of sharing false political information online (β = .18, p < .001). Moreover, as an added test of the reliability and validity of our dependent variable, we examined correlations between responses to our measure of sharing false political information online and single-scale items that reflect similar behavioral tendencies. For instance, our dependent variable is significantly and positively correlated (r = .53, p < .001) with the statement, “Just for kicks, I’ve said mean things to people on social media,” from the Sadism subscale of our “Dark Tetrad” measure. Additionally, our dependent variable also correlated well with two conceptually similar items from the Machiavellianism subscale, “I tend to manipulate others to get my way” (r = .43, p < .001) and “I have used deceit or lied to get my way” (r = .35, p < .001). Although the sizes of these effects do not suggest that these constructs are isomorphic, it is helpful to note that our dependent variable item specifically measures sharing false information about politics on social media, which arises from a diversity of motivations, and not lying about anything and everything across all domains.
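
To make the validity checks above concrete, the snippet below is a minimal sketch of how such bivariate Pearson correlations could be computed. It is not the authors' code; the file name and column names (for example, survey_responses.csv and shares_false_political_info) are hypothetical stand-ins for the survey items described in the text.

```python
# Hypothetical sketch of the convergent-validity correlations described above;
# not the authors' code. File and column names are assumed for illustration.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_responses.csv")  # Likert items coded numerically

# Items conceptually related to knowingly sharing false political information
related_items = {
    "said_mean_things_for_kicks": "Sadism subscale item",
    "manipulate_others_to_get_my_way": "Machiavellianism item",
    "used_deceit_or_lied_to_get_my_way": "Machiavellianism item",
}

for col, label in related_items.items():
    sub = df[["shares_false_political_info", col]].dropna()
    r, p = pearsonr(sub["shares_false_political_info"], sub[col])
    print(f"{label} ({col}): r = {r:.2f}, p = {p:.3g}")
```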

Finding 2: Reporting sharing false political information is associated with politically motivated behaviors and attitudes. 

Self-reported sharing of false political information on social media was significantly and positively correlated with having contacted an elected official within the previous year ( r  = .24,  p  < .001) and with the belief that, “People like me can influence government” ( r  = .23,  p  < .001). Additionally, people who self-report sharing false political information online also reported more frequent attendance at political meetings ( r  = .36,  p  < .001) and volunteering during elections ( r  = .41,  p  < .001) compared to participants who do not report sharing false political information online.

Although these findings may give the impression that respondents reporting spreading online political misinformation are somewhat civically virtuous, these respondents also report engaging in aggressive and disruptive political behaviors. Specifically, reporting spreading false information was significantly associated with greater reported involvement in political protests ( r  = .40,  p  < .001), acts of civil disobedience ( r  = .44,  p  < .001), and political violence ( r  = .46,  p  < .001). Spreading false information online was also significantly and positively related to believing that one is qualified for public office ( r  = .40,  p  < .001) and the desire to possibly run for office one day ( r  = .52,  p  < .001), but was only weakly related to staying informed about government and current affairs (“follows politics”;  r  = .05,  p  = .024).


Finding 3: Reporting sharing false political information online is associated with support for extremist groups. 

Using a sliding scale from 0 to 100, participants rated their feelings about various public figures and groups (Figure 3). While the self-reported tendency to knowingly share false political information online was weakly, but positively, associated with support for more mainstream public figures such as Donald Trump (r = .14, p < .001), Joe Biden (r = .13, p < .001), and Bernie Sanders (r = .09, p < .001), it was more strongly associated with support for Vladimir Putin (r = .40, p < .001). Likewise, self-reported sharing of false political information online was weakly but positively associated with support for the Democratic Party (r = .13, p < .001) and the Republican Party (r = .13, p < .001), but was most strongly associated with support for extremist groups such as the QAnon movement (r = .45, p < .001), Proud Boys (r = .42, p < .001), and White Nationalists (r = .42, p < .001). 


Finding 4: Reporting sharing false political information online is associated with dark psychological traits.

We constructed a multiple linear regression model to better understand the extent to which various psychological, political, and demographic characteristics might underlie the proclivity to knowingly share political misinformation on social media. Holding all other variables constant, a greater “need for chaos” ( β  = .18,  p  < .001) as well as higher levels of antagonistic, “dark tetrad” personality traits (a single factor measure of narcissism, Machiavellianism, psychopathy, and sadism;  β  = .18,  p  < .001) were the strongest positive predictors of self-reported sharing of political misinformation online. Self-reported sharing of false information was also predicted by higher levels of paranoia ( β  = .11,  p  < .001), dogmatism ( β  = .09,  p  = .001), and argumentativeness ( β  = .06,  p  = .035). People who feel that posting on social media gives them greater feelings of power and control ( β  = .14, p < .001) and allows them to more freely express opinions and attitudes they are reluctant to express in person ( β  = .06, p = .038) are also more likely to report knowingly sharing false political information online. Importantly, though sharing political misinformation online is positively predicted by religiosity ( β  = .07, p = .003), it is not significantly associated with political identity or the strength of one’s partisan or ideological views.
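
Because Finding 4 rests on an ordinary least squares model with standardized coefficients, a rough illustration may help. The sketch below is not the authors' model specification; the predictor names are hypothetical stand-ins for the scales described in the text, and z-scoring is used so the coefficients read as standardized betas.

```python
# Hypothetical OLS sketch in the spirit of Finding 4; not the authors' code.
# Variable names stand in for the scales described in the text.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical cleaned survey data

outcome = "shares_false_political_info"
predictors = [
    "need_for_chaos", "dark_tetrad", "paranoia", "dogmatism",
    "argumentativeness", "social_media_power", "social_media_self_expression",
    "religiosity", "partisan_strength", "ideological_strength",
]

# z-score outcome and predictors so coefficients read as standardized betas
z = df[[outcome] + predictors].dropna()
z = (z - z.mean()) / z.std()

X = sm.add_constant(z[predictors])
fit = sm.OLS(z[outcome], X).fit()
print(fit.summary())  # betas, p-values, R-squared
```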


Finding 5: People who report intentionally sharing false political information online are more likely to get their news from social media sites, particularly from outlets that are known for perpetuating fringe views.

On a scale from “everyday” to “never,” participants reported how often they get “information about current events, public issues, or politics” from various media sources, including offline  legacy media sources  (e.g., network television, cable news, local television, print newspapers, radio) and  online media sources  (e.g., online newspapers, blogs, YouTube, and various social media platforms). A principal components analysis of the online media sources revealed three distinct categories: 1)  online mainstream news media , made up of TV news websites, online news magazines, online newspapers; 2)  mainstream social media sites , such as YouTube, Facebook, Twitter, Instagram; and 3)  alternative social media sites , which comprised blogs, Reddit, Truth Social, Telegram, and 8Kun (factor loadings are listed in Table A8 of the appendix). After reverse-coding the scale for analysis, we examined bivariate correlations to determine whether the proclivity to share false political information online is meaningfully associated with the types of media sources participants get their information from. 
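
As a rough, simplified illustration of that grouping step (not the authors' actual procedure; no rotation is applied and the column names are hypothetical), a principal components analysis over frequency-of-use ratings might be run as follows:

```python
# Hypothetical PCA sketch over online media-source use frequencies;
# not the authors' procedure. Column names are assumed for illustration.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey_responses.csv")  # hypothetical

online_sources = [
    "tv_news_websites", "online_news_magazines", "online_newspapers",  # mainstream news
    "youtube", "facebook", "twitter", "instagram",                     # mainstream social
    "blogs", "reddit", "truth_social", "telegram", "eightkun",         # alternative social
]

X = StandardScaler().fit_transform(df[online_sources].dropna())
pca = PCA(n_components=3).fit(X)

# Loadings: rows are sources, columns are components; sources that load highly
# on the same component suggest a shared category of media use
loadings = pd.DataFrame(pca.components_.T, index=online_sources,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
print("Explained variance:", pca.explained_variance_ratio_.round(2))
```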

As shown in Figure 5A, reporting sharing false political information online is strongly associated with more frequent use of alternative ( r  = .46,  p  < .001) and mainstream ( r  = .42,  p  < .001) social media sites and weakly-to-moderately correlated with getting information from online ( r  = .20,  p  < .001) or offline/legacy ( r  = .17,  p  < .001) mainstream news sources. On an individual level (Figure 5B), reporting sharing false political information online was most strongly associated with getting information on current events, public issues, and politics from Truth Social ( r  = .41,  p  < .001), Telegram ( r  = .41,  p  < .001), and 8Kun ( r  = .41,  p  < .001), of which the latter two are popular among fringe groups known for promoting extremist views and conspiracy theories (Urman & Katz, 2022; Zeng & Schäfer, 2021). 

Methods

We surveyed 2,001 American adults (900 male, 1,101 female; mean age = 48.54 years, SD = 18.51; Bachelor’s degree or higher = 43.58%) from May 26 through June 30, 2022, using Qualtrics (qualtrics.com). For this survey, Qualtrics partnered with Cint and Dynata to recruit a demographically representative sample (self-reported sex, age, race, education, and income) based on U.S. Census records. Cint and Dynata maintain panels of subjects that are only used for research, and both comply fully with European Society for Opinion and Marketing Research (ESOMAR) standards for protecting research participants’ privacy and information security. Additionally, in keeping with Qualtrics data quality standards, responses were excluded from participants who failed six attention check items or who completed the survey in less than one-half of the estimated median completion time of 18.6 minutes (calculated from a soft-launch test of the questionnaire, n = 50). In exchange for their participation, respondents received incentives redeemable from the sample provider. These data were collected as part of a larger survey.
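
The exclusion rules described above amount to a simple screening step over the raw export. The following is a minimal sketch under stated assumptions (hypothetical column names, one row per respondent, and the assumption that a respondent must pass every attention check to be retained; the exact failure threshold is not specified in the text):

```python
# Hypothetical screening sketch reflecting the exclusions described above;
# not the authors' code. Column names and the pass/fail criterion are assumed.
import pandas as pd

df = pd.read_csv("raw_survey_export.csv")  # one row per respondent (hypothetical)

ESTIMATED_MEDIAN_MINUTES = 18.6  # soft-launch estimate reported in the text
attention_cols = [f"attention_check_{i}_passed" for i in range(1, 7)]

passed_all_checks = df[attention_cols].all(axis=1)
too_fast = df["completion_minutes"] < ESTIMATED_MEDIAN_MINUTES / 2

clean = df[passed_all_checks & ~too_fast].copy()
print(f"Retained {len(clean)} of {len(df)} responses")
```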

Our dependent variable asked respondents to rate their agreement with the following statement using a 5-point Likert-type scale (Figure 1):

“I share information on social media about politics even though I believe it may be false.”

In addition to this question, participants were also asked to rate the strength of certain political beliefs (i.e., whether they feel qualified to run for office, whether they think they might run for office one day, and whether they believe someone like them can influence government) and the frequency that they engaged in specific political behaviors in the previous 12 months (contacting elected officials, volunteering during an election, staying informed about government, and participating in political meetings, protests, civil disobedience, or violence). We calculated bivariate Pearson’s correlation coefficients for each of these variables with the item measuring whether one shares false political information on social media, which we have displayed in Table A4 of the Appendix.

Participants also used “feelings thermometers” to rate their attitudes toward a number of public political figures and groups. Each public figure or group was rated on a scale from 0 to 100, with scores of 0 to 50 reflecting negative feelings and scores from above 50 to 100 reflecting positive feelings. Although all correlations between the sharing-false-political-information variable and the public figures and groups were statistically significant, the strongest associations were with more adversarial figures (e.g., Putin) and groups (e.g., the QAnon Movement, Proud Boys, White Nationalists). We have plotted these associations in Figure 3. 

To provide a more complete description of individuals who are more likely to report intentionally sharing false political information online, we examine the predictive utility of a number of psychological attributes, political attitudes, and demographics variables in an ordinary least squares (OLS) multiple linear regression model (Figure 4). We provide precise estimates in tabular form for all predictors as well as the overall model in the Appendix.

Finally, participants were asked to rate 17 media sources according to the frequency (“everyday” to “never”) with which they use each for staying informed on current events, public issues, and politics. A principal components analysis revealed that the media sources represented four categories:  legacy mainstream news media  (network TV, cable TV, local TV, radio, and print newspapers),  online mainstream news media  (TV news websites, online news magazines, online newspapers),  mainstream social media sites  (YouTube, Facebook, Twitter, Instagram), and  alternative social media sites  (blogs, Reddit, Truth Social, Telegram, 8Kun). We calculated bivariate correlations between reporting sharing false political information online with the four categories of media sources as well as the 17 individual sources. We have plotted these associations in Figure 5 and provide a full list of intercorrelations for all variables as well as factor loadings from the PCA in the Appendix. 

Topics: Conspiracy Theories / Platforms / Psychology / Social Media

Cite this Essay

Littrell, S., Klofstad, C., Diekman, A., Funchion, J., Murthi, M., Premaratne, K., Seelig, M., Verdear, D., Wuchty, S., & Uscinski, J. E. (2023). Who knowingly shares false political information online?. Harvard Kennedy School (HKS) Misinformation Review . https://doi.org/10.37016/mr-2020-121

Bibliography

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives , 31 (2), 211–236. https://doi.org/10.1257/jep.31.2.211

Arceneaux, K., Gravelle, T. B., Osmundsen, M., Petersen, M. B., Reifler, J., & Scotto, T. J. (2021). Some people just want to watch the world burn: The prevalence, psychology and politics of the ‘need for chaos’. Philosophical Transactions of the Royal Society B: Biological Sciences , 376 (1822), 20200147. https://doi.org/10.1098/rstb.2020.0147

Arendt, H. (1972). Crises of the Republic: Lying in politics, civil disobedience on violence, thoughts on politics, and revolution . Houghton Mifflin Harcourt.

Armaly, M. T., & Enders, A. M. (2022). ‘Why me?’ The role of perceived victimhood in American politics. Political Behavior , 44 (4), 1583–1609. https://doi.org/10.1007/s11109-020-09662-x

Bail, C., Guay, B., Maloney, E., Combs, A., Hillygus, D. S., Merhout, F., Freelon, D., & Volfovsky, A. (2019). Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017. Proceedings of the National Academy of Sciences , 117 (1), 243–250. https://doi.org/10.1073/pnas.1906420116

Berlinski, N., Doyle, M., Guess, A. M., Levy, G., Lyons, B., Montgomery, J. M., Nyhan, B., & Reifler, J. (2021). The effects of unsubstantiated claims of voter fraud on confidence in elections. Journal of Experimental Political Science , 10 (1), 34–49. https://doi.org/10.1017/XPS.2021.18

Bizumic, B., & Duckitt, J. (2018). Investigating right wing authoritarianism with a very short authoritarianism scale. Journal of Social and Political Psychology , 6 (1), 129–150. https://doi.org/10.5964/jspp.v6i1.835

Bor, A., & Petersen, M. B. (2022). The psychology of online political hostility: A comprehensive, cross-national test of the mismatch hypothesis. American Political Science Review , 116 (1), 1–18. https://doi.org/10.1017/S0003055421000885

Bryanov, K., & Vziatysheva, V. (2021). Determinants of individuals’ belief in fake news: A scoping review determinants of belief in fake news. PLOS ONE , 16 (6), e0253717. https://doi.org/10.1371/journal.pone.0253717

Buchanan, T., & Benson, V. (2019). Spreading disinformation on Facebook: Do trust in message source, risk propensity, or personality affect the organic reach of “fake news”? Social Media + Society , 5 (4), 2056305119888654. https://doi.org/10.1177/2056305119888654

Buchanan, T., & Kempley, J. (2021). Individual differences in sharing false political information on social media: Direct and indirect effects of cognitive-perceptual schizotypy and psychopathy. Personality and Individual Differences , 182 , 111071. https://doi.org/10.1016/j.paid.2021.111071

Buhr, K., & Dugas, M. J. (2002). The intolerance of uncertainty scale: Psychometric properties of the English version. Behaviour Research and Therapy , 40 (8), 931–945. https://doi.org/10.1016/S0005-7967(01)00092-4

Celse, J., & Chang, K. (2019). Politicians lie, so do I. Psychological Research , 83 (6), 1311–1325. https://doi.org/10.1007/s00426-017-0954-7

Choi, T. R., & Sung, Y. (2018). Instagram versus Snapchat: Self-expression and privacy concern on social media. Telematics and Informatics , 35 (8), 2289–2298. https://doi.org/10.1016/j.tele.2018.09.009

 Chun, J. W., & Lee, M. J. (2017). When does individuals’ willingness to speak out increase on social media? Perceived social support and perceived power/control. Computers in Human Behavior , 74 , 120–129. https://doi.org/10.1016/j.chb.2017.04.010

Conrad, K. J., Riley, B. B., Conrad, K. M., Chan, Y.-F., & Dennis, M. L. (2010). Validation of the Crime and Violence Scale (CVS) against the Rasch measurement model including differences by gender, race, and age. Evaluation Review , 34 (2), 83–115. https://doi.org/10.1177/0193841X10362162

Costello, T. H., Bowes, S. M., Stevens, S. T., Waldman, I. D., Tasimi, A., & Lilienfeld, S. O. (2022). Clarifying the structure and nature of left-wing authoritarianism. Journal of Personality and Social Psychology , 122 (1), 135–170. https://doi.org/10.1037/pspp0000341

Courchesne, L., Ilhardt, J., & Shapiro, J. N. (2021). Review of social science research on the impact of countermeasures against influence operations. Harvard Kennedy School (HKS) Misinformation Review, 2 (5). https://doi.org/10.37016/mr-2020-79

Crawford, J. R., & Henry, J. D. (2004). The positive and negative affect schedule (PANAS): Construct validity, measurement properties and normative data in a large non-clinical sample. British Journal of Clinical Psychology , 43 (3), 245–265. https://doi.org/10.1348/0144665031752934

Dang, J., King, K. M., & Inzlicht, M. (2020). Why are self-report and behavioral measures weakly correlated? Trends in Cognitive Sciences , 24 (4), 267–269. https://doi.org/10.1016/j.tics.2020.01.007

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences , 113 (3), 554–559. https://doi.org/10.1073/pnas.1517441113

DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology , 70 (5), 979–995. https://doi.org/10.1037/0022-3514.70.5.979

Desmond, S. A., & Kraus, R. (2012). Liar, liar: Adolescent religiosity and lying to parents. Interdisciplinary Journal of Research on Religion , 8, 1–26. https://www.religjournal.com/pdf/ijrr08005.pdf

DeVerna, M. R., Guess, A. M., Berinsky, A. J., Tucker, J. A., & Jost, J. T. (2022). Rumors in retweet: Ideological asymmetry in the failure to correct misinformation. Personality and Social Psychology Bulletin , 01461672221114222. https://doi.org/10.1177/01461672221114222

Durand, M.-A., Yen, R. W., O’Malley, J., Elwyn, G., & Mancini, J. (2020). Graph literacy matters: Examining the association between graph literacy, health literacy, and numeracy in a Medicaid eligible population. PLOS ONE , 15 (11), e0241844. https://doi.org/10.1371/journal.pone.0241844

Ecker, U. K. H., Sze, B. K. N., & Andreotta, M. (2021). Corrections of political misinformation: No evidence for an effect of partisan worldview in a US convenience sample. Philosophical Transactions of the Royal Society B: Biological Sciences , 376 (1822), 20200145. https://doi.org/10.1098/rstb.2020.0145

Edelson, J., Alduncin, A., Krewson, C., Sieja, J. A., & Uscinski, J. E. (2017). The effect of conspiratorial thinking and motivated reasoning on belief in election fraud. Political Research Quarterly , 70 (4), 933–946. https://doi.org/10.1177/1065912917721061

Enders, A. M., Uscinski, J., Klofstad, C., & Stoler, J. (2022). On the relationship between conspiracy theory beliefs, misinformation, and vaccine hesitancy. PLOS ONE , 17 (10), e0276082. https://doi.org/10.1371/journal.pone.0276082

Frankfurt, H. G. (2009). On bullshit . Princeton University Press.

Garrett, R. K. (2017). The “echo chamber” distraction: Disinformation campaigns are the problem, not audience fragmentation. Journal of Applied Research in Memory and Cognition , 6 (4), 370–376. https://doi.org/10.1016/j.jarmac.2017.09.011

Garrett, R. K., & Bond, R. M. (2021). Conservatives’ susceptibility to political misperceptions. Science Advances , 7 (23), eabf1234. https://doi.org/10.1126/sciadv.abf1234

Garrett, R. K., Long, J. A., & Jeong, M. S. (2019). From partisan media to misperception: Affective polarization as mediator. Journal of Communication , 69 (5), 490–512. https://doi.org/10.1093/joc/jqz028

Grant, J. E., Paglia, H. A., & Chamberlain, S. R. (2019). The phenomenology of lying in young adults and relationships with personality and cognition. Psychiatric Quarterly , 90 (2), 361–369. https://doi.org/10.1007/s11126-018-9623-2

Green, C. E. L., Freeman, D., Kuipers, E., Bebbington, P., Fowler, D., Dunn, G., & Garety, P. A. (2008). Measuring ideas of persecution and social reference: The Green et al. Paranoid Thought Scales (GPTS). Psychological Medicine , 38 (1), 101–111. https://doi.org/10.1017/S0033291707001638

Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science , 363 (6425), 374–378. https://doi.org/10.1126/science.aau2706

Guess, A., Nyhan, B., & Reifler, J. (2020). Exposure to untrustworthy websites in the 2016 U.S. election. Nature Human Behaviour , 4 (5), 472–480. https://doi.org/10.1038/s41562-020-0833-x

Guess, A. M., & Lyons, B. A. (2020). Misinformation, disinformation, and online propaganda. In N. Persily & J. A. Tucker (Eds.), Social media and democracy: The state of the field and prospects for reform (pp. 10–33), Cambridge University Press. https://doi.org/10.1017/9781108890960

Ha, L., Graham, T., & Gray, J. (2022). Where conspiracy theories flourish: A study of YouTube comments and Bill Gates conspiracy theories. Harvard Kennedy School (HKS) Misinformation Review, 3 (5). https://doi.org/10.37016/mr-2020-107

Halevy, R., Shalvi, S., & Verschuere, B. (2014). Being honest about dishonesty: Correlating self-reports and actual lying. Human Communication Research , 40 (1), 54–72. https://doi.org/10.1111/hcre.12019

Hughes, A. (2019). A small group of prolific users account for a majority of political tweets sent by U.S. adults. Pew Research Center. https://pewrsr.ch/35YXMrM

Johnson, T. J., Wallace, R., & Lee, T. (2022). How social media serve as a super-spreader of misinformation, disinformation, and conspiracy theories regarding health crises. In J. H. Lipschultz, K. Freberg, & R. Luttrell (Eds.), The Emerald handbook of computer-mediated communication and social media (pp. 67–84). Emerald Publishing Limited. https://doi.org/10.1108/978-1-80071-597-420221005

Jonason, P. K., & Webster, G. D. (2010). The dirty dozen: A concise measure of the dark triad. Psychological Assessment , 22 (2), 420–432. https://doi.org/https://doi.org/10.1037/a0019265

Kaiser, C., & Oswald, A. J. (2022). The scientific value of numerical measures of human feelings. Proceedings of the National Academy of Sciences , 119 (42), e2210412119. https://doi.org/doi:10.1073/pnas.2210412119

Kim, J. W., Guess, A., Nyhan, B., & Reifler, J. (2021). The distorting prism of social media: How self-selection and exposure to incivility fuel online comment toxicity. Journal of Communication , 71 (6), 922–946. https://doi.org/10.1093/joc/jqab034

Lasser, J., Aroyehun, S. T., Simchon, A., Carrella, F., Garcia, D., & Lewandowsky, S. (2022). Social media sharing of low-quality news sources by political elites. PNAS Nexus , 1 (4). https://doi.org/10.1093/pnasnexus/pgac186

Lawson, M. A., & Kakkar, H. (2021). Of pandemics, politics, and personality: The role of conscientiousness and political ideology in the sharing of fake news. Journal of Experimental Psychology: General , 151 (5), 1154–1177. https://doi.org/10.1037/xge0001120

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2021a). The bullshitting frequency scale: Development and psychometric properties. British Journal of Social Psychology , 60 (1), e12379. https://doi.org/10.1111/bjso.12379

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2021b). ‘You can’t bullshit a bullshitter’ (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information. British Journal of Social Psychology , 60 (4), 1484–1505. https://doi.org/10.1111/bjso.12447

Lopez, J., & Hillygus, D. S. (March 14, 2018). Why so serious?: Survey trolls and misinformation . SSRN. http://dx.doi.org/10.2139/ssrn.3131087

MacKenzie, A., & Bhatt, I. (2020). Lies, bullshit and fake news: Some epistemological concerns. Postdigital Science and Education , 2 (1), 9–13. https://doi.org/10.1007/s42438-018-0025-4

Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online . Data & Society Research Institute. https://datasociety.net/library/media-manipulation-and-disinfo-online/

McClosky, H., & Chong, D. (1985). Similarities and differences between left-wing and right-wing radicals. British Journal of Political Science , 15 (3), 329–363. https://doi.org/10.1017/S0007123400004221

Metzger, M. J., Flanagin, A. J., Mena, P., Jiang, S., & Wilson, C. (2021). From dark to light: The many shades of sharing misinformation online. Media and Communication , 9 (1), 134–143. https://doi.org/10.17645/mac.v9i1.3409

Moran, R. E., Prochaska, S., Schlegel, I., Hughes, E. M., & Prout, O. (2021). Misinformation or activism: Mapping networked moral panic through an analysis of #savethechildren. AoIR Selected Papers of Internet Research , 2021 . https://doi.org/10.5210/spir.v2021i0.12212

Mosleh, M., & Rand, D. G. (2022). Measuring exposure to misinformation from political elites on Twitter. Nature Communications , 13 , 7144. https://doi.org/10.1038/s41467-022-34769-6

Nguyen, H., & Gokhale, S. S. (2022). Analyzing extremist social media content: A case study of Proud Boys. Social Network Analysis and Mining , 12 (1), 115. https://doi.org/10.1007/s13278-022-00940-6

Okamoto, S., Niwa, F., Shimizu, K., & Sugiman, T. (2001). The 2001 survey for public attitudes towards and understanding of science and technology in Japan. National Institute of Science and Technology Policy Ministry of Education, Culture, Sports, Science and Technology. https://nistep.repo.nii.ac.jp/record/4385/files/NISTEP-NR072-SummaryE.pdf

Paulhus, D. L., Buckels, E. E., Trapnell, P. D., & Jones, D. N. (2020). Screening for dark personalities. European Journal of Psychological Assessment , 37 (3), 208–222. https://doi.org/10.1027/1015-5759/a000602

Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2021). Shifting attention to accuracy can reduce misinformation online. Nature , 592 (7855), 590–595. https://doi.org/10.1038/s41586-021-03344-2

Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality , 88 (2), 185–200. https://doi.org/10.1111/jopy.12476

Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences , 25 (5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007

Petersen, M. B., Osmundsen, M., & Arceneaux, K. (2023). The “need for chaos” and motivations to share hostile political rumors. American Political Science Review , 1–20. https://doi.org/10.1017/S0003055422001447

Romer, D., & Jamieson, K. H. (2021). Patterns of media use, strength of belief in Covid-19 conspiracy theories, and the prevention of Covid-19 from March to July 2020 in the United States: Survey study. Journal of Medical Internet Research , 23 (4), e25215. https://doi.org/10.2196/25215

Roozenbeek, J., van der Linden, S., Goldberg, B., Rathje, S., & Lewandowsky, S. (2022). Psychological inoculation improves resilience against misinformation on social media. Science Advances , 8 (34), eabo6254. https://doi.org/doi:10.1126/sciadv.abo6254

Sanderson, Z., Brown, M. A., Bonneau, R., Nagler, J., & Tucker, J. A. (2021). Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform. Harvard Kennedy School (HKS) Misinformation Review , 2 (4). https://doi.org/10.37016/mr-2020-77

Schaffner, B. F., & Luks, S. (2018). Misinformation or expressive responding? What an inauguration crowd can tell us about the source of political misinformation in surveys. Public Opinion Quarterly , 82 (1), 135–147. https://doi.org/10.1093/poq/nfx042

Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology , 34 (2), 138–157. https://doi.org/10.1177/0261927X14528804

Smallpage, S. M., Enders, A. M., Drochon, H., & Uscinski, J. E. (2022). The impact of social desirability bias on conspiracy belief measurement across cultures. Political Science Research and Methods , 11 (3), 555–569. https://doi.org/10.1017/psrm.2022.1

Starbird, K. (2019). Disinformation’s spread: Bots, trolls and all of us. Nature , 571 (7766), 449–450. https://doi.org/10.1038/d41586-019-02235-x

Stern, A. M. (2019). Proud Boys and the white ethnostate: How the alt-right is warping the American imagination . Beacon Press.

Sunstein, C. R. (2021). Liars: Falsehoods and free speech in an age of deception . Oxford University Press.

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. SSRN. https://dx.doi.org/10.2139/ssrn.3144139

Urman, A., & Katz, S. (2022). What they do in the shadows: Examining the far-right networks on Telegram. Information, Communication & Society , 25 (7), 904–923. https://doi.org/10.1080/1369118X.2020.1803946

Uscinski, J., Enders, A., Seelig, M. I., Klofstad, C. A., Funchion, J. R., Everett, C., Wuchty, S., Premaratne, K., & Murthi, M. N. (2021). American politics in two dimensions: Partisan and ideological identities versus anti-establishment orientations. American Journal of Political Science , 65 (4), 773–1022. https://doi.org/10.1111/ajps.12616

Uscinski, J., Enders, A. M., Klofstad, C., & Stoler, J. (2022). Cause and effect: On the antecedents and consequences of conspiracy theory beliefs. Current Opinion in Psychology , 47, 101364. https://doi.org/10.1016/j.copsyc.2022.101364

van der Linden, S., Roozenbeek, J., Maertens, R., Basol, M., Kácha, O., Rathje, S., & Traberg, C. S. (2021). How can psychological science help counter the spread of fake news? The Spanish Journal of Psychology , 24 , e25. https://doi.org/10.1017/SJP.2021.23

Vincent, E. M., Théro, H., & Shabayek, S. (2022). Measuring the effect of Facebook’s downranking interventions against groups and websites that repeatedly share misinformation. Harvard Kennedy School (HKS) Misinformation Review, 3 (3). https://doi.org/10.37016/mr-2020-100

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science , 359 (6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication , 37 (1), 136–144. https://doi.org/10.1080/10584609.2020.1716500

Xiao, X., Borah, P., & Su, Y. (2021). The dangers of blind trust: Examining the interplay among social media news use, misinformation identification, and news trust on conspiracy beliefs. Public Understanding of Science , 30 (8), 977–992. https://doi.org/10.1177/0963662521998025

Zeng, J., & Schäfer, M. S. (2021). Conceptualizing “dark platforms.” Covid-19-related conspiracy theories on 8kun and Gab. Digital Journalism , 9 (9), 1321–1343. https://doi.org/10.1080/21670811.2021.1938165

Zettler, I., Hilbig, B. E., Moshagen, M., & de Vries, R. E. (2015). Dishonest responding or true virtue? A behavioral test of impression management. Personality and Individual Differences , 81 , 107–111. https://doi.org/10.1016/j.paid.2014.10.007

This research was funded by grants from the National Science Foundation #2123635 and #2123618.

Competing Interests

All authors declare no competing interests.

Approval for this study was granted by the University of Miami Human Subject Research Office on May 13, 2022 (Protocol #20220472).

This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All materials needed to replicate this study are available via the Harvard Dataverse: https://doi.org/10.7910/DVN/AWNAKN


Untangling Disinformation

As Iran attacked Israel, old and faked videos and images got millions of views on X

By Huo Jingnan and Jude Joffe-Block

A billboard in central Tehran, Iran, depicts named Iranian ballistic missiles in service, with text in Arabic reading "the honest [person's] promise" and text in Persian reading "Israel is weaker than a spider's web," on April 15. Iran attacked Israel over the weekend with missiles, which it said was a response to a deadly strike on its consulate building in Damascus, Syria. Atta Kenare/AFP via Getty Images

After Iran sent more than 300 drones and ballistic missiles toward Israel on Saturday, Iranian state television showed purported damage on the ground with video of a fire under a hazy, orange sky. But that footage was not from Israel or anywhere close. Fact-checkers identified it as footage from a Chilean wildfire in February.

In reality, nearly all the drones and missiles were intercepted with help from the U.S. and other countries, and no deaths were reported.


As the international community worried that Iran's attacks would turn Israel's war in Gaza into a wider regional conflict, a version of the clip appeared on social media. It was one of the images and videos that circulated on social media platforms and falsely claimed to show the aftermath of the attack on the ground in Israel. Some of the images and videos were from previous conflicts or wildfires, others were from video games, and some appeared to have been made with generative artificial intelligence. They have gained millions of views on X, the platform formerly known as Twitter, according to researchers from the Institute for Strategic Dialogue.


Similar footage surfaced in the aftermath of the Oct. 7 attack on Israel by Hamas and Russia's invasion of Ukraine, said Moustafa Ayad, ISD's executive director for Africa, the Middle East and Asia, in an interview with NPR. He co-authored a blog post for ISD that was published Sunday.


Misinformation can gain significant traction when people urgently need answers during time-bound events such as election days or armed conflicts, said Isabelle Frances-Wright, ISD's U.S. director of technology and society. Breaking news events, where facts are not yet firmly established, are frequently a vector for false and misleading content on social media.

ISD researchers identified over 30 "false, misleading, or AI generated images and videos" on X that garnered over 35 million views in total, according to the blog post. When NPR reviewed and revisited the posts Monday afternoon U.S. Eastern time, some had been removed, but others continued to gain views. One of the posts came from a satire account, but the poster did not flag that the post was satirical.

An X post that showed computer-generated footage of rapid explosions with the caption "MARK THIS TWEET IN HISTORY WW3 HAS OFFICIALLY STARTED" had over 13 million views when ISD researchers checked on Sunday. By Monday afternoon U.S. Eastern time, the same video's views topped 22 million. Another X post that shared outdated footage of rockets in the region from October had fewer than 200,000 views on Sunday. By Monday, the post had gained over a million views, despite the author acknowledging in a second post that the video was old.

ISD tracked multiple major platforms, including Facebook, Instagram, TikTok and YouTube, and found that the clips that gained the most views were on X, Ayad said.

NPR reached out to X for a comment about the misleading posts on its platform and received an automated response.


Most of the accounts that researchers identified sport a blue check, which used to mean the user had been verified by Twitter but now means the user has an active subscription to X Premium and meets X's eligibility rules. Premium accounts enjoy a wider reach on the platform and stand to gain revenue from their exposure.

It's part of how the platform has changed under the ownership of Elon Musk. Aside from the blue checks, the platform has also dialed back on content moderation and relied more on users flagging content in "community notes."

Of the posts that ISD researchers identified, they found only two that had a community note attached on Sunday. By Monday, 10 of the posts had community notes labeling the content as misleading, outdated or computer generated, but those posts appeared to continue to draw views.

Twitter once muzzled Russian and Chinese state propaganda. That's over now

Twitter once muzzled Russian and Chinese state propaganda. That's over now

It's unclear who is behind the accounts or what their motive is, Ayad said. Some have previously posted pro-Iran or pro-Kremlin content and already had large followings. Some purport to be from open-source intelligence researchers or citizen journalists.

Frances-Wright noticed that many users also mistakenly thought that footage that had been verified and released by credible sources was fake.

"People are really struggling to understand what is true and what is false."

Ayad said not only does the misleading content give viewers a false sense of the conflict, but the posts also spur Islamophobia and antisemitism.

"Users who see this content will often comment about how — 'I hope the Muslims kill the Jews' or 'the Jews die,'" said Ayad.

"We've seen that on Oct. 7 [when Hamas attacked Israel]. We've seen that in the bombardment of Gaza. It's similar in nature. Like it creates a ripple effect."

NPR's Research, Archives & Data Strategy team contributed research to this story


Opinion: Social media is revolutionizing migration into the US — and spreading dangerous misinformation

In 1848, an Irish immigrant in Fort Smith, Ark., wrote a letter home extolling conditions in his newfound country. "Everything I need to live on is so cheap that it costs but very little to support a man and his family here," he stated. "I address myself to my friends. I would say to them come here one and all and don't hesitate one moment."

Throughout the 19th and 20th centuries, as successive waves of immigrants arrived in the United States from Europe and other parts of the world, their letters to friends and families at home frequently contained glowing reports of the better opportunities and higher wages to be found in America. Often, they included detailed instructions, money and even boat tickets for the next wave of would-be immigrants waiting at home; when a Swedish immigrant named Carl sent his sister a letter in 1889, for instance, he included a ticket to America and travel tips.

Historians who study these "America letters" between immigrants and their social networks at home argue that the letters were a vital driver of immigration to the United States. And while some scholars have pointed out that these letters occasionally tended to gloss over the inevitable hardships, difficult living conditions and insecurity that could also define the immigrant experience, the letters also served as a conduit for crucial information that could help migrants make informed decisions about where to go and how to get there.

These days, communication plays a very similar role in motivating today’s migrants to make the journey. But social media is revolutionizing migrant communication — and migration itself. 

Today, would-be immigrants, the vast majority of whom are in the Global South, are motivated to make the journey not only by direct contact with family and friends, but also by a flood of unsolicited and often idealized messages, images and videos on social media — especially TikTok and Meta-owned WhatsApp.

Unlike the old letters, which took days or weeks to arrive, social media messaging is virtually instantaneous. Social media posts reach a much broader audience as well, thanks to the algorithms that match users to their interests. Most importantly, however, migrant social media messaging is riddled with disinformation and rumors that are causing new and unprecedented challenges at the border and beyond. As any search of Facebook, Instagram or TikTok will reveal, a majority of the content aimed at migrants portrays only the bright side of the American Dream: images of the American flag and skyscrapers, and promises of steady employment, as well as false assurances of a deceptively easy passage through perilous migrant routes like the Darién Gap.

Furthermore, these WhatsApp chats, Facebook groups and viral TikToks make the process of legal immigration seem much simpler and easier than it actually is. Immigrants today face a far more difficult and complex path to legal migration than those in the 19th and early 20th centuries. As a result, many are choosing the one legal path available to them: asking for asylum at the U.S.-Mexico border.

In recent years, as thousands of migrants in northern Mexico navigate the complex process to get appointments with CBP ONE (the U.S. Customs and Border Protection application migrants use to schedule appointments at the border to formally request asylum), their social media feeds have been flooded with videos of migrants getting the coveted appointments and crossing successfully.

Plenty of information is generated by migrants themselves, just as the old immigrant letters were. Yet there's also a nefarious new source of this viral content: according to a 2022 Tech Transparency Project investigation, many of these messages are actually created by coyotes, cartel-linked traffickers who transport migrants throughout the Americas and are advertising widely on social media.

Taken together, this flood of online misinformation is generating sudden and unprecedented shifts in migration patterns. On March 12, 2023, migrants rushed the U.S. border at the Paso Del Norte International Bridge in El Paso, Texas, after a rumor spread on social media that migrants would be allowed to enter. In the final months of 2023, migrants on Facebook and WhatsApp saw rumors — supposedly started by migrant traffickers in order to generate business — that CBP ONE was going to shut down in December. These rumors may have been a key factor behind the highest number ever of migrant encounters on the border that same month.  

Migration misinformation is so pervasive across global social media networks that it is even spurring spontaneous migrations from completely new places. The summer of 2023 saw a rapid increase in the number of migrants arriving from Mauritania, from almost none in previous years to over 8,000 in the span of just over a year, after word spread on TikTok and other forms of social media advertising a new route through Nicaragua and promising that "the American Dream is still available." What the videos didn't show, however, were the dangers awaiting migrants upon crossing through the rest of Central America and Mexico, as well as the unpleasant possibilities of being detained, deported or stuck in a shelter in the United States — all of which would become realities for Mauritanian migrants who made the journey.

One of the reasons migrants have come to rely on social media for information is that trustworthy information from reliable sources can be hard to come by. Major U.S. migration policy is often negotiated behind closed doors through bilateral agreements with countries throughout Latin America, most notably Mexico. And governments have been slow or unable to provide accurate and updated information to migrants via the social media channels they use most.

“When CBP ONE was updated post-Title 42, we were told by US Customs and Border Patrol that the registration numbers for migrants were assigned randomly when making an account, and did not indicate the sequence of when they were made,” said Father Brian Strassburger, SJ, director of Del Camino Jesuit Border Ministries in the Rio Grande Valley. “But eventually, it became painfully obvious that the first digits of the registration numbers clearly followed a sequence that assigned a percentage of the appointments. When things like this happen, we begin to lose our credibility with the migrant community, and our ability to combat misinformation is diminished.”  

Indeed, the challenges of migrant communication in the social media age mean that immigrant advocates on the border and in Latin America must work constantly to dispel rumors started by traffickers, respond to sudden social media-fueled migrations, and correct the basic misunderstandings about immigration policy that spread online.

“The principal enemy we have is disinformation,” says Karen Perez, director of Jesuit Refugee Services Mexico. “Five years ago, it wasn’t like that.”  

As with the "America letters" of the past, migrants rely on communication with each other; it can be a lifeline as they plan their journeys. Yet the migrant advocates of today face an enormous task in trying to tackle the instantaneously proliferating sources of misinformation and migration rumors on social media. They would do well to dedicate personnel and time to closely monitor social media networks and to try to proactively produce content that is reliable and trustworthy, such as Al Otro Lado, an organization that regularly produces videos explaining U.S. immigration processes and dispelling incorrect information in a variety of languages.

Social media companies can help as well, by doing a better job of quickly taking down content that migrants, government agencies and advocacy organizations flag as disinformation. In that sense, the new communications technologies of today — in contrast to the "America letters" of the past — present both greater problems for migrants and those who respond to migration, and greater possibilities for intervention and improvement.

Harrison Hanvey is manager of outreach and partnerships at the Jesuit Conference of Canada and the U.S.  

Julia G. Young is an associate professor of History at The Catholic University of America.  




NPR suspends Uri Berliner, editor who accused the network of liberal bias

By Aimee Picchi

Edited By Alain Sherter

April 17, 2024 / 8:18 AM EDT / CBS News

National Public Radio has suspended Uri Berliner, a senior editor who earlier this month claimed in an essay that the network had "lost America's trust" by pushing progressive views in its coverage while suppressing dissenting opinions.

Berliner's suspension was reported by NPR media correspondent David Folkenflik, who said that the senior editor was suspended for five days without pay starting on Friday. A formal rebuke from NPR said Berliner had violated its policy of securing prior approval to write for other news outlets, and warned that he would be fired if he breached those guidelines in future, Folkenflik reported.

NPR declined to comment to CBS News. "NPR does not comment on individual personnel matters, including discipline," a spokesperson said. 

Berliner's essay in the Free Press caused a firestorm of debate, with some conservatives, including former President Donald Trump, calling on the government to "defund" the organization. Some of Berliner's NPR colleagues also took issue with the essay, with "Morning Edition" host Steve Inskeep writing on his Substack that the article was "filled with errors and omissions."

"The errors do make NPR look bad, because it's embarrassing that an NPR journalist would make so many," Inskeep wrote.

In the essay, Berliner wrote that NPR has always had a liberal bent, but that for most of his 25-year tenure it had retained an open-minded, curious culture. "In recent years, however, that has changed," he wrote. "Today, those who listen to NPR or read its coverage online find something different: the distilled worldview of a very small segment of the U.S. population."

Berliner added, "[W]hat's notable is the extent to which people at every level of NPR have comfortably coalesced around the progressive worldview." The "absence of viewpoint diversity," he wrote, "is the most damaging development at NPR."

After the essay's publication, NPR's top editor, Edith Chapin, said she strongly disagrees with Berliner's conclusions and is proud to stand behind NPR's work.

COVID coverage, DEI initiatives

Berliner criticized coverage of major events at NPR, singling out its reporting on COVID and Hunter Biden as problematic. On the first topic, he wrote that the network didn't cover the theory that COVID-19 had been created in a Chinese lab, a theory he claimed NPR staffers "dismissed as racist or a right-wing conspiracy."

He also took NPR to task for what he said was a failure to report developments related to Hunter Biden's laptop. "With the election only weeks away, NPR turned a blind eye," Berliner wrote.

Berliner also criticized NPR for its internal management, citing what he claims is a growing focus on diversity, equity and inclusion initiatives, or DEI.

"Race and identity became paramount in nearly every aspect of the workplace," Berliner wrote. "A growing DEI staff offered regular meetings imploring us to 'start talking about race'."

Inskeep said Berliner's essay left out the context that many other news organizations didn't report on Hunter Biden's laptop over questions about its authenticity. He also disputed Berliner's characterization that NPR editors and reporters don't debate story ideas. 

"The story is written in a way that is probably satisfying to the people who already believe it, and unpersuasive to anyone else — a mirror image of his critique of NPR," Inskeep wrote.

—With reporting by the Associated Press.

Aimee Picchi is the associate managing editor for CBS MoneyWatch, where she covers business and personal finance. She previously worked at Bloomberg News and has written for national news outlets including USA Today and Consumer Reports.



Anonymous users are dominating right-wing discussions online. They also spread false information

FILE - The X logo is shown on a computer screen in Belgrade, Serbia, July 24, 2023. (AP Photo/Darko Vojinovic, File)

FILE - Tesla and SpaceX CEO Elon Musk addresses the European Jewish Association's conference, Jan. 22, 2024, in Krakow, Poland. (AP Photo/Czarek Sokolowski, File)

FILE - Republican presidential candidate former President Donald Trump speaks at a campaign event in Grand Rapids, Mich., April 2, 2024. (AP Photo/Paul Sancya, File)

FILE - Rep. Marjorie Taylor Greene, R-Ga., arrives for an interview in Laconia, N.H., Jan. 22, 2024. (AP Photo/Matt Rourke, File)

NEW YORK (AP) — The reposts and expressions of shock from public figures followed quickly after a user on the social platform X who uses a pseudonym claimed that a government website had revealed “skyrocketing” rates of voters registering without a photo ID in three states this year — two of them crucial to the presidential contest.

“Extremely concerning,” X owner Elon Musk replied twice to the post this past week.

“Are migrants registering to vote using SSN?” Georgia Rep. Marjorie Taylor Greene, an ally of former President Donald Trump, asked on Instagram, using the acronym for Social Security number.

Trump himself posted to his own social platform within hours to ask, “Who are all those voters registering without a Photo ID in Texas, Pennsylvania, and Arizona??? What is going on???”


State election officials soon found themselves forced to respond. They said the user, who pledges to fight, expose and mock “wokeness,” was wrong and had distorted Social Security Administration data. Actual voter registrations during the time period cited were much lower than the numbers being shared online.

Stephen Richer, the recorder in Maricopa County, Arizona, which includes Phoenix, refuted the claim in multiple X posts while Jane Nelson, the secretary of state in Texas, issued a statement calling it “totally inaccurate.”

Yet by the time they tried to correct the record, the false claim had spread widely. In three days, the pseudonymous user’s claim amassed more than 63 million views on X, according to the platform’s metrics. A thorough explanation from Richer attracted a fraction of that, reaching 2.4 million users.

The incident sheds light on how social media accounts that shield the identities of the people or groups behind them through clever slogans and cartoon avatars have come to dominate right-wing political discussion online even as they spread false information.


The accounts enjoy a massive reach that is boosted by engagement algorithms, by social media companies greatly reducing or eliminating efforts to remove phony or harmful material, and by endorsements from high-profile figures such as Musk. They also can generate substantial financial rewards from X and other platforms by ginning up outrage against Democrats.

Many such internet personalities identify as patriotic citizen journalists uncovering real corruption. Yet their demonstrated ability to spread misinformation unchecked while disguising their true motives worries experts with the United States in a presidential election year.

They are exploiting a long history of trust in American whistleblowers and anonymous sources, said Samuel Woolley, director of the Propaganda Research Lab at the University of Texas at Austin.

“With these types of accounts, there’s an allure of covertness, there’s this idea that they somehow might know something that other people don’t,” he said. “They’re co-opting the language of genuine whistleblowing or democratically inclined leaking. In fact what they’re doing is antithetical to democracy.”


The claim that spread online this past week misused Social Security Administration data tracking routine requests made by states to verify the identity of individuals who registered to vote using the last four digits of their Social Security number. These requests are often made multiple times for the same individual, meaning they do not necessarily correspond one-to-one with people registering to vote.

The larger implication is that the cited data represents people who entered the U.S. illegally and are supposedly registering to vote with Social Security numbers they received for work authorization documents. But only U.S. citizens are allowed to vote in federal elections and illegal voting by those who are not is exceedingly rare because states have processes to prevent it.

Accounts that do not disclose the identities of those behind them have thrived online for years, gaining followers for their content on politics, humor, human rights and more. People have used anonymity on social media to avoid persecution by repressive authorities or to speak freely about sensitive experiences. Many left-wing protesters adopted anonymous online identities during the Occupy Wall Street movement of the early 2010s.

The meteoric rise of a group of right-wing pseudonymous influencers who act as alternative information sources has been more recent. It’s coincided with a decline in public trust in government and media through the 2020 presidential election and the COVID-19 pandemic.

These influencers frequently spread misinformation and otherwise misleading content, often in service of the same recurring narratives such as alleged voter fraud, the “woke agenda” or Democrats supposedly encouraging a surge of people through illegal immigration to steal elections or replace whites. They often use similar content and reshare each other’s posts.

The account that posted the recent misinformation also has spread bogus information about the Israel-Hamas war, sharing a post last fall that falsely claimed to show a Palestinian “crisis actor” pretending to be seriously injured.

Since his takeover of Twitter in 2022, Musk has nurtured the rise of these accounts, frequently commenting on their posts and sharing their content. He also has protected their anonymity. In March, X updated its privacy policy to ban people from exposing the identity of an anonymous user.

Musk also rewards high engagement with financial payouts. The X user who spread the false information about new voter registrants has racked up more than 2.4 million followers since joining the platform in 2022. The user, in a post last July, reported earning more than $10,000 from X's new creator ad revenue program. X did not respond to a request for comment beyond an automated reply.

Tech watchdogs said that while it’s critical to maintain spaces for anonymous voices online, they shouldn’t be allowed to spread lies without accountability.

“Companies must vigorously enforce terms of service and content policies that promote election integrity and information integrity generally,” said Kate Ruane, director of the Free Expression Project at the Center for Democracy and Technology.

The success of these accounts shows how financially savvy users have deployed the online trolling playbook to their advantage, said Dale Beran, a lecturer at Morgan State University and the author of “It Came from Something Awful: How a Toxic Troll Army Accidentally Memed Donald Trump into Office.”

“The art of trolling is to get the other person enraged,” he said. “And we now know getting someone enraged really fuels engagement and gives you followers and so will get you paid. So now it’s sort of a business.”

Some pseudonymous accounts on X have used their brands to build loyal audiences on other platforms, from Instagram to the video-sharing platform Rumble and the encrypted messaging platform Telegram. The accounts themselves — and many of their followers — publicly promote their pride in America and its founding documents.

It's concerning that many Americans place their trust in these shadowy online sources without thinking critically about who is behind them or how they may want to harm the country, said Kara Alaimo, a communications professor at Fairleigh Dickinson University who has written about toxicity on social media.

“We know that foreign governments including China and Russia are actively creating social media accounts designed to sow domestic discord because they think weakening our social fabric gives their countries a competitive advantage,” she said. “And they’re right.”

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

This story has been corrected to reflect that the name of the Texas secretary of state is Jane Nelson, not Janet Nelson.

By Ali Swenson


Press Freedom Essay in 500+ Words in English for Students


Press freedom means that media, both digital and print, are free from any state control. In today's modern world, press freedom is very important to safeguard democracy and encourage an accountable and transparent government. Different countries have their own laws regarding press freedom. Countries like India, the USA, South Korea, and Japan guarantee freedom of expression and speech, which means people and organisations are free to express their thoughts, share ideas, and express themselves without any government interference. However, one should understand that freedom of expression is one thing, and spreading false information or hate is another. On this page, we will discuss a press freedom essay in 500+ words.


Press Freedom Essay

In a democratic society, press freedom is very important. In today's modern world, there are different types of press: digital media, print media, the internet, broadcasting, newspapers, etc. According to the Press Freedom Index 2023, Norway has been ranked #1 for the seventh consecutive year. India, on the other hand, was ranked #161 out of the 180 countries listed. This low rank reflects the significant decline of press freedom in the country.

We often hear that the cost of freedom is eternal vigilance. Thus, it is the responsibility of the press to remain vigilant for people's safety, and the media is also responsible for monitoring threats to people's freedom. Freedom of the press helps hold those in power accountable and ensures that the state's funds and resources are not used for personal benefit.


Importance of Press Freedom

There are several reasons for the importance of press freedom. It is essential for the proper functioning of our democratic societies. Press freedom ensures transparency and accountability. We, as citizens, can access a wide range of sources of information. This information offered by the press allows us to make informed decisions about our government and raise our voices against injustice and unconstitutional activities.

  • A free press is sometimes referred to as a ‘watchdog’. With their microphones and cameras, journalists investigate and report on public interest issues, exposing corruption, abuse of power, and other wrongdoing.
  • Advocating and protecting human rights is another significant reason for freedom of the press. In war-torn places or countries, journalists highlight human rights abuses, discrimination, and injustices. These help raise awareness and catalyze action to address these issues.
  • Press freedom can also encourage innovation and progress by promoting the free flow of information and ideas. This can help create an environment where creativity can thrive, leading to advancements in technology, science, culture, and other fields.
  • In a democracy, press freedom is an essential component of the system of checks and balances. 
  • In a diverse country like India, press freedom can support cultural diversity and pluralism by offering a platform for people to express or raise their voices against injustice.

Press Freedom Challenges

Every freedom comes with its challenges, and press freedom is no exception. The job of a journalist is full of challenges and risks. Many journalists who expose scammers or corrupt political leaders receive threats, some of them devastating.

  • In times of distress or an emergency, countries often impose censorship on the press. It means that only news approved by the government will be published or broadcast.
  • The challenges facing female journalists are even worse. In some countries, cultural norms restrict women from working as journalists.
  • We, as an audience, often find it difficult to distinguish between reliable journalism and false or misleading information. This is especially common in today's world, where fake news and misinformation have surged on social media and digital platforms.
  • As of 2022, India's literacy rate is 76.32%. A large part of the population still lacks critical thinking skills and media literacy.


Conclusion

Press freedom is very important to keep us informed and vigilant about today's world and the actions of the government. Press freedom ensures that the government is transparent and accountable, and it helps in the smooth functioning of the democratic process. It is our responsibility to understand how important freedom of the press is and how it can help shape our decisions.

10-Line Essay on Press Freedom

Here is a 10-line essay on press freedom. 

  • Freedom of the Press is crucial to safeguarding democracy.
  • Freedom of the Press is categorised under Article 19(1) of the Indian Constitution.
  • The World Press Freedom Index releases a report on countries with freedom of the press.
  • India was ranked 161 out of 180 countries in the 2023 World Press Freedom Index.
  • A free press is sometimes referred to as a ‘watchdog’.
  •  Freedom of the press helps hold those in power accountable and ensures that the state’s funds and resources are not used for personal benefits.
  • Press censorship, gender discrimination, and the spreading of false information are some challenges to press freedom.
  • Journalists and media organisations often fall victim to cyberattacks and online hacks.
  • Press freedom can support cultural diversity and pluralism.
  • Press freedom can advocate for and protect human rights in war-torn or disputed areas.

FAQs

Q1. Why is press freedom important in a democratic society?
Ans: In a democratic society, press freedom is very important. In today's modern world, there are different types of press: digital media, print media, the internet, broadcasting, newspapers, etc. According to the Press Freedom Index 2023, Norway has been ranked #1 for the seventh consecutive year, while India was ranked #161 out of the 180 countries listed, reflecting a significant decline of press freedom in the country.

Q2. When is Press Freedom Day observed?
Ans: Press Freedom Day is globally observed on the 3rd of May every year.

Q3. What was India's rank in the World Press Freedom Index 2023?
Ans: According to the World Press Freedom Index 2023 report, India was ranked #161 out of the 180 countries listed. This low rank reflects the significant decline of press freedom in the country.


Shiva Tyagi

With over a year of experience, I have developed a passion for writing blogs on a wide range of topics. I am mostly inspired by topics related to social and environmental fields, where writing can lead to a positive outcome.



China’s Advancing Efforts to Influence the U.S. Election Raise Alarms

China has adopted some of the same misinformation tactics that Russia used ahead of the 2016 election, researchers and government officials say.

A photo illustration shows a collage of images of President Biden, Donald Trump, the Chinese leader Xi Jinping, a social media account page and the U.S. capital.

By Tiffany Hsu and Steven Lee Myers

Covert Chinese accounts are masquerading online as American supporters of former President Donald J. Trump, promoting conspiracy theories, stoking domestic divisions and attacking President Biden ahead of the election in November, according to researchers and government officials.

The accounts signal a potential tactical shift in how Beijing aims to influence American politics, with more of a willingness to target specific candidates and parties, including Mr. Biden.

In an echo of Russia’s influence campaign before the 2016 election, China appears to be trying to harness partisan divisions to undermine the Biden administration’s policies, despite recent efforts by the two countries to lower the temperature in their relations.

Some of the Chinese accounts impersonate fervent Trump fans, including one on X that purported to be “a father, husband and son” who was “MAGA all the way!!” The accounts mocked Mr. Biden’s age and shared fake images of him in a prison jumpsuit, or claimed that Mr. Biden was a Satanist pedophile while promoting Mr. Trump’s “Make America Great Again” slogan.

“I’ve never seen anything along those lines at all before,” said Elise Thomas, a senior analyst at the Institute for Strategic Dialogue, a nonprofit research organization that uncovered a small group of the fake accounts posing as Trump supporters.

Ms. Thomas and other researchers have linked the new activity to a long-running network of accounts connected with the Chinese government known as Spamouflage. Several of the accounts they detailed previously posted pro-Beijing content in Mandarin — only to resurface in recent months under the guise of real Americans writing in English.

In a separate project, the Foundation for Defense of Democracies, a research organization in Washington, identified 170 inauthentic pages and accounts on Facebook that have also pushed anti-American messages, including pointed attacks on Mr. Biden.

The effort has more successfully attracted actual users’ attention and become more difficult for researchers to identify than previous Chinese efforts to influence public opinion in the United States. Though researchers say the overall political tilt of the campaign remains unclear, it has raised the possibility that China’s government is calculating that a second Trump presidency, despite his sometimes hostile statements against the country, might be preferable to a second Biden term.

China’s activity has already raised alarms inside the American government.

In February, the Office of the Director of National Intelligence reported that China was expanding its influence campaigns to “sow doubts about U.S. leadership, undermine democracy and extend Beijing’s influence.” The report expressed concern that Beijing could use increasingly sophisticated methods to try to influence the American election “to sideline critics of China.”

Liu Pengyu, the spokesman for the Chinese Embassy in Washington, said in a statement that the presidential election was “the domestic affair of the United States” and that “China is committed to the principle of noninterference.”

“Claims about China influencing U.S. presidential elections are completely fabricated,” he added.

Ms. Thomas, who has studied China’s information operations for years, said the new effort suggested a more subtle and sophisticated approach than previous campaigns. It was the first time, she said, that she had encountered Chinese accounts posing so persuasively as Trump-supporting Americans while managing to attract genuine engagement.

“The worry has always been, what if one day they wake up and are effective?” she said. “Potentially, this could be the beginning of them waking up and being effective.”

Online disinformation experts are looking ahead to the months before the November election with growing anxiety.

Intelligence assessments show Russia using increasingly subtle influence tactics in the United States to spread its case for isolationism as its war against Ukraine continues. Mock news sites are targeting Americans with Russian propaganda.

Efforts to beat back false narratives and conspiracy theories — already a difficult task — must now also contend with waning moderation efforts at social media platforms, political pushback, fast-advancing artificial intelligence technology and broad information fatigue.

Until now, China’s efforts to advance its ideology in the West struggled to gain traction, first as it pushed its official propaganda about the superiority of its culture and economy and later as it began denigrating democracy and stoking anti-American sentiment.

In the 2022 midterm elections, the cybersecurity firm Mandiant reported that Dragonbridge, an influence campaign linked to China, tried to discourage Americans from voting while highlighting U.S. political polarization. That campaign, which experimented with fake American personas posting content in the first person, was poorly executed and largely overlooked online, researchers said.

The recent campaigns connected to China have sought to exploit the divisions already apparent in American politics, joining the divisive debate over issues such as gay rights, immigration and crime mainly from a right-wing perspective.

In February, according to the Institute for Strategic Dialogue, a Chinese-linked account on X calling itself a Western name alongside a “MAGA 2024” reference shared a video from RT, the Russian television network controlled by the Kremlin, to claim that Mr. Biden and the Central Intelligence Agency had sent a neo-Nazi gangster to fight in Ukraine. (That narrative was debunked by the investigative group Bellingcat.)

The next day the post received an enormous boost when Alex Jones, the podcaster known for spreading false claims and conspiracy theories, shared it on the platform with his 2.2 million followers.

The account with the “MAGA 2024” reference had taken steps to appear authentic, describing itself as being run by a 43-year-old Trump supporter in Los Angeles. But it used a profile photo lifted from a Danish man’s travel blog, the institute’s report on the accounts said. Although the account opened 14 years ago, its first publicly visible post was last April. In that post, the account attempted, without evidence, to link Mr. Biden to Jeffrey Epstein, the disgraced financier and registered sex offender.

At least four other similar accounts are also operating, Ms. Thomas said, all of them with ties to China. One account paid for a subscription on X, which offers perks like better promotion and a blue check mark that was, before Elon Musk bought the platform, a sign of verification conferred to users whose identities had been verified. Like the other accounts, it shared pro-Trump and anti-Biden claims, including the QAnon conspiracy theory and baseless election fraud accusations.

The posts included exhortations to “be strong ourselves, not smear China and create rumors,” awkward phrases like “how dare?” instead of “how dare you?” and signs that the user’s web browser had been set to Mandarin.

One of the accounts seemed to slip up in May when it responded to another post in Mandarin; another was posting primarily in Mandarin until last spring, when it briefly went silent before resurfacing with all-English content. The accounts denounced efforts by American lawmakers to ban the popular TikTok app, which is owned by the Chinese company ByteDance, as a form of “true authoritarianism” orchestrated by Israel and as a tool for Mr. Biden to undermine China.
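
The linguistic slip-ups described in the preceding paragraphs (Mandarin replies, browser locales set to Mandarin, calques like “how dare?”) hint at how simple screening heuristics can surface candidate accounts for manual review. The sketch below is purely illustrative and assumes a made-up post schema and toy account names; it is not the method used by the researchers cited here or by any platform.

    import unicodedata

    def contains_cjk(text):
        # True if the text contains any CJK unified ideograph (characters used in written Chinese).
        return any("CJK" in unicodedata.name(ch, "") for ch in text)

    def flag_language_slips(posts):
        # Yield posts whose text contains Chinese ideographs even though the account
        # presents itself as English-speaking. `posts` is an assumed iterable of dicts
        # with 'author', 'claimed_language' and 'text' keys (hypothetical schema).
        for post in posts:
            if post.get("claimed_language") == "en" and contains_cjk(post.get("text", "")):
                yield post

    # Toy usage: only the second post would be flagged for human review.
    sample = [
        {"author": "toy_account_1", "claimed_language": "en", "text": "how dare?"},
        {"author": "toy_account_1", "claimed_language": "en", "text": "不要抹黑中国"},
    ]
    for hit in flag_language_slips(sample):
        print("review manually:", hit["author"], "->", hit["text"])

A flag of this kind is only a triage signal: legitimate bilingual users would trigger it as well, which is why the researchers quoted above combine language cues with profile photos, posting history and network ties.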

The accounts sometimes amplified or repeated content from the Chinese influence campaign Spamouflage, which was first identified in 2019 and linked to an arm of the Ministry of Public Security. It once posted content almost exclusively in Chinese to attack the Communist Party’s critics and protesters in Hong Kong.

It has pivoted in recent years to focus on the United States, portraying the country as overwhelmed by chaos. By 2020, it was posting in English and criticizing American foreign policy, as well as domestic issues in the United States, including its response to Covid-19 and natural disasters, like the wildfires in Hawaii last year.

China, which has denied interfering in other countries’ internal affairs, now appears to be building a network of accounts across many platforms to put to use in November. “This is reminiscent of Russia’s style of operations, but the difference is more the intensity of this operation,” said Margot Fulde-Hardy, a former analyst at Viginum, the government agency in France that combats disinformation online.

In the past, many Spamouflage accounts followed one another, posted sloppily in several languages and simultaneously blitzed social media users with identical messages across multiple platforms.
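
That older copy-paste behaviour is the easiest to approximate computationally: identical or near-identical messages posted by many distinct accounts can be grouped by normalized text. The snippet below is a minimal sketch under assumed inputs (the tuple format and the min_accounts cutoff are illustrative choices, not how Meta or the cited researchers actually attribute Spamouflage activity).

    import re
    from collections import defaultdict

    def normalize(text):
        # Lowercase, drop URLs and punctuation, and collapse whitespace so
        # near-identical copies of the same message map to the same key.
        text = re.sub(r"https?://\S+", "", text.lower())
        text = re.sub(r"[^\w\s]", "", text)
        return re.sub(r"\s+", " ", text).strip()

    def coordinated_groups(posts, min_accounts=5):
        # `posts` is an assumed iterable of (account_id, platform, text) tuples.
        # Returns each message pushed by at least `min_accounts` distinct accounts.
        groups = defaultdict(set)
        for account_id, platform, text in posts:
            groups[normalize(text)].add((account_id, platform))
        return {msg: accounts for msg, accounts in groups.items() if len(accounts) >= min_accounts}

Grouping by exact normalized text misses paraphrased or human-written variants, which is precisely why the newer, more organic-looking accounts described next are harder to detect.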

The newer accounts are trickier to find because they are trying to build an organic following and appear to be controlled by humans rather than automated bots. One of the accounts on X also had linked profiles on Instagram and Threads, creating an appearance of authenticity.

Meta, which owns Instagram and Threads, last year removed thousands of inauthentic accounts linked to Spamouflage on Facebook and others on Instagram. It called one network it had removed “the largest known cross-platform influence operation to date.” Hundreds of related accounts remained on other platforms, including TikTok, X, LiveJournal and Blogspot, Meta said.

The Foundation for Defense of Democracies documented a new coordinated group of Chinese accounts linked to a Facebook page with 3,000 followers called the War of Somethings. The report underscores the persistence of China’s efforts despite Meta’s repeated efforts to take down Spamouflage accounts.

“What we’re seeing,” said Max Lesser, a senior analyst with the foundation, “is the campaign just continues, undeterred.”

Tiffany Hsu reports on misinformation and disinformation and its origins, movement and consequences. She has been a journalist for more than two decades.

Steven Lee Myers covers misinformation for The Times. He has worked in Washington, Moscow, Baghdad and Beijing, where he contributed to the articles that won the Pulitzer Prize for public service in 2021. He is also the author of “The New Tsar: The Rise and Reign of Vladimir Putin.”


Related Articles and Sources

  1. Fake news, disinformation and misinformation in social media: a review

    Social media outperformed television as the major news source for young people of the UK and the USA. Moreover, as it is easier to generate and disseminate news online than with traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, it has been reported in a previous study about the spread of online news on Twitter ...

  2. How misinformation spreads on social media—And what to do about it

    As widespread as misinformation online is, opportunities to glimpse it in action are fairly rare. Yet shortly after the recent attack in Toronto, a journalist unwittingly carried out a kind of ...

  3. Review essay: fake news, and online misinformation and disinformation

    Social media is commonly assumed to be culpable for this growth, with 'the news' and current affairs deemed the epicentre of the battle for information credibility. This review begins by explaining the key definitions and discussions of the subject of fake news, and online misinformation and disinformation with the aid of each book in turn.

  4. Tackling misinformation: What researchers could do with social media

    The hypotheses will be tested using a unique dataset that would include user consumption and production habits, as well as content exposure, and time spent on several social media platforms coupled with other information like an online survey and in-depth interviews of users who have been exposed to misinformation across social media platforms.

  5. Biases Make People Vulnerable to Misinformation Spread by Social Media

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research. Social media are among the primary sources of news in the U.S. and ...

  6. How Social Media Amplifies Misinformation More Than Information

    By Steven Lee Myers. Oct. 13, 2022. It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure ...

  7. The causes and consequences of COVID-19 ...

    We investigate the relationship between media consumption, misinformation, and important attitudes and behaviours during the coronavirus disease 2019 (COVID-19) pandemic. We find that comparatively more misinformation circulates on Twitter, while news media tends to reinforce public health recommendations like social distancing. We find that exposure to social media is associated with ...

  8. A value-driven approach to addressing misinformation in social media

    Misinformation in social media is an actual and contested policy problem given its outreach and the variety of stakeholders involved. In particular, increased social media use makes the spread of ...

  9. Propaganda, misinformation, and histories of media techniques

    This essay argues that the recent scholarship on misinformation and fake news suffers from a lack of historical contextualization. The fact that misinformation scholarship has, by and large, failed to engage with the history of propaganda and with how propaganda has been studied by media and communication researchers is an empirical detriment to it, and

  10. Study reveals key reason why fake news spreads on social media

    USC study reveals the key reason why fake news spreads on social media. The USC-led study of more than 2,400 Facebook users suggests that platforms — more than individual users — have a larger role to play in stopping the spread of misinformation online. January 17, 2023 By Pamela Madrid.

  11. Controlling the spread of misinformation

    Misinformation on COVID-19 is so pervasive that even some patients dying from the disease still say it's a hoax. In March 2020, nearly 30% of U.S. adults believed the Chinese government created the coronavirus as a bioweapon (Social Science & Medicine, Vol. 263, 2020) and in June, a quarter believed the outbreak was intentionally planned by people in power (Pew Research Center, 2020).

  12. Misinformation: susceptibility, spread, and interventions to immunize

    Importantly, several studies have now shown that fake news typically represents a small part of people's overall media diet and that the spread of misinformation on social media is highly skewed ...

  13. PDF Misinformation in Social Media: Definition, Manipulation, and Detection

    The widespread dissemination of misinformation in social media has recently received a lot of attention in academia. While the problem of misinformation in social media has been intensively studied, there are seemingly different definitions for the same problem, and inconsistent results in different studies. In this survey, we aim to consolidate the

  14. How Social Media Rewards Misinformation

    How Social Media Rewards Misinformation. A majority of false stories are spread by a small number of frequent users, suggests a new study co-authored by Yale SOM's Gizem Ceylan. But they can be taught to change their ways. In the early months of the COVID-19 pandemic, posts and videos promoting natural remedies for the virus—everything from ...

  15. What can be done to reduce the spread of fake news? MIT Sloan research

    The study arrives at a time when the sharing of misinformation on social media—including both patently false political "fake news" and misleading hyperpartisan content—has become a key focus of public debate around the world. The topic gained prominence in 2016 in the aftermath of the U.S. presidential election and the referendum on ...

  16. COVID‐19 and misinformation: Is censorship of social media a remedy to

    Main social media platforms have also actively fought against false information by filtering out or flagging content considered as misinformation. In this essay, I will discuss the censorship on social media platforms related to COVID‐19 and the problems it raises along with an alternative approach to counteract the spread of medical and ...

  17. What is fake news and misinformation?

    Misinformation is false information that is spread by people who think it's true. This is different from 'fake news' and disinformation. Fake news refers to websites that share mis or disinformation. This might be via satire sites like The Onion, but it also refers to those pretending to be trustworthy news sources.

  18. The Fake Fake-News Problem and the Truth About Misinformation

    The fear of misinformation hinges on assumptions about human suggestibility. "Misinformation, conspiracy theories, and other dangerous ideas, latch on to the brain and insert themselves deep ...

  19. In the face of online misinformation, these teens are learning how to

    He's also active on social media, like the vast majority of teens, but he's worried about the misinformation he sees on his feed. "Anyone can make a post and spread it to millions of people," said ...

  20. Who knowingly shares false political information online?

    Normatively, it is encouraging that only a small minority of our respondents indicated that they share false information about politics on social media. However, the fact that 14% of the U.S. adult population claims to purposely spread political misinformation online is nonetheless troubling.

  21. Fake, misleading visuals of Iran's attack on Israel spread on X : NPR

    Misinformation can gain a lot of traction when people have questions they need urgent answers to during time-bound events like election days and during conflict, said Isabelle Frances-Wright, ISD ...

  22. Opinion: Social media is revolutionizing migration into the US

    A flood of online misinformation is generating sudden and unprecedented shifts in migration patterns. The Hill. Opinion: Social media is revolutionizing migration into the US — and spreading ...

  23. TikTok Spreads Birth Control Falsehood. Doctors Should Fight Back

    But social media platforms, in particular TikTok and Instagram, are allowing false information to proliferate in new and dangerous ways. ... The twin forces of birth control-related misinformation ...

  24. NPR suspends Uri Berliner, editor who accused the network of liberal

    Misinformation spreads on social media. ... In the essay, Berliner wrote that NPR has always had a liberal bent, but that for most of his 25-year tenure it had retained an open-minded, curious ...

  25. Anonymous accounts use right-wing channels to spread misinformation

    Social media accounts who shield their real identities have come to dominate right-wing political discussion online, even as they spread false information. When a user who uses a pseudonym on the social platform X made a claim against a government website, public figures including Trump, immediately started raising alarm.

  26. Press Freedom Essay in 500+ Words in English for Students

    This is more common in today's world, where fake news and misinformation have taken a significant leap on social media and digital platforms. As of 2022, India's literacy rate is 76.32%. There is still a large part of the population who lack critical thinking skills and media literacy.

  27. China's Advancing Efforts to Influence the U.S. Election Raise Alarms

    China has adopted some of the same misinformation tactics that Russia used ahead of the 2016 election, researchers and government officials say. By Tiffany Hsu and Steven Lee Myers Covert Chinese ...