Misinformation and Disinformation in News: Definitions and Detection

The distinction between misinformation and disinformation is not semantic hairsplitting — it determines legal liability, platform enforcement policy, and how newsrooms assign editorial responsibility. This page covers the formal definitions adopted by academic institutions and government bodies, the structural mechanics by which false information spreads through news ecosystems, the causal conditions that allow it to persist, and the detection frameworks used by professional fact-checkers and media organizations. The scope is the U.S. news landscape, with reference to international classification standards where they inform domestic practice.


Definition and Scope

The distinction between misinformation and disinformation rests on a single variable: intent. Misinformation refers to false or inaccurate information shared without the intent to deceive — the person spreading it believes it to be true. Disinformation refers to false information deliberately created or disseminated to deceive, manipulate, or mislead a target audience.

The First Draft coalition, a journalism support organization that operated from 2015 to 2022 and was affiliated with Harvard Kennedy School's Shorenstein Center, formalized a third category: malinformation — information that is factually accurate but shared with the intent to cause harm, such as private information disclosed to damage a person's reputation. This three-part taxonomy is used by the European Union's High Level Expert Group on Fake News and Online Disinformation and the Reuters Institute for the Study of Journalism at the University of Oxford.
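The three-part taxonomy can be sketched as a small decision function over two variables: whether the content is false, and what the sharer intends. The names `InfoType` and `classify` below are illustrative, not part of any standard, and the sketch assumes intent is known, which real fact-checkers must infer from evidence.

```python
from enum import Enum

class InfoType(Enum):
    MISINFORMATION = "misinformation"   # false, shared without deceptive intent
    DISINFORMATION = "disinformation"   # false, deliberately deceptive
    MALINFORMATION = "malinformation"   # accurate, deployed to cause harm
    ACCURATE = "accurate"               # accurate, no harmful intent

def classify(is_false: bool, intends_deception: bool, intends_harm: bool) -> InfoType:
    """Classify content under the First Draft three-part taxonomy."""
    if is_false:
        # The single distinguishing variable between the first two
        # categories is intent to deceive.
        return InfoType.DISINFORMATION if intends_deception else InfoType.MISINFORMATION
    return InfoType.MALINFORMATION if intends_harm else InfoType.ACCURATE

# A rumor shared in good faith: false, but no deceptive intent.
print(classify(is_false=True, intends_deception=False, intends_harm=False).value)
# A leaked private document: accurate, but deployed to harm.
print(classify(is_false=False, intends_deception=False, intends_harm=True).value)
```

Note that the classifier's hardest input in practice is `intends_deception`, which, as the Classification Boundaries section discusses, is rarely self-disclosed.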

The scope of the problem in news contexts extends beyond false articles. It encompasses false headlines attached to accurate articles, authentic images paired with incorrect captions, manipulated video, selectively edited quotations, fabricated statistics attributed to real institutions, and synthetic media generated by AI systems. The Reuters Institute Digital News Report 2023 found that 56 percent of respondents across 46 countries reported concern about their ability to distinguish real from fabricated news online.

Regulatory scope in the United States remains limited by First Amendment protections, distinguishing U.S. policy from the EU's Digital Services Act (Regulation (EU) 2022/2065), which imposes due-diligence obligations on large online platforms regarding the spread of disinformation.


Core Mechanics or Structure

False information moves through news ecosystems along four identifiable structural pathways:

  1. Origin — A false claim is created or emerges from a misunderstanding, fabrication, or deliberate campaign. Origins include state-sponsored influence operations, partisan websites designed to mimic legitimate news outlets, social media posts by anonymous accounts, and honest errors by credentialed journalists.

  2. Amplification — Platform algorithms, particularly those optimizing for engagement, amplify content that generates strong emotional responses. A 2018 study published in Science by Vosoughi, Roy, and Aral found that false news stories on Twitter reached 1,500 people approximately six times faster than true stories.

  3. Legitimization — Repeated exposure creates the "illusory truth effect," a cognitive phenomenon documented in psychological literature (Hasher, Goldstein, and Toppino, 1977) wherein repetition increases perceived accuracy regardless of factual basis.

  4. Embedding — Once a false claim achieves sufficient circulation, it becomes embedded in secondary sources, reference sites, and social memory, making correction operationally difficult even when debunking is published.
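The four pathways above are sequential: a claim must typically pass through each stage before correction becomes operationally difficult. A minimal sketch of that progression (the names `PATHWAYS` and `ClaimTrace` are illustrative, not from any established model):

```python
# The four structural pathways, in the order described above.
PATHWAYS = ("origin", "amplification", "legitimization", "embedding")

class ClaimTrace:
    """Track how far a false claim has progressed through the pipeline."""

    def __init__(self) -> None:
        self.stage = -1  # not yet originated

    def advance(self) -> str:
        """Move the claim to the next pathway and return its name."""
        if self.stage < len(PATHWAYS) - 1:
            self.stage += 1
        return PATHWAYS[self.stage]

    @property
    def embedded(self) -> bool:
        # Embedding is the stage at which debunking, even when published,
        # no longer reliably dislodges the claim.
        return self.stage == len(PATHWAYS) - 1

trace = ClaimTrace()
while not trace.embedded:
    print(trace.advance())
```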

Within newsrooms, structural vulnerabilities include competitive pressure to publish before verification is complete, reliance on wire feeds without independent confirmation, and insufficient sourcing discipline. The news sourcing standards governing attributed quotation and document verification are one institutional defense against misinformation entering the editorial pipeline.


Causal Relationships or Drivers

False information thrives under specific structural conditions, not simply because of individual gullibility. Identified drivers include:

Economic incentive: Engagement-based advertising rewards content that provokes strong emotional responses regardless of accuracy, and clickbait operations monetize fabrication directly.

Algorithmic amplification: Ranking systems that optimize for engagement preferentially surface emotionally charged claims, accelerating spread before verification can occur.

Competitive publishing pressure: The race to publish first pushes errors into the record before verification is complete, and subsequent corrections reach only a fraction of the original audience.

Repetition effects: Under the illusory truth effect, circulation itself increases perceived accuracy, so each additional exposure makes a false claim harder to dislodge.


Classification Boundaries

The misinformation/disinformation boundary is clear in theory but contested in practice. Determining intent requires evidence, and intent is rarely self-disclosed. Fact-checkers and researchers therefore infer intent from observable signals, including coordinated posting behavior across accounts, continued repetition of a claim after public correction, and identifiable financial or political motive.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) maintains resources distinguishing influence operations (coordinated disinformation) from organic misinformation, particularly in the context of election security.

Satire presents a formal classification challenge. Satirical content that is mistaken for factual reporting and recirculated without satirical context crosses into misinformation, even though the original publication carried no deceptive intent. The journalism ethics standards of professional organizations address labeling obligations.


Tradeoffs and Tensions

Several structural tensions define where misinformation and disinformation policy becomes contested:

Speed versus accuracy: Competitive publishing pressure creates conditions where errors enter the record before verification is complete. The breaking news coverage environment concentrates this risk. Corrections issued after initial publication reach a fraction of the original audience.

Platform moderation versus press freedom: Algorithmic or human suppression of false content raises First Amendment concerns when applied to journalism. The legal and normative boundaries are the subject of active litigation and legislative debate in the United States. Freedom of the press protections operate differently for distribution platforms than for publishers.

Transparency versus source protection: Anonymous sources in journalism may be the only way to obtain accurate information on matters of public interest, but anonymous sourcing is also a vector through which misinformation can enter credentialed reporting.

Labeling versus amplification: Research by Pennycook and colleagues (Management Science, 2020) documented the "implied truth effect," wherein labeling some content as false implicitly suggests that unlabeled content has been verified, a counterproductive outcome for unlabeled misinformation.


Common Misconceptions

Misconception: Misinformation is primarily a social media phenomenon.
Correction: False information has circulated through print, broadcast, and wire services throughout the history of journalism. Social media accelerates spread but did not originate the problem. The news-literacy field documents false reporting in mainstream outlets across every decade of the 20th century.

Misconception: Fact-checking eliminates the effect of false claims.
Correction: Debunking research consistently shows that corrections reduce but do not eliminate false belief. The "backfire effect" — the idea that corrections strengthen false beliefs — has limited replication support in recent studies, but corrections do show diminishing returns as time passes after initial exposure.

Misconception: Disinformation is always state-sponsored.
Correction: Domestic commercial operations ("clickbait farms") produce disinformation for advertising revenue without any state coordination, as documented during the 2016 U.S. election cycle by the Oxford Internet Institute's Computational Propaganda Project.

Misconception: Images and video cannot be misinformation.
Correction: Authentic images used in false context — a photograph from one event attributed to another — are one of the most common forms of visual misinformation documented by organizations including Snopes and AFP Fact Check.


Checklist or Steps

Standard verification sequence used by professional fact-checkers (as documented by the International Fact-Checking Network's Code of Principles):

  1. Consult fact-checking in news registries to determine whether the claim has been previously evaluated.
  2. Trace the claim to its original source rather than relying on secondhand accounts.
  3. Verify against primary sources: official records, named witnesses, or original documents.
  4. For images and video, run reverse searches to check whether the material predates the event it claims to depict.
  5. Document the evidence supporting the verdict so the assessment can be independently reviewed.
  6. Issue corrections promptly if new evidence alters the assessment, per the corrections and retractions standards applicable to the publication.
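Two bookend steps of the sequence, consulting a registry of prior evaluations and issuing a transparent correction when evidence changes, can be sketched as follows. The registry contents, `claim-123`, and both function names are hypothetical; real fact-checkers query databases maintained by IFCN signatory organizations.

```python
# Hypothetical registry of previously fact-checked claims.
REGISTRY = {
    "claim-123": {"verdict": "false", "checked": "2023-05-01"},
}

def lookup(claim_id: str):
    """First step: consult the registry before starting fresh verification.

    Returns the prior assessment if one exists, else None (meaning the
    claim needs original verification).
    """
    return REGISTRY.get(claim_id)

def issue_correction(record: dict, new_verdict: str) -> dict:
    """Final step: update a record promptly when evidence changes.

    The superseded verdict is preserved rather than silently overwritten,
    mirroring corrections-and-retractions transparency norms.
    """
    return {**record,
            "verdict": new_verdict,
            "superseded_verdict": record["verdict"]}

prior = lookup("claim-123")
print(prior)                                  # prior assessment found
print(issue_correction(prior, "mixture"))     # verdict revised, history kept
```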

Reference Table or Matrix

| Term | Definition | Intent to Deceive | Example |
| --- | --- | --- | --- |
| Misinformation | False information shared without deceptive intent | No | Satire shared without satirical label |
| Disinformation | False information deliberately created or spread to deceive | Yes | State-sponsored fabricated news article |
| Malinformation | Accurate information shared to cause harm | Yes (harmful deployment) | Leaked private communications to damage reputation |
| Propaganda | Biased or misleading information promoting a political cause | Intent varies | State media framing of military operations |
| Deepfake | AI-generated synthetic media depicting fabricated events | Typically yes | Fabricated video of a public figure speaking |
| Astroturfing | False appearance of grassroots support via coordinated accounts | Yes | Multiple fake social accounts echoing the same claim |
| Clickbait misinformation | Misleading headlines attached to unrelated or distorted content | Yes | Headline contradicted by article body |

The National News Authority index provides navigational access to related reference topics covering the broader U.S. news media sector, including media bias and news, news aggregators and algorithms, and AI and news production.


