Technology: Support = Bad

Misinformation

Supporting means...

Amplifies or spreads misinformation; recommender systems promote false content; profits from disinformation; resists fact-checking

Opposing means...

Actively combats misinformation; invests in fact-checking; transparent about content moderation; reduces viral spread of false info

Recent Incidents

In early 2025, Musk launched a social media campaign against the UK's Labour government, publishing over 100 posts (with more than 100 million views) falsely accusing PM Starmer of letting grooming gangs avoid prosecution in exchange for votes. His posts targeting the UK increased 5.6-fold in January 2025. He explicitly called for votes for Germany's far-right AfD party ahead of its February 2025 election. In August 2024, during riots in the UK, he posted 'Civil war is inevitable', prompting a government rebuke. The EU's EDMO documented how his 'powerful disinformation machine works' across European politics.

In May 2025, expert witness testimony submitted by Anthropic's lawyers in Universal Music Group et al. v. Anthropic was found to contain erroneous citations generated by Claude AI. The inaccuracies included incorrect article titles and author names that were not caught during manual review. Anthropic acknowledged the error in a court filing, characterized it as an honest mistake, and apologized to the court. The incident highlighted the risk of AI hallucinations even when the tools are used by sophisticated parties in high-stakes legal contexts.

On January 7, 2025, Meta announced it would end its third-party fact-checking program on Facebook and Instagram, replacing it with a community notes system similar to X (formerly Twitter). CEO Mark Zuckerberg stated fact-checkers had been 'too politically biased' and called for reducing 'censorship'. The change was announced two weeks before Trump's second inauguration.

In late 2024, YouTube rewrote its moderation policy to allow videos with up to 50% violating content to remain online (up from 25%), prioritizing 'freedom of expression' over enforcement. Moderators were instructed to leave up videos about elections, race, gender, and abortion even if half of the content violated rules against hate speech or misinformation. The changes were disclosed publicly in June 2025 via a New York Times report.

Throughout 2024, Elon Musk posted at least 87 claims about US elections that fact-checkers rated as false or misleading, amassing over 2 billion views. None received Community Notes fact-checks. He promoted the 'Great Replacement' conspiracy theory claiming Democrats were 'importing voters' (747 million views across 42 posts), spread voting machine fraud conspiracies, and shared an AI deepfake of Kamala Harris (133 million views). The Center for Countering Digital Hate estimated his political reach would have cost a campaign $24 million in ads.

From 2023 to 2024, Musk used his X account (the most-followed on the platform) to systematically amplify conspiracy theories and far-right disinformation. He endorsed the antisemitic 'great replacement' conspiracy theory, boosted anti-immigrant conspiracy theories about Haitian immigrants, amplified accounts like @EndWokeness and @libsoftiktok (which inspired bomb threats at a children's hospital), and shared election fraud conspiracies. A 2023 Science Feedback analysis found that 'super-spreader' disinformation accounts saw a 42% increase in engagement, with Musk personally interacting with their top posts.

In June 2023, YouTube reversed its policy of removing content making false claims that the 2020 US presidential election was stolen. The platform had previously removed 'tens of thousands' of such videos since December 2020. YouTube said that while 'removing this content does curb some misinformation', it 'could also have the unintended effect of curtailing political speech'. Critics argued the reversal enabled the continued spread of election denialism.

In 2023, lawyer Steven Schwartz used ChatGPT to conduct legal research for a personal injury case (Mata v. Avianca, Inc.). ChatGPT hallucinated multiple fake legal cases with convincing-looking citations and case summaries. Schwartz submitted these fabricated cases to federal court without verifying that they existed. When opposing counsel and the judge could not locate the cases, it was revealed that they were AI-generated fictions. The judge sanctioned Schwartz and his firm, and the incident became a landmark case highlighting the dangers of AI hallucinations in professional contexts.

X expanded Community Notes (formerly Birdwatch) to all users globally in late 2022, allowing contributors to add context to potentially misleading posts. The system uses an open-source algorithm to surface notes that earn consensus across users with different viewpoints. By November 2023, the program had approximately 133,000 contributors and notes received tens of millions of views daily. A May 2024 study found COVID-19 vaccine notes were accurate 97% of the time.
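The consensus requirement can be illustrated with a small matrix-factorization sketch. The code below is a minimal toy version of the bridging idea, loosely in the spirit of the published open-source approach rather than X's actual implementation: the synthetic users, notes, rating probabilities, and regularization weights are all invented for the example. Each rating is modeled as a global mean plus a user intercept, a note intercept, and a user-factor/note-factor product; the factor term absorbs viewpoint-aligned ratings, so only notes rated helpful across viewpoints end up with a high intercept.

```python
# Toy sketch of bridging-based note scoring (not X's actual implementation).
# rating ~ mu + user_bias + note_bias + user_factor * note_factor
# The note bias ("intercept") captures helpfulness not explained by the
# viewpoint factor, so only cross-viewpoint notes score high.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes, dim = 50, 10, 1

# Hypothetical data: users have a latent viewpoint in [-1, 1]; note 0 is
# genuinely helpful to everyone, the rest appeal to one side only.
viewpoint = rng.uniform(-1, 1, n_users)
ratings = []  # (user, note, rating in {0, 1})
for u in range(n_users):
    for n in range(n_notes):
        if n == 0:
            p = 0.9                        # broadly helpful note
        else:
            side = 1 if n % 2 else -1      # partisan notes
            p = 0.9 if viewpoint[u] * side > 0 else 0.1
        ratings.append((u, n, rng.binomial(1, p)))

mu = 0.0
user_b, note_b = np.zeros(n_users), np.zeros(n_notes)
user_f = rng.normal(0, 0.1, (n_users, dim))
note_f = rng.normal(0, 0.1, (n_notes, dim))
# Intercepts are regularized more heavily than factors (assumed weights),
# so a high note intercept requires broad agreement.
lr, reg_b, reg_f = 0.05, 0.15, 0.03

for _ in range(200):  # plain SGD over all ratings
    for u, n, r in ratings:
        pred = mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n]
        err = r - pred
        mu += lr * err
        user_b[u] += lr * (err - reg_b * user_b[u])
        note_b[n] += lr * (err - reg_b * note_b[n])
        uf = user_f[u].copy()
        user_f[u] += lr * (err * note_f[n] - reg_f * user_f[u])
        note_f[n] += lr * (err * uf - reg_f * note_f[n])

# Notes ranked by intercept: the cross-viewpoint note should come out on top.
for n in np.argsort(-note_b):
    print(f"note {n}: intercept {note_b[n]:+.2f}")
```

In this toy run the broadly helpful note ranks first by intercept, while the one-sided notes are explained away by the factor term, which is the property that lets such a system reward cross-viewpoint consensus.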

Following Musk's acquisition, X reinstated numerous accounts that had been banned for violating platform rules. Donald Trump's account was reinstated November 19, 2022 via a Twitter poll. On November 24, 2022, Musk announced a 'general amnesty' for suspended accounts based on a poll where 72% voted in favor. Alex Jones, banned in 2018 for abusive behavior related to Sandy Hook conspiracy theories, was reinstated December 10, 2023. Andrew Tate, banned for misogynistic content, was also reinstated. A BBC study found that a third of 1,100 reinstated accounts appeared to have violated Twitter guidelines.

Since Musk's takeover, X removed policies on crisis misinformation, COVID-19 misleading information, election outcome misinformation, and transgender protections (misgendering/deadnaming). The platform reinstated previously banned accounts including Trump's (suspended after Jan 6). Gizmodo reported the transgender protection policy became 'effectively dead' after Musk relaxed hate speech policies in November 2022.

In August 2022, Meta launched BlenderBot 3 as a public demo chatbot. The system made false statements about Facebook's data privacy practices and incorrectly claimed Donald Trump won the 2020 election. The chatbot also made statements on other sensitive political topics without factual basis. Meta faced backlash for releasing the chatbot publicly with insufficient safety testing and fact-checking mechanisms.

In January 2022, Neil Young demanded Spotify remove his music over Joe Rogan Experience episodes spreading COVID-19 vaccine misinformation. Joni Mitchell and other artists followed. Spotify added content advisories but refused to remove misinformation outright, and removed 70 episodes containing racial slurs. Despite the controversy, Spotify renewed Rogan's deal for up to $250M in February 2024, making it non-exclusive.

In May 2020, YouTube published its COVID-19 Medical Misinformation Policy banning content that contradicted WHO or local health authorities. In 2021, YouTube expanded the policy to cover all vaccines and removed the accounts of prominent anti-vaccination activists, including Joseph Mercola and Robert F. Kennedy Jr. Studies showed the policy significantly reduced the rate of misinformation videos on the platform compared to the pre-policy period.

negligent

Multiple academic studies found YouTube's recommendation algorithm directed users toward increasingly extreme content. A systematic review found that 14 of 23 studies implicated YouTube's recommender system in facilitating problematic content pathways. Research from UC Davis and work published in PNAS showed the algorithm was more likely to recommend extremist and conspiracy content to right-leaning users. Over 70% of content watched on YouTube is recommended by its proprietary, opaque algorithm. While some studies produced contradictory findings, the lack of algorithmic transparency prevented definitive conclusions.
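YouTube's recommender is proprietary, and the studies above infer its behavior from the outside. As a toy illustration of the mechanism they describe, the sketch below uses an invented catalog, made-up click probabilities, and the assumption that more extreme items attract more clicks; it only shows how a recommender optimizing observed click-through rate drifts toward whatever content gets clicked most.

```python
# Toy illustration (not YouTube's actual system): an epsilon-greedy
# recommender that maximizes observed click-through rate will converge on
# the highest-engagement items regardless of their quality.
import random

random.seed(1)
# Hypothetical catalog: (title, click probability). The more extreme items
# are assumed to draw more clicks, the dynamic the cited studies describe.
catalog = [("mainstream news recap", 0.30),
           ("opinionated commentary", 0.45),
           ("conspiracy deep-dive", 0.60)]

clicks = [0] * len(catalog)
shows = [0] * len(catalog)

for step in range(5000):
    # Mostly recommend the item with the best observed click rate;
    # occasionally explore at random.
    if step < len(catalog) or random.random() < 0.1:
        i = random.randrange(len(catalog))
    else:
        i = max(range(len(catalog)), key=lambda j: clicks[j] / max(shows[j], 1))
    shows[i] += 1
    if random.random() < catalog[i][1]:
        clicks[i] += 1

for (title, _), s in zip(catalog, shows):
    print(f"{title:24s} recommended {s} times")
```

Under these assumptions the most extreme item ends up recommended far more often than the mainstream one, purely because it is clicked more, which is why engagement-only objectives draw criticism.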

In April 2019, Minecraft creator Markus 'Notch' Persson tweeted 'Q is legit. Don't trust the media', promoting the QAnon conspiracy theory. He also responded to a meme saying 'trans women are women' with 'No, they feel like they are' and added 'you are absolutely evil if you want to encourage delusion.' After backlash, he partially walked back the trans comments but not the QAnon promotion.

negligent

In September 2018, the UN Fact-Finding Mission reported that Facebook was a 'useful instrument for those seeking to spread hate' in Myanmar and had been 'slow and ineffective' in tackling hatred against Rohingya. Hundreds of military personnel used fake accounts to flood Facebook with anti-Rohingya content. Facebook had only two Burmese-speaking content reviewers for 18 million active Myanmar users. Facebook's own 2018 human rights assessment concluded it was not doing enough to prevent incitement to violence.

negligent

In March 2018, it was revealed that political consulting firm Cambridge Analytica had harvested the personal data of up to 87 million Facebook users without consent via a personality quiz app. Facebook had known about the misuse since 2015 but took no public action. The data was used for political targeting in the 2016 US presidential election. The scandal wiped over $100 billion from Facebook's market value and led to Zuckerberg testifying before Congress.