On March 12, 2026, EU antitrust chief Teresa Ribera announced an investigation into Meta over WhatsApp policies that may block competitors' AI chatbots from the platform. The investigation examines whether Meta is using its dominant messaging position to prevent rival AI services from reaching WhatsApp's user base, as part of broader EU scrutiny of Big Tech's AI operations.
A Delaware judge ruled in early March 2026 that Hartford, Chubb, and more than 20 other insurers have no duty to defend Meta in thousands of lawsuits alleging its platforms harm children. The court found that harm flowing from deliberate design choices (addictive features, algorithmic amplification) does not qualify as an 'accident' under the policies. The ruling is a significant financial blow, leaving Meta to fund its own defense in the pending addiction and child-safety cases.
On February 20, 2026, the National Parent Teacher Association ended its partnership with Meta amid ongoing child-safety trials. The decision came two days after Zuckerberg testified in a landmark social media addiction trial, where he was grilled about internal documents showing millions of underage users on Meta's platforms.
Meta laid off about 1,500 employees, roughly 10% of its 15,000-person Reality Labs division, which focuses on metaverse development. In 2025, CEO Mark Zuckerberg had directed executives to cut their 2026 budgets as Meta shifts investment toward AI research.
42 state attorneys general sent a letter to Meta and other large technology companies about the rise of sycophantic and delusional outputs from generative AI software. The letter noted that generative AI has been implicated in at least six deaths in the United States, along with incidents of domestic violence, poisoning, and hospitalizations for psychosis.
In December 2025, the families of Levi Maciejewski (13, of Pennsylvania, died 2024) and Murray Downey (16, of Scotland, died 2023) sued Meta, alleging that Instagram's design enabled sextortion schemes targeting teens. The suit cites an internal 2022 audit that allegedly found Instagram's 'Accounts You May Follow' feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. Teen accounts defaulted to public until 2024, despite Meta's claim that it had made the switch to private-by-default in 2021.
In November 2025, Meta's board of directors settled a shareholder derivative lawsuit for $190 million. Shareholders alleged that board members failed to properly oversee compliance with a 2012 FTC consent decree on user privacy, and that they improperly agreed to the $5 billion FTC settlement in 2019 specifically to shield Mark Zuckerberg from personal liability. The suit also highlighted undisclosed conflicts of interest among board members, including allegations that Marc Andreessen privately advised Zuckerberg during board negotiations over a stock restructuring.
After a six-week bench trial, Chief Judge James Boasberg ruled that the FTC failed to prove Meta unlawfully monopolized 'personal social networking.' The court found that TikTok and YouTube are legitimate competitors, noting that Americans now spend only 17% of their time on Facebook viewing friends' content. The ruling was the most decisive government loss in any major Big Tech antitrust case to date; the FTC appealed in January 2026.
Reuters obtained internal Meta documents showing the company displayed approximately 15 billion 'higher risk' scam advertisements per day and projected roughly $16 billion annually, about 10% of revenue, from scam and banned-goods ads. The documents revealed that Meta set 'revenue guardrails' capping the cost of fraud enforcement at 0.15% of revenue (~$135M), that executives proposed focusing fraud controls only on countries facing imminent regulatory action, and that Meta's platforms were estimated to be involved in one in three successful U.S. frauds. Meta also developed a 'playbook' to manage regulatory perception of its scam ads.
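The two percentages in that entry appear to refer to different revenue bases, which is easy to misread. A minimal arithmetic sketch in Python, assuming the 10% figure is measured against a full-year revenue base and the ~$135M guardrail against a roughly half-year base (the reporting does not state either window explicitly, so both bases below are inferred, not sourced):

    # Back-of-the-envelope check on the reported figures.
    # Assumption: 10% refers to annual revenue; the $135M cap to ~half a year.
    scam_ad_revenue = 16e9                           # reported annual scam-ad revenue
    implied_annual_revenue = scam_ad_revenue / 0.10  # base implied by the 10% figure
    guardrail_cap = 135e6                            # reported enforcement-cost cap
    implied_cap_base = guardrail_cap / 0.0015        # base implied by the 0.15% cap
    print(f"Implied annual revenue: ${implied_annual_revenue / 1e9:.0f}B")    # $160B
    print(f"Revenue base implied by the $135M cap: ${implied_cap_base / 1e9:.0f}B")  # $90B

On those assumptions the numbers reconcile: the $90B base implied by the cap is close to half the $160B annual base. If both figures shared the same annual base, the 0.15% guardrail would instead come to about $240M.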
Content moderators from nine countries formed the Global Trade Union Alliance of Content Moderators in Nairobi, Kenya, to fight for living wages, safe working conditions, and union representation. The alliance is calling on tech companies including TikTok, Meta, Alphabet, and OpenAI to adopt mental health protections throughout their supply chains; over 80% of workers surveyed said their employer needs to do more to support their mental health. An accompanying report, 'The People Behind the Screens,' documents traumatic, high-pressure conditions, including PTSD, depression, burnout, and suicidality among moderation workers, who describe pressure to review thousands of horrific videos daily, including beheadings, child abuse, and torture.
A joint investigation by the Guardian and the Bureau of Investigative Journalism revealed that Meta quietly relocated content moderation from Kenya to Ghana after facing lawsuits. Approximately 150 moderators hired through Teleperformance earned base wages of about £64 per month (below local living costs), were exposed to extreme content including beheadings, were housed two to a room, were forbidden from telling their families what they did, and were denied adequate mental health care. One moderator's contract was terminated after a suicide attempt; the worker received only about $170 in severance. Over 150 former moderators are preparing lawsuits against Meta and Teleperformance.
The European Commission issued its first-ever Digital Markets Act fine, a €200 million penalty (about $227 million) against Meta, finding its 'consent or pay' model violated DMA obligations to offer consumers a version of the service that uses less personal data. Meta gave EU users of Facebook and Instagram only a binary choice: consent to full data combination for personalized ads, or pay a subscription. Internal documents revealed the model 'was never intended to comply' with the DMA, with Meta's own estimates predicting below 1% subscription uptake. The violation period ran from March to November 2024.
Meta announced a major overhaul of its DEI initiatives in January 2025: it eliminated its dedicated DEI team, ended equity and inclusion programs, dropped representation goals, ceased diverse-supplier sourcing requirements, and ended the 'diverse slate' approach in hiring. VP of HR Janelle Gale cited the shifting legal and policy landscape around DEI in the United States.
On January 7, 2025, Meta announced it would end its third-party fact-checking program on Facebook and Instagram, replacing it with a community notes system similar to that of X (formerly Twitter). CEO Mark Zuckerberg said fact-checkers had been 'too politically biased' and called for reducing 'censorship'. The change was announced two weeks before Trump's second inauguration.
On January 7, 2025, as part of broader content moderation changes, Meta updated its Community Standards to expressly permit users to describe LGBTQ+ people as mentally ill or abnormal and to call for their exclusion from professions, public spaces, and society based on sexual orientation and gender identity.
On January 6, 2025, Meta announced the appointment of Dana White, CEO of UFC and a prominent Trump supporter who played a key role in Trump's 2024 reelection campaign, to its board of directors. The appointment came amid Meta's broader outreach to the incoming Trump administration.
Meta spent a record $24.4 million on federal lobbying in 2024, a 27% increase over 2023 and the most the company has spent since it began lobbying in 2009. The effort was powered by 65 lobbyists, roughly one for every eight members of Congress. For comparison, Amazon, Apple, Google, Meta, and Microsoft combined spent nearly $69 million lobbying the federal government in 2022 alone.
Meta announced it would donate $1 million to Donald Trump's 2025 presidential inauguration fund. The donation came as Meta CEO Mark Zuckerberg sought to improve relations with the incoming administration after years of tension over content moderation decisions.
In December 2024, over 140 content moderators working for Meta through contractor Samasource Kenya were diagnosed with severe PTSD by Dr. Ian Kanyanya at Kenyatta National Hospital. Moderators reported reviewing extreme content, including child sexual abuse, torture, murder, and bestiality, with only about one minute per item, under close monitoring and the threat of termination. Workers reported self-harm, vomiting, and severe psychological symptoms. Meta outsources content moderation to contractors in developing countries, including Kenya and Ghana; a second wave of lawsuits from Ghana-based moderators emerged in April 2025.
Meta faces a lawsuit from Ferras Hamad, a Palestinian-American engineer who claims he was fired for attempting to fix bugs that suppressed Palestinian posts on Instagram. Hamad found that content by Palestinian photojournalist Motaz Azaiza had been misclassified as pornographic. The lawsuit alleges Meta deleted internal employee communications mentioning relatives killed in Gaza and investigated employees for using the Palestinian flag emoji.