Negligent: On May 16, 2025, a court granted conditional certification for Mobley v. Workday to proceed as a nationwide collective action under the Age Discrimination in Employment Act. Derek Mobley claimed Workday's algorithms caused him to be rejected from more than 100 jobs over seven years because of his age, race, and disabilities. Workday disclosed that '1.1 billion applications were rejected' using its software tools, and the collective could potentially include 'hundreds of millions' of members. Workday denies the claims.
Negligent: The ACLU filed a complaint in March 2025 alleging HireVue's AI hiring tools violated anti-discrimination laws by systematically disadvantaging candidates based on race, disability, and other protected characteristics.
Negligent: A BBC data analysis in December 2024 showed Palestinian news outlets saw a 77% decline in engagement after October 7, 2023, while Israeli outlets saw a 37% increase. Leaked internal documents revealed that Instagram's algorithm was adjusted within a week of October 7, lowering the moderation confidence threshold for Palestinian content from 80% to 25% and causing significantly more removals.
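The mechanics behind that leaked change are simple to illustrate: if content is acted on whenever a classifier's confidence meets a threshold, lowering the threshold sweeps in far more posts. The sketch below shows this with a single-threshold rule and synthetic scores; the function, numbers, and score distribution are illustrative assumptions, not Instagram's actual (non-public) moderation pipeline.

```python
import random

def moderation_decisions(scores, threshold):
    """Return True for items whose model confidence meets or exceeds the threshold.

    Illustrative only: a single-threshold rule standing in for a real
    moderation pipeline, which is not public.
    """
    return [score >= threshold for score in scores]

random.seed(0)
# Synthetic "policy violation" confidence scores for 10,000 posts.
scores = [random.random() for _ in range(10_000)]

flagged_at_80 = sum(moderation_decisions(scores, 0.80))
flagged_at_25 = sum(moderation_decisions(scores, 0.25))

# With uniformly distributed scores, dropping the threshold from 0.80 to 0.25
# flags roughly 75% of posts instead of roughly 20%.
print(f"flagged at 0.80: {flagged_at_80}")
print(f"flagged at 0.25: {flagged_at_25}")
```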
Under Ton-That's leadership, Clearview AI aggressively expanded its facial recognition platform to more than 3,100 law enforcement agencies across the United States, including the FBI and Department of Homeland Security. By 2024, law enforcement searches via Clearview AI had doubled to 2 million annually. The expansion included a $9.2 million ICE contract in 2025, with ICE personnel using the system globally. This occurred despite wrongful identification cases, including that of Randal Quran Reid, who spent six days in jail due to a mistaken Clearview match.
Reactive: HireVue published independent bias audits of its AI hiring tools to comply with New York City's Local Law 144, becoming one of the first companies to proactively demonstrate regulatory compliance for AI hiring systems.
Koa Health published results of its 2022-2023 ethics audit conducted by Eticas, showing a 24% improvement over the prior year. The Koa Foundations app achieved perfect ratings in the bias-reduction categories, with no disparate impact or undesired bias found. The company maintains a public Ethics Impact Assessment framework.
Ng incorporated ethics and responsible AI content into his Machine Learning curriculum on Coursera and DeepLearning.AI, covering fairness, transparency, bias, and societal impact. By embedding these topics into courses taken by millions, he helped establish responsible AI practices as a standard part of technical AI education rather than an afterthought.
Reactive: HireVue announced in January 2021 that it would discontinue its facial analysis feature in video interviews, responding to sustained criticism from civil rights organizations, researchers, and regulators about algorithmic bias.
Reactive: In June 2020, after the PULSE AI model upsampled a pixelated photo of Barack Obama into the face of a white man, LeCun argued that 'ML systems are biased when data is biased' but that 'learning algorithms themselves are not biased.' Timnit Gebru and other researchers criticized this framing as reductive, arguing that it ignored systemic issues in AI development. The exchanges became heated, and LeCun signed off of Twitter on June 28, 2020, asking 'everyone to please stop attacking each other' and specifically asking people to stop attacking Gebru.
On June 8, 2020, IBM CEO Arvind Krishna sent a letter to Congress announcing IBM would no longer offer, develop, or research facial recognition technology. IBM called for national policies to address racial justice and police reform, becoming the first major tech company to exit the facial recognition market. Krishna stated IBM 'firmly opposes' use of facial recognition for mass surveillance and racial profiling.
Negligent: Multiple academic studies found YouTube's recommendation algorithm directed users toward increasingly extreme content. A systematic review found that 14 of 23 studies implicated YouTube's recommender system in facilitating problematic content pathways. Research from UC Davis, along with a study published in PNAS, showed the algorithm was more likely to recommend extremist and conspiracy content to right-leaning users. Over 70% of content watched on YouTube is recommended by its proprietary, opaque algorithm. While some studies produced contradictory findings, the lack of algorithmic transparency prevented definitive conclusions.
Negligent: Research and complaints showed that HireVue's AI facial analysis system systematically disadvantaged deaf candidates, non-native English speakers, and people with darker skin tones in hiring assessments between 2014 and 2020.
In 2019, The Guardian reported that TikTok's moderation practices resulted in the removal of content positive toward LGBTQ+ people in countries including Turkey, such as videos of same-sex couples holding hands. In December 2019, TikTok admitted it had deliberately reduced the viral potential of videos made by LGBTQ+ users, claiming the goal was to 'reduce bullying' in comments. The Australian Strategic Policy Institute also found that content from LGBTQ+ creators was systematically suppressed. While TikTok later updated its policies, the practice demonstrated algorithmic discrimination against marginalized communities under the guise of user protection.
In November 2019, David Heinemeier Hansson (DHH) posted a viral Twitter thread exposing that the Apple Card algorithm gave him a credit limit 20x higher than his wife's, despite her having a longer credit history and a higher credit score. Apple co-founder Steve Wozniak confirmed a similar disparity. The New York State Department of Financial Services launched an investigation into Goldman Sachs and the Apple Card program as a result.
In November 2019, the Electronic Privacy Information Center (EPIC) filed an FTC complaint alleging HireVue's AI-powered video interview system used facial analysis to screen job candidates in ways that were unfair, deceptive, and biased.
Reactive: After researchers including Kate Crawford documented pervasive bias in ImageNet's person categories (including racist slurs, misogynist labels, and ableist classifications), Fei-Fei Li's team systematically identified non-visual concepts and offensive categories. They proposed and executed the removal of 1,593 categories (54% of the 2,932 person categories), addressing both bias and privacy concerns in the foundational AI dataset. This represented a significant acknowledgment that even groundbreaking datasets require ongoing ethical review and correction.
Hoan Ton-That personally pitched US Border Patrol on using Clearview AI to screen arriving migrants for 'sentiment about the USA,' proposing to scan social media for posts saying 'I hate Trump' or 'Trump is a puta' and targeting anyone with an 'affinity for far-left groups.' The proposal conflated support for Trump with American identity and would have used facial recognition to profile migrants based on political views.
Negligent ($5.0M settlement): From 2017 to 2025, Meta/Facebook's advertising algorithms discriminated against older workers and women in job ad delivery. A 2017 ProPublica/New York Times investigation found dozens of employers running recruitment ads limited to specific age groups. The EEOC ruled in September 2019 that four companies had violated federal law by excluding women and older workers from job ads. Meta settled with the ACLU and CWA in March 2019 for $5 million, agreeing to eliminate age and gender targeting in employment ads. However, a December 2022 EEOC charge filed by Real Women in Trucking (joined by the AARP Foundation in 2023) alleged that Meta's ad-delivery algorithm continued to discriminate, with ads delivered to audiences that were 99% male or 99% under age 55 even when advertisers targeted all ages and genders. The case remains pending with the EEOC as of 2025.
Gebru co-authored the influential 'Datasheets for Datasets' paper proposing that every dataset used for AI training be accompanied by documentation about how data was gathered, its limitations, and how it should or should not be used. The framework became an industry standard practice adopted by major AI organizations to improve data transparency and reduce bias in AI systems.
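As an illustration of the kind of documentation the Datasheets for Datasets framework calls for, the sketch below encodes a few of its question areas (motivation, collection process, limitations, recommended and discouraged uses) as a simple data structure. The field names, dataset name, and example values are illustrative assumptions, not the paper's actual schema, which poses its questions in prose.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Minimal, illustrative subset of a 'Datasheets for Datasets' record.

    The real framework is a set of questions answered in prose; these field
    names and the example below are assumptions, not the paper's schema.
    """
    name: str
    motivation: str           # Why was the dataset created?
    collection_process: str   # How was the data gathered?
    known_limitations: list = field(default_factory=list)
    recommended_uses: list = field(default_factory=list)
    discouraged_uses: list = field(default_factory=list)

# Hypothetical example entry for a fictional dataset.
example = Datasheet(
    name="example-resume-corpus",
    motivation="Benchmark resume-screening models for fairness research.",
    collection_process="Collected from public postings, then de-identified.",
    known_limitations=["English-only", "Skews toward US tech-sector roles"],
    recommended_uses=["Bias and robustness evaluation"],
    discouraged_uses=["Automated hiring decisions without human review"],
)

print(example.discouraged_uses)
```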
Gebru co-authored the landmark Gender Shades study with Joy Buolamwini at MIT, which found that commercial facial recognition systems had error rates of over 34% for darker-skinned women compared to less than 1% for lighter-skinned men. The research led to significant industry changes, including Microsoft retiring gender classification in Azure Face API and IBM discontinuing general-purpose facial recognition.
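The methodological core of Gender Shades is disaggregated evaluation: computing error rates separately for intersectional subgroups rather than reporting one aggregate accuracy. The sketch below shows that style of analysis on toy data; the record format, group labels, and numbers are illustrative assumptions, not the study's benchmark or results.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute classification error rates disaggregated by subgroup.

    Sketch of an intersectional, disaggregated evaluation in the spirit of
    Gender Shades; the record format here is an assumption, not the study's.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical records: (subgroup, predicted gender label, true gender label)
records = [
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

# Prints a per-group error-rate gap (0.5 vs 0.0 on this toy data); the study
# reported a gap of this kind (over 34% vs under 1%) at benchmark scale.
print(error_rates_by_group(records))
```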