
Fei-Fei Li led systematic removal of biased and offensive categories from ImageNet

After researchers including Kate Crawford documented pervasive bias in ImageNet's person categories -- including racist slurs, misogynist labels, and ableist classifications -- Fei-Fei Li's team systematically identified non-visual concepts and offensive categories. They proposed and executed removal of 1,593 categories (54% of the 2,932 person categories), addressing both bias and privacy concerns in the foundational AI dataset. This represented a significant acknowledgment that even groundbreaking datasets require ongoing ethical review and correction.

Scoring Impact

Topic                  Direction   Relevance   Contribution
Algorithmic Fairness   + toward    primary     +1.00
Research Integrity     + toward    secondary   +0.50
User Privacy           + toward    secondary   +0.50

Overall incident score = +0.443

Score = avg(topic contributions) × significance (high ×1.5) × confidence (0.59) × agency (reactive ×0.75)
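The arithmetic behind the +0.443 can be checked directly from the table: the three topic contributions average to 0.667, which is then scaled by the three multipliers. Below is a minimal sketch in Python, assuming the formula exactly as stated above; the function name and the multiplier tables are illustrative (only the "high" significance and "reactive" agency values appear in the text, so other levels are not shown).

```python
# Minimal sketch of the incident scoring formula as stated above.
# Only the multiplier values shown in the text are included; other
# significance/agency levels are assumptions and omitted here.

SIGNIFICANCE = {"high": 1.5}    # from "significance (high ×1.5)"
AGENCY = {"reactive": 0.75}     # from "agency (reactive ×0.75)"

def incident_score(contributions, significance, agency, confidence):
    """Average the topic contributions, then scale by the three multipliers."""
    avg = sum(contributions) / len(contributions)
    return avg * SIGNIFICANCE[significance] * confidence * AGENCY[agency]

# Contributions from the table: +1.00 (primary) and two +0.50 (secondary).
score = incident_score([1.00, 0.50, 0.50], "high", "reactive", confidence=0.59)
print(round(score, 3))  # 0.443 -- matches the overall incident score
```

Worked out by hand: avg = (1.00 + 0.50 + 0.50) / 3 ≈ 0.667, then 0.667 × 1.5 × 0.59 × 0.75 ≈ 0.443.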

Evidence (1 signal)

Confirms · Policy Change · Sep 1, 2019 · verified

ImageNet team removed 1,593 offensive person categories after bias audit

After researchers including Kate Crawford documented pervasive bias in ImageNet's person categories, Fei-Fei Li's team at Stanford published a paper systematically identifying non-visual concepts and offensive categories, including racial and sexual characterizations. They removed 1,593 categories -- 54% of the 2,932 person categories -- from ImageNet to address bias and privacy concerns.
