Fei-Fei Li led systematic removal of biased and offensive categories from ImageNet
After researchers including Kate Crawford documented pervasive bias in ImageNet's person categories -- including racist slurs, misogynist labels, and ableist classifications -- Fei-Fei Li's team systematically identified non-visual concepts and offensive categories in the dataset. They proposed and executed the removal of 1,593 categories (54% of the 2,932 person categories), addressing both bias and privacy concerns in the foundational AI dataset. This was a significant acknowledgment that even groundbreaking datasets require ongoing ethical review and correction.
Scoring Impact
| Topic | Direction | Relevance | Contribution |
|---|---|---|---|
| Algorithmic Fairness | +toward | primary | +1.00 |
| Research Integrity | +toward | secondary | +0.50 |
| User Privacy | +toward | secondary | +0.50 |
| Overall incident score | | | +0.443 |
Score = avg(topic contributions) × significance (high, ×1.5) × confidence (0.59) × agency (reactive, ×0.75)
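The formula above can be checked with a short sketch. The function name and structure below are illustrative, not from any published scoring codebase; the multiplier values are the ones stated in this report.

```python
def incident_score(contributions, significance, confidence, agency):
    """avg(topic contributions) x significance x confidence x agency."""
    avg = sum(contributions) / len(contributions)
    return avg * significance * confidence * agency

# Contributions from the table: +1.00 (primary), +0.50 and +0.50 (secondary).
score = incident_score(
    contributions=[1.00, 0.50, 0.50],
    significance=1.5,   # high
    confidence=0.59,
    agency=0.75,        # reactive
)
# avg = 2.00 / 3 ≈ 0.667; 0.667 × 1.5 × 0.59 × 0.75 = 0.4425,
# which rounds to the stated overall score of +0.443.
```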
Evidence (1 signal)
ImageNet team removed 1,593 offensive person categories after bias audit
Following documentation by researchers like Kate Crawford of pervasive bias in ImageNet's person categories, Fei-Fei Li's team at Stanford published a paper systematically identifying non-visual concepts and offensive categories including racial and sexual characterizations. They removed 1,593 categories -- 54% of the 2,932 person categories -- from ImageNet to address bias and privacy concerns.