OpenAI: ChatGPT fabricated non-existent legal cases with fake citations that were presented in federal court in Mata v. Avianca
Lawyer Steven Schwartz used ChatGPT to conduct legal research for a personal injury case, Mata v. Avianca, Inc. ChatGPT hallucinated multiple non-existent legal cases, complete with convincing citations and case summaries, and Schwartz submitted them to federal court without verifying that they existed. When opposing counsel and the judge could not locate the cases, they were exposed as AI-generated fabrications. The judge sanctioned Schwartz and his firm, and the incident became a landmark example of the dangers of AI hallucinations in professional contexts.
Scoring Impact
| Topic | Direction | Relevance | Contribution |
|---|---|---|---|
| AI Safety | against | primary | -1.00 |
| Human-Centered AI | against | secondary | -0.50 |
| Misinformation | toward | primary | -1.00 |
| Overall incident score | | | -0.737 |
Score = avg(topic contributions) × significance (high ×1.5) × confidence (0.59)
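A minimal sketch of that computation, assuming the multipliers shown above (a ×1.5 weight for high-significance incidents and a 0.59 confidence factor); the function name and the weights for the other significance levels are hypothetical:

```python
# Hypothetical reconstruction of the scoring formula; only the "high" ×1.5
# multiplier is given above, the other weights are assumptions.
SIGNIFICANCE_WEIGHTS = {"low": 0.5, "medium": 1.0, "high": 1.5}

def incident_score(contributions: list[float], significance: str, confidence: float) -> float:
    """Average the per-topic contributions, then scale by significance and confidence."""
    avg = sum(contributions) / len(contributions)
    return avg * SIGNIFICANCE_WEIGHTS[significance] * confidence

# Topic contributions from the table above:
# AI Safety -1.00, Human-Centered AI -0.50, Misinformation -1.00
score = incident_score([-1.00, -0.50, -1.00], significance="high", confidence=0.59)
print(round(score, 3))  # -0.737
```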
Evidence (1 signal)
Federal judge sanctioned lawyer for submitting ChatGPT-fabricated legal cases in Mata v. Avianca
Attorney Steven Schwartz used ChatGPT for legal research and submitted fabricated cases with fake citations to federal court. The judge found that the cases were AI hallucinations and sanctioned both Schwartz and his firm for failing to verify that the citations existed.