Yann LeCun: publicly dismissed AI existential risk concerns as 'preposterous' and 'complete B.S.', opposing safety-focused regulation
Throughout 2023 and 2024, Yann LeCun was one of the most vocal critics of AI existential risk narratives. He called concerns about AI existential risk 'preposterous' (June 2023) and 'complete B.S.' (October 2024), publicly disagreeing with fellow AI pioneers Geoffrey Hinton and Yoshua Bengio. He argued that the AI alignment problem has been 'ridiculously overblown' and that it is 'way too early to regulate' AI to prevent existential risk. He debated Eliezer Yudkowsky on alignment feasibility and estimated P(doom) at less than 1%.
Scoring Impact
| Topic | Direction | Relevance | Contribution |
|---|---|---|---|
| AI Oversight | against | primary | -1.00 |
| AI Safety | against | primary | -1.00 |
| Overall incident score | | | -0.966 |
Score = avg(topic contributions) × significance (high ×1.5) × confidence (0.64)
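As a sanity check on the arithmetic, here is a minimal Python sketch of that formula. The function name `incident_score` and its parameters are illustrative, not drawn from any published scoring code. Note that reproducing the -0.966 above requires a confidence of roughly 0.644, which suggests the 0.64 shown is a rounded display value.

```python
def incident_score(contributions, significance_multiplier, confidence):
    """Average the per-topic contributions, then scale by significance and confidence."""
    avg = sum(contributions) / len(contributions)
    return avg * significance_multiplier * confidence

# Both topics contribute -1.00; significance is "high" (x1.5).
# The displayed confidence of 0.64 appears rounded: 0.644 reproduces -0.966 exactly.
score = incident_score([-1.00, -1.00], significance_multiplier=1.5, confidence=0.644)
print(round(score, 3))  # -0.966
```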
Evidence (2 signals)
LeCun told TechCrunch that AI existential threat worries are 'complete B.S.'
In October 2024, LeCun stated that AI is not on the verge of becoming intelligent and characterized discussion of existential risk as 'premature', 'preposterous', and 'complete B.S.' He said the AI alignment problem has been 'ridiculously overblown.'
LeCun told Fortune that AI existential risk concerns are 'preposterous'
In June 2023, LeCun told Fortune that concerns about AI existential risk are 'preposterous', positioning himself as a leading voice among AI optimists countering what he called an 'increasingly vocal and influential AI doom narrative.'