OpenAI: Samsung engineers leaked proprietary source code to ChatGPT while debugging, creating unintentional data exposure
In early 2023, Samsung engineers used ChatGPT to debug proprietary source code and review internal business documents by pasting them directly into the chatbot. This created an unintentional data leakage scenario because, at the time, ChatGPT retained conversations for model training unless users explicitly opted out. Samsung subsequently banned internal ChatGPT use in May 2023. The incident highlighted insufficient warnings about data retention policies and the risks of using consumer AI tools with sensitive corporate information.
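The missing control here was procedural, but the class of safeguard is easy to illustrate. Below is a minimal, hypothetical sketch of a client-side screen that flags obviously sensitive text before it leaves the corporate boundary; the patterns and function names are illustrative assumptions, not anything Samsung deployed or OpenAI provides, and simple pattern matching would not reliably catch raw source code.

```python
import re

# Hypothetical patterns a company might flag before text reaches an
# external chatbot; a real deployment would use a proper DLP scanner.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\bproprietary\b"),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
]

def check_outbound_prompt(text: str) -> list[str]:
    """Return the sensitive snippets found in `text`.

    An empty list means the prompt passed this (illustrative) screen.
    """
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        match = pattern.search(text)
        if match:
            hits.append(match.group(0))
    return hits

prompt = "Please debug this CONFIDENTIAL parser from our firmware repo..."
violations = check_outbound_prompt(prompt)
if violations:
    print("Blocked: prompt contains sensitive markers:", violations)
else:
    print("Prompt passed the screen.")
```

In practice such a screen is a speed bump rather than a guarantee; Samsung's eventual answer was an outright ban.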
Scoring Impact
| Topic | Direction | Relevance | Contribution |
|---|---|---|---|
| Consumer Protection | against | primary | -1.00 |
| Corporate Transparency | against | secondary | -0.50 |
| User Privacy | against | primary | -1.00 |
| Overall incident score | | | -0.477 |
Score = avg(topic contributions) × significance multiplier (medium = ×1) × confidence (0.57)
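Taken at face value, the formula nearly reproduces the headline number; the small residual suggests the displayed confidence is rounded. A minimal sketch of the arithmetic in Python, with all values copied from the table above and the rounding treated as an assumption:

```python
# Recompute the overall incident score from the table above.
contributions = {
    "Consumer Protection": -1.00,
    "Corporate Transparency": -0.50,
    "User Privacy": -1.00,
}
significance = 1.0   # "medium" maps to x1 per the formula above
confidence = 0.57    # displayed value; assumed rounded from ~0.5724

avg_contribution = sum(contributions.values()) / len(contributions)
score = avg_contribution * significance * confidence
print(f"avg = {avg_contribution:.3f}, score = {score:.3f}")
# Prints: avg = -0.833, score = -0.475 (vs. the reported -0.477;
# the gap closes if confidence is ~0.5724 before display rounding.)
```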
Evidence (1 signal)
Samsung banned ChatGPT after engineers leaked proprietary code; highlighted enterprise data retention risks
Samsung engineers used ChatGPT to debug proprietary source code and review internal documents, inadvertently leaking sensitive corporate data because ChatGPT retained conversations for training unless users opted out. Samsung subsequently banned ChatGPT use internally. The incident exposed inadequate warnings about data retention in enterprise contexts.