
Google Gemini AI vulnerable to prompt-injection attacks leaking private calendar data

Researchers demonstrated that Google's Gemini AI model could be tricked via prompt-injection attacks into leaking private details from a user's calendar. The vulnerability could allow malicious actors to extract sensitive personal information through carefully crafted prompts, highlighting the security risks of AI systems with access to private user data.

Scoring Impact

Topic          Direction  Relevance  Contribution
AI Safety      against    secondary  -0.50
User Privacy   against    primary    -1.00

Overall incident score = -0.613

Score = avg(topic contributions) × significance (high: ×1.5) × confidence (0.55)
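The scoring formula above can be sketched as follows. This is a minimal approximation assuming an unweighted average of the topic contributions and the multipliers stated in the formula (×1.5 for high significance, 0.55 confidence); the exact weighting or rounding the tracker uses to arrive at the reported -0.613 is not specified here.

```python
# Sketch of the incident-score formula:
#   score = avg(topic contributions) × significance × confidence
# Multiplier values come from the formula above; any additional
# per-topic weighting used by the tracker is not shown.

def incident_score(contributions, significance=1.5, confidence=0.55):
    """Average the per-topic contributions, then scale by
    significance and confidence multipliers."""
    avg = sum(contributions) / len(contributions)
    return avg * significance * confidence

# Topic contributions from the table: AI Safety -0.50, User Privacy -1.00
score = incident_score([-0.50, -1.00])
print(score)  # ≈ -0.619 with an unweighted average
```

With an unweighted average this yields roughly -0.619, close to but not exactly the reported -0.613, which suggests the tracker applies some weighting (e.g. by relevance) not detailed in the text.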

Evidence (1 signal)

Confirms: product_decision — Jan 27, 2026 (reported)

Researchers demonstrated Gemini AI calendar data leak via prompt-injection

Researchers were able to use prompt-injection attacks to trick Google's Gemini AI model into leaking private details about a user's calendar, demonstrating vulnerabilities in AI systems with access to sensitive personal data.
