
Sundar Pichai: Established Google AI Principles committing not to develop AI for weapons or surveillance

In June 2018, following the Project Maven controversy, Sundar Pichai published Google's AI Principles, committing the company to develop AI that is socially beneficial, avoids unfair bias, is built and tested for safety, is accountable to people, incorporates privacy design principles, upholds scientific excellence, and is made available for uses that accord with these principles. Notably, Google pledged not to develop AI for weapons or surveillance that violates international norms.

Scoring Impact

Topic               Direction   Relevance   Contribution
AI Oversight        + toward    secondary   +0.50
AI Safety           + toward    primary     +1.00
Human-Centered AI   + toward    secondary   +0.50

Overall incident score = +0.443

Score = avg(topic contributions) × significance (high, ×1.5) × confidence (0.59) × agency (reactive, ×0.75)
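A minimal sketch of the arithmetic behind this formula, assuming the multipliers shown above (high significance ×1.5, confidence 0.59, reactive agency ×0.75); the variable names are illustrative and not part of the source:

```python
# Reproduce the overall incident score from the topic table above.
# Assumed multipliers, taken from the displayed formula:
SIGNIFICANCE = 1.5   # significance: "high"
CONFIDENCE = 0.59
AGENCY = 0.75        # agency: "reactive"

topic_contributions = {
    "AI Oversight": 0.50,
    "AI Safety": 1.00,
    "Human-Centered AI": 0.50,
}

# Average the per-topic contributions, then apply the three multipliers.
avg_contribution = sum(topic_contributions.values()) / len(topic_contributions)
score = avg_contribution * SIGNIFICANCE * CONFIDENCE * AGENCY

print(f"Overall incident score = {score:.4f}")  # ≈ 0.4425, reported above as +0.443
```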

Evidence (1 signal)

Confirms: Policy Change, Jun 7, 2018 (verified)

Pichai published the Google AI Principles blog post committing to responsible AI development

Sundar Pichai published 'AI at Google: our principles' outlining seven objectives for AI applications and areas where Google will not design or deploy AI, including weapons, surveillance violating international norms, and technologies that cause overall harm.
