
Methodology

How we research, document, and score ethical signals in the tech industry.

What We Track

GoodIndex documents incidents: discrete events or actions with ethical implications from tech companies, their executives, and related organizations. We currently track 1765 incidents across 440 entities, backed by 2010 signals and 2935 cited sources.

Incidents vs Signals

An incident is a unit of ethical reality - something that happened in the world (a donation, a policy change, a statement). A signal is a piece of evidence about an incident - a news article, a press release, a filing. Multiple signals can confirm the same incident, increasing our confidence in it.

This distinction prevents double-counting: if 5 news outlets report the same donation, that's still one incident with high confidence, not 5 separate actions.
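The deduplication described above can be sketched as a simple grouping of signals by the incident they confirm; the incident IDs and outlet names here are purely illustrative:

```python
# Sketch of the incident/signal distinction: multiple signals confirming
# the same incident raise confidence, but never create duplicate incidents.
from collections import defaultdict

# Hypothetical signal records (incident IDs and outlets are made up).
signals = [
    {"incident_id": "donation-001", "source": "Outlet A"},
    {"incident_id": "donation-001", "source": "Outlet B"},
    {"incident_id": "donation-001", "source": "Outlet C"},
    {"incident_id": "policy-007", "source": "Outlet A"},
]

by_incident = defaultdict(list)
for s in signals:
    by_incident[s["incident_id"]].append(s["source"])

assert len(by_incident) == 2                  # 4 signals, but only 2 incidents
assert len(by_incident["donation-001"]) == 3  # 3 confirming signals, 1 incident
```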

Entities

  • Companies: Tech companies of all sizes
  • People: Executives, founders, board members, investors
  • VC Firms: Venture capital and investment firms
  • Nonprofits: Industry associations, foundations, advocacy groups
  • Political Entities: Campaigns, PACs, political organizations
  • Government Bodies: Regulatory agencies, departments

Incident Types

Donations, endorsements, criticisms, policy changes, statements, resignations, lobbying, investments, partnerships, legal actions, product decisions, and labor practices.

Evidence Standards

Every signal requires at least one cited source. We rate evidence strength and weight scores accordingly.

  • Verified: Multiple independent sources, official records, SEC filings (highest weight)
  • Documented: Reliable source with clear documentation (high weight)
  • Reported: Single news report from a reputable outlet (medium weight)
  • Alleged: Unconfirmed allegation, limited sourcing (low weight)
  • Retracted: Previously reported, later retracted (excluded)

Source Quality Hierarchy

  1. Official sources: Company statements, SEC filings, press releases
  2. Primary journalism: Original reporting from established outlets
  3. Secondary reporting: Analysis and aggregation from other sources
  4. Interview transcripts: Direct quotes from recorded interviews
  5. Social media: Direct posts from verified accounts

Topic Framework

Incidents are linked to 62 ethical topics across four categories. Each incident-topic link specifies a direction (toward or against) and relevance level.

Topic Categories

  • Corporate: DEI, labor practices, governance, executive compensation
  • Technology: AI safety, privacy, antitrust, open source, content moderation
  • Policy: Immigration, healthcare, tax practices, climate action
  • Social: LGBTQ+ rights, reproductive rights, civil liberties, worker organizing

Direction

Direction describes the factual relationship between the incident and the topic:

  • toward: The entity engaged in, supported, or advanced this topic. E.g., "Company X launched DEI program" = toward DEI.
  • against: The entity opposed, criticized, or worked against this topic. E.g., "CEO criticized climate regulations" = against climate action.

Relevance

  • Primary: The incident is mainly about this topic (×1.0)
  • Secondary: The topic is relevant but not central (×0.5)
  • Contextual: Tangentially related (×0.2)

How Direction + Valence = Score

Each topic has an editorial valence (positive = support is good, negative = support is bad). The combination determines contribution:

Positive contribution
  • toward + positive valence (supporting good things)
  • against + negative valence (opposing bad things)
Negative contribution
  • against + positive valence (opposing good things)
  • toward + negative valence (supporting bad things)
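The four cases above reduce to a sign rule. A minimal sketch (a hypothetical helper, not GoodIndex's actual code), encoding direction as ±1 and valence as ±1:

```python
def contribution_sign(direction: str, valence: int) -> int:
    """Return +1 (positive contribution) or -1 (negative contribution).

    direction: "toward" or "against"
    valence: +1 if supporting the topic is good, -1 if it is bad
    """
    d = 1 if direction == "toward" else -1
    return d * valence

# Supporting a good thing, or opposing a bad thing, is positive:
assert contribution_sign("toward", +1) == 1   # e.g. launching a DEI program
assert contribution_sign("against", -1) == 1  # e.g. opposing tax avoidance
# Opposing a good thing, or supporting a bad thing, is negative:
assert contribution_sign("against", +1) == -1
assert contribution_sign("toward", -1) == -1
```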

How Scoring Works

Scores range from -100 (most problematic) to +100 (most ethical). Scoring is incident-based: each incident contributes once, regardless of how many topics it touches or how many news articles covered it.

1. Incident Attributes

Each incident has attributes that affect its weight:

Significance: How impactful is this event?

  • critical (×2.0), high (×1.5), medium (×1.0), low (×0.5)

Confidence: How well-evidenced is this incident?

Computed from confirming signals, weighted by evidence strength. Better sources contribute more: verified (×1.0), documented (×0.8), reported (×0.5), alleged (×0.2). Two verified sources provide more confidence than four alleged ones.

Agency: How much moral agency did the entity exercise?

  • proactive (×1.0), reactive (×0.75), negligent (×0.5), compelled (×0.25), incidental (×0.1)

Proactive actions (deliberate initiative) receive full weight, while compelled actions (forced by law/regulation) receive reduced weight.
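The attribute weights can be sketched as lookup tables. Note one assumption: the text specifies the per-source evidence weights but not how they combine into a confidence value, so the saturating form and constant `k` below are hypothetical, chosen only to satisfy the stated property that two verified sources outweigh four alleged ones:

```python
SIGNIFICANCE = {"critical": 2.0, "high": 1.5, "medium": 1.0, "low": 0.5}
AGENCY = {"proactive": 1.0, "reactive": 0.75, "negligent": 0.5,
          "compelled": 0.25, "incidental": 0.1}
EVIDENCE = {"verified": 1.0, "documented": 0.8, "reported": 0.5, "alleged": 0.2}

def confidence_from_signals(strengths, k=1.0):
    # Hypothetical saturating formula (the exact one is not published):
    # sum the evidence weights, then map into (0, 1).
    total = sum(EVIDENCE[s] for s in strengths)
    return total / (total + k)

# Two verified sources outweigh four alleged ones, as the text requires:
assert confidence_from_signals(["verified"] * 2) > confidence_from_signals(["alleged"] * 4)
```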

2. Topic Links

Each incident is linked to one or more ethical topics with:

  • Direction: "toward" (engaged in/supported) or "against" (opposed/criticized)
  • Relevance: primary (×1.0), secondary (×0.5), or contextual (×0.2)

Topic normalization: If one incident touches 5 topics, those contributions are averaged so the incident counts as 1, not 5. This prevents "topic proliferation bias" where broadly-covered events dominate scores.

3. Per-Incident Score

Each incident's contribution is calculated as:

incident_score = avg(direction × valence × relevance) × significance × confidence × agency
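Under the stated formula, a per-incident contribution can be sketched as follows; the averaging over topic links implements the normalization from step 2, so an incident touching several topics still counts once:

```python
def incident_score(topic_links, significance, confidence, agency):
    """topic_links: list of (direction, valence, relevance) tuples,
    where direction is "toward"/"against", valence is +1/-1, and
    relevance is the multiplier (1.0, 0.5, or 0.2)."""
    contributions = [
        (1 if direction == "toward" else -1) * valence * relevance
        for direction, valence, relevance in topic_links
    ]
    topic_avg = sum(contributions) / len(contributions)
    return topic_avg * significance * confidence * agency

# A proactive, well-evidenced, high-significance action on one primary
# positive-valence topic: 1.0 × 1.5 × 0.9 × 1.0 = 1.35
score = incident_score([("toward", +1, 1.0)], significance=1.5,
                       confidence=0.9, agency=1.0)
assert abs(score - 1.35) < 1e-9
```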

4. Overall Score

The overall score averages all incident scores for an entity, then applies a confidence multiplier based on incident count:

overall_score = avg(incident_scores) × confidence_multiplier × 50

The ×50 scaling converts the typical -2 to +2 raw range into -100 to +100.

5. Confidence Adjustment

Entities with fewer incidents have their scores regressed toward zero. This prevents entities with just 1-2 incidents from hitting extreme scores.

  • 1 incident: 25% of raw score (max ±25)
  • 3 incidents: 50% of raw score (max ±50)
  • 10 incidents: 77% of raw score (max ±77)
  • 50 incidents: 94% of raw score (max ±94)

Formula: confidence = incidents / (incidents + 3)
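The formula and the table above can be checked directly:

```python
def confidence_multiplier(incidents: int) -> float:
    # Regresses low-incident entities toward zero: incidents / (incidents + 3).
    return incidents / (incidents + 3)

assert confidence_multiplier(1) == 0.25               # max ±25
assert confidence_multiplier(3) == 0.5                # max ±50
assert round(confidence_multiplier(10), 2) == 0.77    # max ±77
assert round(confidence_multiplier(50), 2) == 0.94    # max ±94
```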

6. Recency Weighting

By default, GoodIndex uses recency-weighted scoring. Recent incidents count more than old ones, reflecting that behavior can change over time.

Grace period

Incidents less than 1 year old receive full weight (×1.0).

Decay

After 1 year, incident weight decays with a 3-year half-life. An incident from 4 years ago has 50% weight; from 7 years ago, 25% weight.

Minimum weight

Very old incidents still count, but with a minimum weight of 10%. Historical record is preserved, just de-emphasized.

Formula: recency = max(0.1, 0.5^((years - 1) / 3)) for incidents older than 1 year.
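The recency schedule, grace period, half-life, and floor combine into one small function:

```python
def recency_weight(years: float) -> float:
    # Full weight inside the 1-year grace period, then a 3-year
    # half-life, floored at 10% so old incidents never vanish entirely.
    if years <= 1:
        return 1.0
    return max(0.1, 0.5 ** ((years - 1) / 3))

assert recency_weight(0.5) == 1.0   # within grace period
assert recency_weight(4) == 0.5     # one half-life after grace
assert recency_weight(7) == 0.25    # two half-lives
assert recency_weight(30) == 0.1    # floored at the 10% minimum
```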

7. Data Quality

We display data quality indicators to help you understand confidence levels:

  • High (10+ incidents): Strong confidence
  • Medium (3-9 incidents): Moderate confidence
  • Limited (1-2 incidents): Limited data
  • Stale (no incidents in 2+ years): Outdated
  • Insufficient (no incidents): No score

Complete formula (recency-weighted):

incident_score = avg(direction × valence × relevance) × significance × confidence × agency × recency
overall_score = avg(incident_scores) × (incident_count / (incident_count + 3)) × 50
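The aggregation step can be sketched as follows, where `incident_scores` are the per-incident contributions from the formula above (already recency-weighted in editorial mode):

```python
def overall_score(incident_scores):
    # Average the per-incident contributions (typically roughly -2..+2),
    # regress toward zero via n / (n + 3), then scale to -100..+100.
    n = len(incident_scores)
    if n == 0:
        return None  # "Insufficient data"
    raw = sum(incident_scores) / n
    return raw * (n / (n + 3)) * 50

# A single maximally weighted positive incident caps out at +25,
# matching the confidence-adjustment table: 2.0 × (1/4) × 50 = 25.
assert overall_score([2.0]) == 25.0
assert overall_score([]) is None
```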

Scoring Modes

GoodIndex offers two scoring modes, selectable in Preferences:

Editorial (Default)

Uses recency-weighted scoring. Recent behavior matters more than distant history.

  • Best for: Current ethical assessment
  • Reflects: Recent trajectory and improvements
  • Weight: Decays over time (3-year half-life)

All-Time

Equal weight to all incidents regardless of when they occurred.

  • Best for: Historical accountability
  • Reflects: Complete ethical record
  • Weight: Uniform across all time

Score Threshold

You can optionally hide scores for entities with insufficient data. When enabled, entities below your threshold (default: 3 incidents) show "Insufficient data" instead of a potentially misleading score.

Editorial Stance

Transparency about values is more honest than false neutrality. For most topics, we have an explicit editorial position.

Topics Where We Take a Stance

Support is Good

  • DEI programs
  • Worker rights
  • Climate action
  • AI safety
  • User privacy
  • Open source
  • LGBTQ+ rights
  • Healthcare access

Support is Bad

  • Excessive executive pay
  • Tax avoidance schemes
  • Anti-competitive practices
  • Worker exploitation

Political Alignment (Tracked Separately)

Political alignment is tracked via political entities, not topics. When someone donates to a campaign or attends an inauguration, we record the target as a political entity. By default, this is displayed separately and does not affect the base score.

  • Trump Administration (2017-2021, 2025-)
  • Trump 2024 Campaign, Inauguration Fund
  • Biden Administration (2021-2025)
  • US Democratic Party, Republican Party
  • UK Conservative, Labour, Reform parties

However, users can opt-in to political scoring via Preferences. When enabled, political alignment adds a bonus or penalty (up to ±25 points) based on whether the entity's political affiliations align with the user's stated preferences.

Personalization

Ethical judgments vary. While we publish scores based on our editorial framework, you can customize how scores are calculated via Preferences.

Topic Preferences

  • Override valences: Change whether supporting a topic is good, bad, or neutral
  • Exclude topics: Remove topics from score calculation entirely

Political Preferences

  • Political weight: Set how much political alignment affects scores (0-100%)
  • Profile stances: Indicate whether you view alignment with political groups (e.g., Trump/MAGA, US Democratic) as positive or negative
  • Additive modifier: Political alignment adds up to ±25 points to the ethical score
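One way the additive modifier might combine with the base score is sketched below. This is hypothetical: the page states only that the modifier is capped at ±25 points and scaled by the user's political weight, so the `alignment` parameter and the aggregation of profile stances are assumptions:

```python
def apply_political_modifier(base_score, alignment, political_weight):
    """base_score: ethical score in -100..+100.
    alignment: assumed summary in -1.0 (opposed to the user's stated
    stances) .. +1.0 (fully aligned).
    political_weight: user's setting in 0.0..1.0 (0-100%)."""
    # Cap the modifier at +/-25 points, then keep the total in range.
    modifier = max(-25.0, min(25.0, alignment * political_weight * 25.0))
    return max(-100.0, min(100.0, base_score + modifier))

assert apply_political_modifier(40.0, alignment=1.0, political_weight=1.0) == 65.0
assert apply_political_modifier(90.0, alignment=1.0, political_weight=1.0) == 100.0  # clamped
```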

Ideological Presets

For quick setup, choose a preset (Progressive, Conservative, Libertarian, Tech Ethics Focus) that pre-fills topic and political preferences. You can customize individual settings afterward.

Preferences are stored locally in your browser and never sent to our servers.

Limitations

  • Coverage gaps: Not every company has been researched. Some entities have limited data. Data quality badges indicate confidence levels.
  • Research lag: Scores reflect what we have documented, not necessarily the latest developments. Recent events may not yet be captured.
  • Recency trade-offs: Editorial mode emphasizes recent behavior, which may underweight important historical events. All-Time mode treats all events equally, which may not reflect genuine change.
  • Attribution: We distinguish between company actions and personal actions of executives, but some cases are ambiguous.
  • Subjectivity: Stance classification involves judgment. We aim for consistency.
  • Weighting: Different scoring weights would produce different rankings.
  • Missing context: A single score cannot capture nuance. Always review the underlying incidents.

Found an error or have evidence we should consider? Contact us.