
Research Integrity

Supporting means...

Follows proper research protocols and timelines; prioritizes safety over speed in trials; is transparent about research methodology and results; protects research subjects; maintains scientific rigor; reports adverse events promptly; allows adequate time for safety evaluation

Opposing means...

Rushes research to meet arbitrary deadlines; cuts corners on safety protocols; pressures researchers to accelerate timelines unsafely; provides inadequate protection for research subjects; hides or downplays adverse results; prioritizes commercial goals over scientific rigor

Recent Incidents

Sam Altman backed Preventive, a startup aiming to 'correct devastating genetic conditions' before birth through gene editing. The company explored conducting early-stage research in nations with more permissive regulations, with reports indicating the UAE as a possible location. Human embryo editing is currently illegal in the US, UK, and many other countries. Critics warned the project raises profound ethical dilemmas about designer babies, genetic equity, and circumventing regulatory safeguards.

$3.6M

A DOJ settlement found that Cerebral engaged in practices encouraging the unauthorized distribution of controlled substances (Adderall, Ritalin) from 2019 to 2022. The company set internal targets to increase its 'Initial Visit Rx Rate' for prescribing stimulants, used financial incentives to pressure providers to meet prescribing metrics, and considered disciplinary measures for providers who didn't prescribe enough stimulants.

Ieso provides NHS CBT therapy services to over 20 million adults through a network of 600+ therapists, delivering 460,000+ hours of cognitive behavioral therapy annually. The company built the world's largest outcomes-indexed mental health dataset from 145,000+ patients, enabling research into treatment effectiveness.

After receiving FDA approval for human trials in May 2023, Neuralink broke with established scientific communication standards by announcing significant clinical trial updates via social media rather than registering its brain implant trial on ClinicalTrials.gov, the standard public database for clinical trial transparency and accountability. Industry experts and ethicists said this approach undermined scientific transparency and ethical accountability for a medical device implanted in human brains.

MindSpot operates as a free, Australian government-funded online mental health service providing evidence-based CBT. The service has completed over 121,000 assessments, with large effect sizes (d=1.40-1.45), 50% symptom reduction, and 95% of users saying they would recommend the service. This demonstrates the viability of publicly funded digital mental health care.

In September 2023, Elon Musk publicly claimed 'No monkey has died as a result of a Neuralink implant' and that the company only used 'terminal monkeys close to death already.' However, records obtained by the Physicians Committee for Responsible Medicine (PCRM) and WIRED revealed that 12 previously healthy rhesus macaques were euthanized due to implant complications. Health records showed no evidence the monkeys were terminally ill. The PCRM filed an SEC complaint over these materially misleading statements.

negligent

FDA inspectors found 'objectionable conditions or practices' at Neuralink's animal testing facilities during June 2023 inspections, including missing calibration records and quality assurance failures. Separately, the USDA confirmed a 2019 Animal Welfare Act violation involving unapproved BioGlue that caused animal suffering, a violation that had been kept out of public records. In January 2024, DOT also fined Neuralink $2,480 for hazardous materials transport violations.

In June 2023, UPSIDE Foods became the first company to receive both FDA and USDA approval to sell cultivated chicken in the United States. Cultivated meat is grown from animal cells without slaughtering animals, representing a technological breakthrough for animal welfare. The company launched at Michelin-starred restaurant Bar Crenn and is suing Florida over its ban on cultivated meat.

negligent

In February 2023, the U.S. Department of Transportation opened an investigation into Neuralink over the potentially illegal movement of hazardous pathogens. The Physicians Committee for Responsible Medicine (PCRM) obtained documents suggesting unsafe packaging and transport of implants removed from monkey brains that may have carried infectious diseases including antibiotic-resistant staphylococcus and herpes B virus, in violation of federal hazardous materials law. In January 2024, DOT issued a fine of $2,480 for the violations.

HelloBetter received Digital Health Application (DiGA) certification from Germany's Federal Institute for Drugs and Medical Devices for six mental health programs. The certification means 73 million Germans can access the programs with 100% health insurance coverage. HelloBetter's evidence base of 33 peer-reviewed RCTs is the strongest globally for digital mental health interventions.

negligent

Approximately 1,500 animals, including over 280 sheep, pigs, and monkeys, have died as a result of Neuralink tests since 2018. At UC Davis, 15 of 23 monkeys with brain implants were euthanized. Animals suffered brain hemorrhages, paralysis, bloody infections, and self-mutilation. Over 20 employees alleged tests were rushed due to Musk's pressure to accelerate development, causing botched experiments. Musk told staff to imagine 'a bomb strapped to their heads' to work faster.

negligent

Between 2018 and 2022, approximately 1,500 animals, including monkeys, pigs, and sheep, died during Neuralink's brain implant research. Internal staff reported that Elon Musk pressured teams to accelerate timelines, leading to botched surgeries: 25 pigs died from wrong-size implants. Monkeys suffered chronic infections, paralysis, and brain swelling. The USDA opened a probe in December 2022, and the DOT investigated improper handling of hazardous pathogens. The internal animal care committee was chaired by a Neuralink executive with a financial stake in the company.

On January 3, 2022, Elizabeth Holmes was found guilty of four counts of wire fraud and one count of conspiracy to commit wire fraud, and was sentenced to 11 years and 3 months in prison. Co-defendant Ramesh Balwani was convicted of 12 counts of fraud and sentenced to nearly 13 years. Evidence showed Theranos systematically concealed that its Edison devices produced inaccurate results, used traditional blood testing machines while claiming proprietary technology, and ignored internal warnings about test reliability.

compelled $700.0M

Elizabeth Holmes was convicted in January 2022 on four counts of investor fraud and conspiracy for misleading investors about Theranos's blood-testing technology capabilities. The company claimed its Edison device could run hundreds of tests from a finger prick of blood, but internal data showed the technology was unreliable. Holmes raised over $700M from investors based on these claims. She was sentenced to over 11 years in federal prison.

In December 2020, Google fired Timnit Gebru, co-lead of its Ethical AI team, after she refused to retract her name from a paper ('On the Dangers of Stochastic Parrots') that detailed risks of large language models. Over 2,600 Google employees and 4,000 external AI researchers signed a protest letter. Google subsequently fired Margaret Mitchell, the other Ethical AI co-lead, in February 2021. The incident demonstrated corporate pressure to suppress inconvenient AI safety research.

In December 2020, Google terminated Timnit Gebru, the technical co-lead of its Ethical AI team, over a disagreement about a research paper scrutinizing bias in large language models. Google maintained Gebru resigned; Gebru said she was fired. Over 2,278 Google employees and 3,114 industry allies signed a petition protesting her departure. CEO Sundar Pichai apologized for the process in an internal memo but did not reverse the outcome. Congressional representatives demanded answers from Google about the firing.

reactive

After researchers including Kate Crawford documented pervasive bias in ImageNet's person categories (racist slurs, misogynist labels, and ableist classifications), Fei-Fei Li's team systematically identified non-visual concepts and offensive categories. They proposed and executed the removal of 1,593 categories (54% of the 2,932 person categories), addressing both bias and privacy concerns in the foundational AI dataset. This represented a significant acknowledgment that even groundbreaking datasets require ongoing ethical review and correction.

Gebru co-authored the influential 'Datasheets for Datasets' paper proposing that every dataset used for AI training be accompanied by documentation about how data was gathered, its limitations, and how it should or should not be used. The framework became an industry standard practice adopted by major AI organizations to improve data transparency and reduce bias in AI systems.

negligent

Theranos devices produced unreliable results for patients at Walgreens locations, including false positives and negatives for serious conditions. The company voided or corrected tens of thousands of test results in 2014-2015. CMS (Centers for Medicare and Medicaid Services) found the company's lab practices posed 'immediate jeopardy to patient health and safety' and banned Holmes from operating a lab for two years.