The risks and downsides of AI in health



I discussed the advantages of using AI in health in articles like this and this, but for a balanced view it’s also important to understand the risks and downsides. In this article I cover:

  1. Real-world examples where AI went wrong in health
  2. A list of the risks and downsides of using AI in health

Real-world examples where AI went wrong in health

Incorrect cancer treatments – IBM Watson for Oncology

  • What happened: IBM’s Watson was pitched as a cancer-treatment recommender. Hospitals invested heavily.
  • The issue: Doctors later found that Watson often gave unsafe or incorrect recommendations, partly because it was trained on hypothetical cases rather than real-world patient data.
  • Impact: Trust eroded, projects were scaled back, and it became a cautionary tale about overhyping AI.

Racial bias – Optum Algorithm (2019)

  • What happened: A widely used AI tool was supposed to identify patients needing extra care.
  • The issue: It used healthcare spending as a proxy for health needs. Because historically less money has been spent on Black patients with the same level of need, the algorithm systematically underestimated their health risks; at the same risk score, Black patients were on average sicker than white patients. A minimal sketch of this proxy effect follows this list.
  • Impact: Millions of patients may have been denied needed support. It became a textbook case of systemic bias baked into data.
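
To make the proxy problem concrete, here is a minimal hypothetical sketch (synthetic data in Python, not the actual Optum model). Two groups have identical true health needs, but historically less is spent on group B; a score that predicts spending then enrolls far fewer group B patients into the extra-care program, and the group B patients it does enroll are sicker on average.

```python
# Hypothetical illustration only (synthetic data, not the real algorithm):
# why using spending as a proxy label for health need can under-serve a
# group that has historically had less spent on its care.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups, A and B, with the SAME distribution of true health need.
need_a = rng.gamma(shape=2.0, scale=1.0, size=n)
need_b = rng.gamma(shape=2.0, scale=1.0, size=n)

# Assumption for the sketch: spending tracks need, but ~30% less is spent
# per unit of need on group B (access barriers, under-treatment, etc.).
spend_a = need_a * 1.0 + rng.normal(0.0, 0.2, n)
spend_b = need_b * 0.7 + rng.normal(0.0, 0.2, n)

# A "risk score" trained to predict spending effectively ranks by spending.
scores = np.concatenate([spend_a, spend_b])
group = np.array(["A"] * n + ["B"] * n)
need = np.concatenate([need_a, need_b])

# Enroll the top 3% by score into the extra-care program.
enrolled = scores >= np.quantile(scores, 0.97)

print("Share of enrollees from group B:",
      round((group[enrolled] == "B").mean(), 3))        # well below the 0.5 you'd expect
print("Average true need, enrolled A vs. B:",
      round(need[enrolled & (group == "A")].mean(), 2),
      round(need[enrolled & (group == "B")].mean(), 2))  # B patients must be sicker to get in
```

Despite equal underlying need, most program slots go to group A, and a group B patient has to be sicker than a group A patient to cross the threshold, which is broadly the pattern the 2019 study described.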

Inconsistent analysis – Skin Cancer Apps

  • What happened: Several consumer apps promised to flag malignant moles using photos.
  • The issue: Independent testing showed accuracy was inconsistent. Some apps missed melanomas or flagged benign spots as cancer, creating both false reassurance and unnecessary panic.
  • Impact: Regulators issued warnings, and dermatologists stressed that apps should not replace exams.

Poor data – COVID-19 Imaging Models

  • What happened: During the pandemic, dozens of AI models were released claiming to detect COVID-19 on chest X-rays or CT scans.
  • The issue: A 2021 review found that nearly all were biased, overfit, or built on poor-quality data, making them unreliable in real-world hospitals; the sketch after this list shows how a model can learn a site artifact instead of the disease.
  • Impact: Most were abandoned, highlighting how rushing AI into crises without rigorous validation can backfire.
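
To see how that happens, here is a hypothetical sketch (synthetic features, no real imaging model) of the shortcut-learning failure those reviews described: the training data contain a site or scanner artifact that happens to track the label, the model learns the artifact instead of the disease, and accuracy collapses as soon as the model is tested at a hospital where that artifact is absent.

```python
# Hypothetical sketch (synthetic features, not a real imaging model) of a
# common pandemic-era failure: the classifier learns a dataset artifact that
# tracks the label, so internal accuracy looks great but collapses at an
# external hospital.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_site(n, artifact_tracks_label):
    y = rng.integers(0, 2, n)                     # 1 = COVID-positive
    disease_signal = y + rng.normal(0.0, 2.0, n)  # weak, noisy true signal
    if artifact_tracks_label:
        # Training site: positives came from a different scanner/source,
        # so this "artifact" feature almost perfectly mirrors the label.
        artifact = y + rng.normal(0.0, 0.1, n)
    else:
        # External site: the same artifact is unrelated to the label.
        artifact = rng.normal(0.0, 1.0, n)
    return np.column_stack([disease_signal, artifact]), y

X_train, y_train = make_site(5_000, artifact_tracks_label=True)
X_internal, y_internal = make_site(2_000, artifact_tracks_label=True)
X_external, y_external = make_site(2_000, artifact_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("Internal test accuracy:", round(model.score(X_internal, y_internal), 3))  # looks excellent
print("External test accuracy:", round(model.score(X_external, y_external), 3))  # near chance
```

The gap between the two numbers is why independent, external validation at a genuinely different site matters before any clinical use.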

Unhelpful responses – AI Chatbots for Mental Health

  • What happened: Some early mental health bots were promoted as affordable therapy alternatives.
  • The issue: Users reported bots giving generic, unhelpful, or even harmful responses (e.g., minimizing suicidal thoughts).
  • Impact: Raised alarms about deploying chatbots in sensitive areas without strong guardrails and escalation protocols.


The lesson: These failures didn’t mean AI had no role in healthcare—they showed that data quality, transparency, bias checks, and human oversight aren’t optional extras. Without them, AI can magnify existing problems or create new risks.


A List of Risks of Using AI in Health

Clinical & Patient Safety Risks

  • Hallucinations and inaccuracy: AI can generate convincing but wrong answers. A misdiagnosis or a missed “red flag” could delay urgent care.
  • Lack of clinical nuance: AI can’t palpate an abdomen, notice subtle nonverbal cues, or integrate “gut feelings” honed by years of practice.
  • Over-reliance: Patients (and even clinicians) might lean too heavily on AI, skipping professional judgment.

Bias & Inequality

  • Training data bias: If models are trained on populations that skew Western, white, or affluent, recommendations may not generalize to other groups.
  • Health disparities: Bias can creep in around race, gender, or socioeconomic factors—leading to unequal care quality.
  • Language gaps: Non-English users may get less accurate or poorly translated advice.

Privacy & Security

  • Data exposure: Patients might share sensitive medical info with AI tools without realizing where it’s stored or who can access it.
  • Compliance issues: Not all tools are aligned with HIPAA or GDPR, which creates legal and ethical risks for providers and consumers.

Accountability & Legal Questions

  • Liability: If a patient is harmed by AI-driven advice, who’s responsible? The developer, the doctor, or the patient?
  • Regulation lag: Health AI is moving faster than regulatory frameworks, leaving a “gray zone” for safety and accountability.

Human Factors

  • Erosion of trust: If patients feel doctors rely too much on AI, it may weaken the doctor-patient relationship.
  • Reduced empathy: Even “empathetic” chatbots can’t truly understand suffering; they risk replacing human connection with simulations.
  • Mental health concerns: Users may grow dependent on AI for emotional support in place of qualified professionals.

System-Level Risks

  • Workflow overload: Integrating AI into clinical practice adds new tech layers, which may increase clinician burnout if poorly implemented.
  • Security vulnerabilities: Healthcare data is already a top cybercrime target. Adding AI systems multiplies the attack surface.
  • Unintended consequences: For example, an AI optimized to reduce costs might recommend under-treatment.

The balance: AI has huge promise—but it must be implemented with human oversight, regulation, and transparency to avoid causing more harm than help.
