
AI in Medicine - Clinical Judgment Doesn’t Go Away

Artificial intelligence is rapidly transforming medical practice, but it is not eliminating the clinician’s central role; it is redefining it. This talk argues that *regulation is reinforcing* clinicians as the final accountability layer in AI-assisted care. In the emerging era of digital biology, where human behavior and physiology are modeled with increasing precision, clinicians must blend judgment with oversight of increasingly autonomous tools.

The presentation introduces a leadership formula, `Lv = (i)Kc + (v)Spj + C² + (a)Ti² + Rpfu + E(wo)`, that quantifies how curiosity, strategic judgment, coordination, applied technology, risk fluency, and workforce optimization drive effective AI leadership in healthcare. Shah traces the historical arc from digitizing physics to digitizing biology, showing how medicine is becoming an auditable, data-rich science where judgment itself becomes measurable.

A “NoBS” guide maps where AI and ML are practical today across education, diagnostics, therapeutics, administration, and self-service health, clarifying that AI’s regulatory and clinical maturity varies widely across domains. While AI can automate low-risk or routine processes, clinical accountability, ethical reasoning, and contextual decision-making remain human responsibilities.

For clinicians at different career stages, Shah offers concrete guidance:

* Early-career clinicians should build AI literacy and treat model outputs like lab results—subject to review, verification, and contextualization.
* Mid-career clinicians should transition from AI users to AI governance leaders, ensuring safety, traceability, and interpretability.
* Late-career clinicians should codify their tacit knowledge—intuition, heuristics, and pattern recognition—into structured insights that define the safety boundaries of AI systems.

Ultimately, Shah argues that AI will formalize and extend clinical expertise but cannot replace it. The clinician’s judgment—now increasingly quantifiable—remains the “final common accountability pathway.” The next decade of medicine will reward those who treat AI not as a replacement for expertise, but as an amplifier of clinical safety, leadership, and transparency.


Shahid N. Shah

October 28, 2025

Transcript

  1. Your Judgment Doesn’t Go Away. Regulation Is Reinforcing Clinicians as the Final Common Accountability Pathway for AI. AI in Type 1 & Type 2 Diabetes Care depends on clinicians being responsible for decisions they may not make. By Shahid N. Shah (CTO, Diabetes Research Hub).
  2. The formula for successful AI Leadership. Memorize It. Lv = (i)Kc + (v)Spj + C² + (a)Ti² + Rpfu + E(wo)
  3. Technology has digitized our experiences. Last and past decades: digitize mathematics & engineering; digitize maps, literature, news; digitize purchasing, social networks; predict crowd behavior. This and future decades: digitize biology; digitize chemistry; digitize physics; predict human behavior. (Self-service, directly usable by consumers, versus not self-service, for prosumers or professionals only.)
  4. Now we need to digitize biology and create a human body simulator… (https://www.sofa-framework.org/). Google DeepMind CEO: “We Want To Build A Virtual Cell.”
  5. If we can digitize biology and chemistry… a 15-year-old student discovers a cure for a rare disease while gaming; a computer creates a treatment for prostate cancer.
  6. AI Will Change Medicine. But Not Make It Unrecognizable.
     • AI may transform parts of the mechanics of how care is delivered, but human judgment, compassion, and expertise remain central.
     • Family members and caregivers can become more informed and helpful, especially if physicians encourage their involvement.
     • AI will take over routine diagnostic and educational tasks, reducing some burdens on clinicians.
     • Poorly designed or “sloppy” AI systems will create new challenges and increase workload through the rest of this decade.
     Some things will get easier, but other things will get much harder for physicians, because they’re still legally accountable for everything and will have more to review and approve.
  7. NoBS Guide to Where ML and AI are applicable to you today (Education). [Domain map spanning Therapies, Therapeutic Tools, Diagnostics, Diagnostic Tools, Patient Administration, Payer Admin, Clinical Professional Education, Public Health Education, and Patient Education, arranged from most regulation to least regulation.] Highlights: Cohort specific, Personalized, Risk Data Sharing.
  8. NoBS Guide to Where ML and AI are applicable to you today (Clinical Education). [Same domain map, most to least regulation.] Highlights: Auto Literature Review, Specialty-specific Content.
  9. NoBS Guide to Where ML and AI are applicable to you today (Admin). [Same domain map, most to least regulation.] Highlights: Auto Adjudication, Fraud Detection, Quality Compliance, Contract Adherence.
  10. NoBS Guide to Where ML and AI are applicable to you today (Self Diagnostics). [Same domain map, most to least regulation.] Highlights: Patient Self Diagnostics, Unlicensed Pro Diagnostics, Digitally and Heuristically Guided Diagnostics, Images (self, guided, consulted), Labs and Chemistry (self, guided, consulted), Multi-omics (self, guided, consulted), Molecular Biology.
  11. NoBS Guide to Where ML and AI are applicable to you today (Clinical Diagnostics). [Same domain map, most to least regulation.] Highlights: Auto Triage for Low-risk, Augmented Triage for Higher Risk, Infection Control / Antimicrobial Stewardship, Consulted Tele-Diagnostics, Med Device Continuous Diagnostics.
  12. NoBS Guide to Where ML and AI are applicable to you today (Therapeutics). [Same domain map, most to least regulation.] Highlights: Physical, Mental (chat, VR, etc.), Digital (nutritional, etc.), Clinical Research (“systematic review automation”), Drug Development, Clinical Discovery (unattended and digital).
  13. Judgment Is Becoming Quantifiable. For the first time, medicine’s art (judgment) is being turned into auditable science. Non-clinicians and inexperienced clinicians can fake it better, but mid-career and experienced clinicians can guide them to use AI towards improved outcomes.
  14. AI Formalizes Expertise, but, like it or not, Clinicians Retain Risks and Accountability. Regulation requires human review for trustworthiness and legal accountability. AI success is measured by how well it supports your decisions, not how well it replaces them (because legal liability is still yours). Future reimbursement will reward transparent human-AI collaboration.
  15. Early-Career Clinicians: Build AI Literacy Before AI Authority
     • Learn the “why,” not just the “what.” Know how models are trained, what data they omit, and why bias appears. That’s your future malpractice shield.
     • Treat AI output like labs, not laws. Review, verify, and contextualize results; regulators demand independent clinician interpretation, not blind acceptance.
     • Document your disagreement. Each time you override an AI suggestion, you generate high-value learning data: your judgment becomes training material (see the sketch after this list).
     • Join validation projects early. Participate in model audits or CGM-AI drift checks; these roles will soon be the new fellowships of the digital era.
     • Master data privacy and provenance. Know what can leave your institution, what can’t, and how de-identification truly works at the edge.
     The fastest career accelerant is learning how to supervise machines as safely as you supervise trainees.
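The “document your disagreement” point can be made concrete with a small, hypothetical sketch of what an override log might capture. The `OverrideRecord` class, its field names, and the example values below are illustrative assumptions, not part of the talk or of any specific EHR or model API.

```python
# Hypothetical sketch: capturing a clinician's override of an AI suggestion
# as structured, reviewable data. All names and fields are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class OverrideRecord:
    """One clinician decision that disagrees with an AI suggestion."""
    case_id: str                 # de-identified case reference
    model_name: str              # which model produced the suggestion
    model_version: str           # version matters for audits and drift checks
    ai_suggestion: str           # what the model recommended
    clinician_decision: str      # what the clinician actually did
    rationale: str               # the "why": this is the high-value training signal
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Example usage with made-up values.
record = OverrideRecord(
    case_id="case-0042",
    model_name="cgm-risk-triage",
    model_version="1.3.0",
    ai_suggestion="Flag as low risk; routine follow-up",
    clinician_decision="Escalate to same-week review",
    rationale="Recent insulin regimen change not yet reflected in CGM history",
)
print(record.to_json())
```

Kept as structured records rather than free-text notes, such overrides are easier to audit and to feed back into model validation.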
  16. Mid-Career Clinicians: Turn Oversight into Leadership
     • Governance > gadgets. Hospitals need AI Safety Committees more than new apps; step up as the “clinical reviewer of record.”
     • Shift metrics from throughput to traceability. Regulators are auditing how decisions are made, not just how fast; build transparent workflows.
     • Mentor with humility and guardrails. Teach residents to question model outputs using the same rigor you use for abnormal labs.
     • Advocate for interpretable tools. Push vendors for audit trails, confidence intervals, and human-readable rationales; these are not luxuries, they’re compliance features.
     • Document your oversight as leadership evidence. Every AI system will need a designated “responsible clinician”; that’s your new credential.
  17. Late-Career Clinicians: Codify Experience Into the Next Generation of Safety
     • Label the edge cases. When AI misses a rare complication, your correction defines the model’s limits; you’re writing its safety margins.
     • Teach explainable reasoning. Compare your narrative thinking to the model’s probabilities; that becomes training for both humans and machines.
     • Participate in post-market surveillance. AI systems require continuous real-world monitoring; your feedback is treated like adverse-event reporting.
     • Preserve tacit patterns. Record your heuristics (pattern recognition, patient intuition, and context) as structured commentary for future retraining datasets (see the sketch after this list).
     • Shape policy through credibility. Use your senior voice to remind institutions that judgment is a safety function, not a relic of the past.
     Your tacit knowledge (“gut calls”) is the very data AI cannot synthesize without you. Regulation now depends on your judgment to calibrate safety boundaries.
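One way to read “structured commentary” is as machine-readable annotations attached to cases. The schema below is a minimal, hypothetical illustration of that idea, not a standard and not something from the deck; every field name and value is an assumption.

```python
# Hypothetical sketch: one way to turn a senior clinician's heuristic into
# structured commentary that a retraining or audit pipeline could consume.
# The schema, field names, and values are illustrative assumptions only.
heuristic_annotation = {
    "heuristic_id": "gut-call-017",
    "author_role": "senior endocrinologist",
    "trigger": {
        "pattern": "repeated nocturnal hypoglycemia despite stable HbA1c",
        "context": "recent change to shift-work schedule",
    },
    "clinical_reading": "Stable averages can mask dangerous overnight lows.",
    "recommended_action": "Review overnight CGM traces before adjusting basal insulin.",
    "defines_safety_boundary": True,  # marks cases where model output needs human review
}

# A pipeline could filter for boundary-defining heuristics when curating
# retraining data or deciding which model outputs require mandatory review.
if heuristic_annotation["defines_safety_boundary"]:
    print(f"Flag for safety review: {heuristic_annotation['heuristic_id']}")
```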
  18. Here’s the formula that creates the right AI leaders
     • Lv → AI Leadership value (target a large number greater than 1)
     • Kc → inquisitive knowledge of industry, led by curiosity about why things are the way they are
     • Spj → visionary AI strategy informed by problems to be solved and jobs to be done
     • C² → communication & coordination, to not allow AI to get in the way
     • (a)Ti² → application of actionable, transformative AI tech fully integrated into complex workflows
     • Rpfu → understanding performance, financial, and utilization risk (shared, one-sided, two-sided)
     • Ewo → execution through AI-enabled workforce optimization
     • SQ → the status quo, a constant whose size depends upon your organization: do no harm, focus on patient safety, reliability, intermediation, and maintain eminence and consensus-based decision making
     Lv = (Kc + Spj + C² + (a)Ti² + Rpfu + Ewo) / SQ
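Putting slide 2 and this term list together, the formula can be written out as a single expression. Treating SQ as the denominator is an assumption inferred from the “target a large number greater than 1” note, and the (i) and (v) coefficients are carried over from slide 2; check against the original slide before reuse.

```latex
% Hedged reconstruction of the AI leadership formula (not verbatim from the deck).
% Assumption: SQ (status quo) divides the sum, so effective leadership means
% the combined terms outweigh organizational inertia (Lv well above 1).
\[
  L_v \;=\; \frac{(i)K_c \;+\; (v)S_{pj} \;+\; C^2 \;+\; (a)T_i^{\,2} \;+\; R_{pfu} \;+\; E_{wo}}{SQ}
  \;\gg\; 1
\]
```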
  19. Thank You. Find this and many of my other decks at http://www.SpeakerDeck.com/shah. ML and AI are here, and your judgment matters more than ever. How will the medical profession change in the next 10 years? @ShahidNShah (Twitter) · shah (GitHub) · @shahidshah.com (Bsky) · [email protected] · linkedin.com/in/shahidshah