Beyond overconfidence: Embedding curiosity and humility for ethical medical AI

dc.contributor.authorSebastián Andrés Cajas Ordóñez
dc.contributor.authorRowell Castro
dc.contributor.authorLeo Anthony Celi
dc.contributor.authorRoben Delos Reyes
dc.contributor.authorJustin Engelmann
dc.contributor.authorAri Ercole
dc.contributor.authorAlmog Hilel
dc.contributor.authorMahima Kalla
dc.contributor.authorLeo Kinyera
dc.contributor.authorMaximin Lange
dc.contributor.authorTorleif Markussen Lunde
dc.contributor.authorMackenzie J. Meni
dc.contributor.authorAnna E. Premo
dc.contributor.authorJana Sedlakova
dc.date.accessioned2026-02-27T06:23:24Z
dc.date.issued2026
dc.description.abstractContemporary medical AI systems exhibit a critical vulnerability: they deliver confident predictions without mechanisms to express uncertainty or acknowledge limitations, leading to dangerous overreliance in clinical settings. This paper introduces the BODHI (Bridging, Open, Discerning, Humble, Inquiring) framework, a dual-reflective architecture grounded in two essential epistemic virtues, curiosity and humility, as foundational design principles for healthcare AI. Curiosity drives systems to actively explore diagnostic uncertainty, seek additional information when faced with ambiguous presentations, and recognize when training distributions fail to match clinical reality. Humility provides complementary restraint, enabling uncertainty quantification, boundary recognition, and appropriate deference to human expertise. We demonstrate how these virtues function synergistically in a dynamic feedback loop, preventing both reckless exploration and excessive caution while supporting collaborative clinical decision-making. Drawing from psychological theories of curiosity and cross-species evidence of epistemic humility, we argue that these capacities represent fundamental biological design principles essential for systems operating in high-stakes, uncertain environments. The BODHI framework addresses systemic failures in medical AI deployment, from biased training data to institutional workflow pressures, by embedding uncertainty awareness and collaborative restraint into foundational system architecture. Key implementation features include calibrated confidence measures, out-of-distribution detection, curiosity-driven escalation protocols, and transparency mechanisms that adapt to clinical context. Rather than pursuing algorithmic perfection through pure optimization, we advocate for human-AI partnerships that enhance clinical reasoning through mutual accountability and calibrated trust.
This approach represents a paradigm shift from overconfident automation toward collaborative systems that embody the wisdom to pause, reflect, and defer when appropriate.
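The abstract's implementation features, calibrated confidence and out-of-distribution detection feeding an escalation protocol, can be illustrated with a toy sketch. Everything below is hypothetical: the function names, the temperature-scaled softmax as a stand-in for calibration, the energy score as a stand-in for OOD detection, and the thresholds are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: toy "defer when uncertain" logic inspired by the
# abstract's calibrated-confidence and escalation ideas. All names and
# thresholds are hypothetical, not taken from the BODHI paper.
import math


def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; temperature > 1 softens overconfident logits
    (a simple stand-in for post-hoc confidence calibration)."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def triage(logits, temperature=2.0, confidence_floor=0.8, energy_ceiling=-2.0):
    """Return a prediction, or escalate to a clinician when the model is
    under-confident or the input looks out-of-distribution."""
    probs = softmax(logits, temperature)
    confidence = max(probs)
    # Energy score: -logsumexp(logits). In-distribution inputs tend to have
    # large logits and thus very negative energy; flat logits push energy
    # toward zero, which we treat as an OOD signal here.
    energy = -math.log(sum(math.exp(z) for z in logits))
    if confidence < confidence_floor or energy > energy_ceiling:
        return {"action": "escalate_to_clinician", "confidence": confidence}
    return {"action": "predict",
            "label": probs.index(confidence),
            "confidence": confidence}
```

Under these assumptions, a sharply peaked logit vector yields a prediction, while a nearly flat one (low calibrated confidence, high energy) triggers escalation, which is the "wisdom to pause, reflect, and defer" the abstract describes, reduced to two thresholds.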
dc.identifier.citationCajas Ordóñez, S. A., Castro, R., Celi, L. A., Delos Reyes, R., Engelmann, J., Ercole, A., ... & Sedlakova, J. (2026). Beyond overconfidence: Embedding curiosity and humility for ethical medical AI. PLOS Digital Health, 5(1), e0001013.
dc.identifier.urihttps://ir.must.ac.ug/handle/123456789/4264
dc.language.isoen
dc.publisherPLOS Digital Health
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subjectContemporary medical AI systems
dc.subjectBeyond overconfidence
dc.titleBeyond overconfidence: Embedding curiosity and humility for ethical medical AI
dc.typeArticle

Files

Original bundle

Now showing 1 - 1 of 1
Name:
Beyond overconfidence- Embedding curiosity and humility for ethical medical AI.pdf
Size:
417.61 KB
Format:
Adobe Portable Document Format

License bundle

Now showing 1 - 1 of 1
Name:
license.txt
Size:
1.71 KB
Format:
Item-specific license agreed upon at submission