Beyond overconfidence: Embedding curiosity and humility for ethical medical AI
| dc.contributor.author | Sebastián Andrés Cajas Ordóñez | |
| dc.contributor.author | Rowell Castro | |
| dc.contributor.author | Leo Anthony Celi | |
| dc.contributor.author | Roben Delos Reyes | |
| dc.contributor.author | Justin Engelmann | |
| dc.contributor.author | Ari Ercole | |
| dc.contributor.author | Almog Hilel | |
| dc.contributor.author | Mahima Kalla | |
| dc.contributor.author | Leo Kinyera | |
| dc.contributor.author | Maximin Lange | |
| dc.contributor.author | Torleif Markussen Lunde | |
| dc.contributor.author | Mackenzie J. Meni | |
| dc.contributor.author | Anna E. Premo | |
| dc.contributor.author | Jana Sedlakova | |
| dc.date.accessioned | 2026-02-27T06:23:24Z | |
| dc.date.issued | 2026 | |
| dc.description.abstract | Contemporary medical AI systems exhibit a critical vulnerability: they deliver confident predictions without mechanisms to express uncertainty or acknowledge limitations, leading to dangerous overreliance in clinical settings. This paper introduces the BODHI (Bridging, Open, Discerning, Humble, Inquiring) framework, a dual-reflective architecture grounded in two essential epistemic virtues, curiosity and humility, as foundational design principles for healthcare AI. Curiosity drives systems to actively explore diagnostic uncertainty, seek additional information when faced with ambiguous presentations, and recognize when training distributions fail to match clinical reality. Humility provides complementary restraint, enabling uncertainty quantification, boundary recognition, and appropriate deference to human expertise. We demonstrate how these virtues function synergistically in a dynamic feedback loop, preventing both reckless exploration and excessive caution while supporting collaborative clinical decision-making. Drawing from psychological theories of curiosity and cross-species evidence of epistemic humility, we argue that these capacities represent fundamental biological design principles essential for systems operating in high-stakes, uncertain environments. The BODHI framework addresses systemic failures in medical AI deployment, from biased training data to institutional workflow pressures, by embedding uncertainty awareness and collaborative restraint into foundational system architecture. Key implementation features include calibrated confidence measures, out-of-distribution detection, curiosity-driven escalation protocols, and transparency mechanisms that adapt to clinical context. Rather than pursuing algorithmic perfection through pure optimization, we advocate for human-AI partnerships that enhance clinical reasoning through mutual accountability and calibrated trust. This approach represents a paradigm shift from overconfident automation toward collaborative systems that embody the wisdom to pause, reflect, and defer when appropriate. | |
| dc.identifier.citation | Cajas Ordóñez, S. A., Castro, R., Celi, L. A., Delos Reyes, R., Engelmann, J., Ercole, A., ... & Sedlakova, J. (2026). Beyond overconfidence: Embedding curiosity and humility for ethical medical AI. PLOS Digital Health, 5(1), e0001013. | |
| dc.identifier.uri | https://ir.must.ac.ug/handle/123456789/4264 | |
| dc.language.iso | en | |
| dc.publisher | PLOS Digital Health | |
| dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | en |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | |
| dc.subject | Contemporary medical AI systems | |
| dc.subject | Beyond overconfidence | |
| dc.title | Beyond overconfidence: Embedding curiosity and humility for ethical medical AI | |
| dc.type | Article |