Diagnosis by AI: Why trust is the key to acceptance
Errors in emergency diagnostics are common – artificial intelligence (AI) promises to improve diagnostic accuracy. However, both patients and doctors are often skeptical about the use of AI in medicine. Why is this the case – and how can trust in AI-supported diagnostics be strengthened?
AI with potential – and acceptance problems
Diagnoses in emergency medicine are often fraught with uncertainty – in over 12% of cases, the initial assessment and the subsequent discharge diagnosis differ significantly (Marcin et al., 2023). Artificial intelligence (AI) could help to reduce such errors (ten Berg et al., 2024), but patients and doctors often still distrust it (Castelo et al., 2019; Li & Wang, 2024; Longoni et al., 2019; Shaffer et al., 2013). The high stakes in healthcare heighten concerns about data security and human oversight, making trust the key to greater acceptance. In our study, we therefore investigated which factors drive AI acceptance among patients and physicians.
What motivates patients and doctors: Insights from interviews
In our interviews with 20 patients and 11 doctors, trust, safety, and oversight emerged as the central themes. Patients were initially positive towards AI, but their trust in AI diagnoses was mixed. Their main concerns related to physician oversight, the doctor-patient relationship (fear of reduced human interaction), data security, patient safety, liability, and a lack of understanding of how effective AI actually is. Doctors were cautiously optimistic and saw benefits, but were unsure whether they could fully trust AI diagnoses. They shared similar reservations: data security, patient safety, and the risk of algorithmic bias. They also emphasized the need for clear regulations and a regulatory framework for AI. Both groups recognized the potential of AI, but identified barriers to trust and called for oversight and evidence.
Acceptance in detail: Patients sceptical, doctors cautious
In two further experiments, we investigated the acceptance of AI in diagnostics. Participants watched videos in which a doctor used an AI assistant (versus a medical textbook) to arrive at a complex diagnosis. In the patient experiment with 214 participants, the use of AI led to significantly less trust in the doctor, less willingness to recommend the doctor, and less acceptance of future AI diagnoses. This reflects “algorithm aversion”: patients distrust AI and the doctor who uses it (Castelo et al., 2019). The experiment with 62 doctors showed no significant differences in acceptance between AI and traditional diagnostic tools such as medical textbooks. Doctors had no clear preference and did not mistrust AI in principle; they were open to using it if it worked well. The contrast between the two groups suggests that doctors evaluate tools according to performance and efficiency (Hsieh, 2023), whereas patients pay more attention to the human relationship and emotional safety.
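As a rough illustration of how such a between-subjects comparison can be analyzed (this is not the authors’ actual analysis pipeline; the group sizes, ratings, and variable names below are invented), a simple t-test on trust ratings might look like this:

```python
# Illustrative sketch only: hypothetical trust ratings (1-7 scale) for two
# between-subjects conditions, loosely mirroring the patient experiment
# (doctor uses an AI assistant vs. a medical textbook). Simulated data, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated ratings; the "textbook" group is assumed to rate trust higher.
trust_ai = rng.normal(loc=4.2, scale=1.1, size=107)        # doctor used AI assistant
trust_textbook = rng.normal(loc=5.0, scale=1.1, size=107)  # doctor used a textbook

# Welch's t-test (no equal-variance assumption) comparing the two groups.
t_stat, p_value = stats.ttest_ind(trust_ai, trust_textbook, equal_var=False)

# Cohen's d as a simple effect-size estimate for the group difference.
pooled_sd = np.sqrt((trust_ai.var(ddof=1) + trust_textbook.var(ddof=1)) / 2)
cohens_d = (trust_ai.mean() - trust_textbook.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```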
Ways to build trust: Practical recommendations for AI integration
Trust is central to the acceptance of AI in clinical decision-making. Even when AI offers clear benefits, human skepticism can slow its adoption. Key strategies to improve acceptance are:
- Increasing transparency: AI systems need comprehensible decision pathways to build trust (Feurer et al., 2021); a minimal illustration of such an explanation follows this list.
- Ensuring human oversight: AI can support doctors in diagnostics, but cannot replace them; physicians must review AI recommendations. This reduces concerns and preserves the doctor-patient relationship.
- Strengthening data security and privacy: Strict protection of patient data is essential to lower barriers to trust.
- Education and communication: Doctors and patients must be trained and educated about AI. Clear communication reduces fears and improves trust (Hsieh, 2023; Feurer et al., 2021).
- Establishing ethical standards and regulations: Guidelines for AI applications in healthcare (e.g. validation requirements, liability framework) strengthen trust in responsible use.
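To make the transparency point concrete, the minimal sketch below (our illustration, not part of the study; the diagnostic features and data are invented) shows one simple way to expose a comprehensible decision pathway: reporting per-feature contributions of a linear diagnostic model alongside its prediction.

```python
# Illustrative sketch only: a toy "explainable" diagnostic model whose
# prediction is accompanied by per-feature contributions. Feature names and
# data are invented; real clinical models require rigorous validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["troponin_elevated", "chest_pain", "age_over_65", "smoker"]

# Tiny synthetic training set: rows are patients, columns the features above.
X = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = condition present

model = LogisticRegression().fit(X, y)

# Explain a single new case: contribution of each feature to the log-odds.
case = np.array([1, 0, 1, 0])
probability = model.predict_proba(case.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * case

print(f"Predicted probability: {probability:.2f}")
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda pair: -abs(pair[1])):
    print(f"  {name}: {contrib:+.2f} (log-odds contribution)")
```

Surfacing which findings pushed the prediction up or down gives the physician something to check against their own reasoning, which is exactly the kind of comprehensible decision pathway the transparency recommendation calls for.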
Conclusion and outlook: Shaping AI responsibly for better health
AI acceptance in healthcare depends on technology, human factors, and governance. Trust is the key lever for successful integration, leading to better collaboration and diagnostic outcomes. Our project promotes health (UN Sustainable Development Goal 3), innovation (SDG 9), and trust in institutions (SDG 16) through responsible AI integration and ethical frameworks. AI implementation should be understood as an iterative process that incorporates user feedback and refines technology and frameworks to realize the full potential of AI while strengthening trust in the healthcare system.
Bibliography
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809-825.
Feurer, S., Hoeffler, S., Zhao, M., & Herzenstein, M. (2021). Consumers’ response to really new products: A cohesive synthesis of current research and future research directions. International Journal of Innovation Management, 25(8), 2150092.
Hsieh, P.-J. (2023). Determinants of physicians’ intention to use AI-assisted diagnosis: An integrated readiness perspective. Computers in Human Behavior, 147, 107868.
Li, W., & Wang, J. (2024). Determinants of artificial intelligence-assisted diagnostic system adoption intention: A behavioural reasoning theory perspective. Technology in Society, 78, 102643.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 455-468.
Marcin, T., Hautz, S. C., Singh, H., Zwaan, L., Schwappach, D. L. B., Krummrey, G., et al. (2023). Effects of a computerised diagnostic decision support tool on diagnostic quality in emergency departments: study protocol of the DDx-BRO multicentre cluster randomised cross-over trial. BMJ Open, 13(3), e072649.
Shaffer, V. A., Probst, C. A., Merkle, E. C., Arkes, H. R., & Medow, M. A. (2013). Why do patients derogate physicians who use a computer-based diagnostic support system? Medical Decision Making, 33(1), 108-118.
ten Berg, H., van Bakel, B., van de Wouw, L., Jie, K. E., Schipper, A., Jansen, H., et al. (2024). ChatGPT and generating a differential diagnosis early in an emergency department presentation. Annals of Emergency Medicine, 83(1), 83-86.
About the study
The study was conducted and authored by Elisa Konya-Baumbach (BFH Business), Gert Krummrey (BFH Engineering and Computer Science) and Samira Abdullahi (BFH Business). The study will be presented at the Innovation and New Product Development Conference 2025 (https://www.xcdsystem.com/eiasm/program/k4ssn6u/index.cfm) and at the American Marketing Association Summer Conference 2025 (https://www.ama.org/2025-ama-summer-academic-conference-call-for-papers/).
