Responsibility and Risk: Generative AI in Patient-Centred Healthcare Applications

Sb Hny

Generative artificial intelligence (AI) is transforming digital healthcare applications, particularly those involving direct patient interaction. However, this progress brings new risks, ranging from misinformation and data privacy issues to unrealistic user expectations. The question is: how can these challenges be addressed?

Digital health applications based on generative AI offer patients better access to health information. Chatbots, health assistants and interactive platforms that generate content using large language models (LLMs) are no longer just a vision of the future; they are a reality. These tools can inform, explain and reassure, often using language that appears to be written by humans. The Help Near You platform offers a chatbot to help users find complementary therapies. In a joint project with the same-named startup company, potential risks posed by patient-facing chatbots in general, and by this chatbot in particular, were identified. This project was funded as a voucher within the framework of the BFH thematic field Human Digital Transformation. We developed strategies and concrete recommendations on how to address these risks.

The Opportunities and Promise of Generative AI

Supported by models such as Generative Pre-trained Transformers (GPT), generative AI makes it possible to process complex medical information individually. Of particular interest is the concept of “Retrieval-Augmented Generation” (RAG), which involves AI-generated content being supported by verified sources. The aim is to provide well-founded, personalised support for patients. However, the greater the capabilities of such systems, the greater the risk of misuse and error. When these systems provide health advice, these risks can endanger patient safety. Therefore, it is crucial to recognise these risks and implement appropriate countermeasures.

The Dark Side: Technological Risks

A central problem is that of so-called AI ‘hallucinations’ – statements that sound plausible but are factually incorrect. These arise due to the probabilistic nature of language models, which are unable to distinguish between fact and fiction. This is particularly dangerous when the system makes medical recommendations despite not being designed for this purpose. The recommendations may then be generic or even incorrect.

Additional technological sources of danger include:

  • Prompt Injection: Attackers can manipulate input prompts to extract confidential information from the AI system..
  • Jailbreaking: Malicious users may craft inputs that bypass built-in safety measures, allowing them to misuse the chatbot.
  • Data Misuse: There is often a lack of transparency around how user inputs are stored and whether they are used for further training, raising concerns about compliance with data protection regulations.

Humans at the Center: User-Related Risks

The greatest risk may lie in users misinterpreting the information provided by AI systems. Many people underestimate that these tools cannot replace medical professionals. Their human-like language and simulated empathy can foster trust—sometimes excessively so. In critical situations, this misplaced trust may lead individuals to delay seeking medical attention or to make harmful decisions.

There is also a risk of digital exclusion. Certain user groups—such as older adults or individuals with limited digital literacy—may struggle to access or use AI systems effectively. Complex interfaces and poor accessibility further exacerbate this issue. Adopting user-centered design principles can help mitigate these barriers and promote more inclusive usage.

Corporate Risks and Ethical Challenges

There is also much to consider at an organisational level. For example, if a provider violates data protection requirements by using interfaces to foreign language models on which the chatbot is based, they face legal consequences and reputational damage. Similarly critical is when generative AI provides recommendations based on unverified sources or unqualified providers. This can lead to legal disputes and a loss of trust.

Ethical risks include discriminatory responses resulting from biased training data, and making promises of healing that cannot be fulfilled. Such errors undermine trust in digital health solutions as a whole.

Strategies for Risk Minimization

However, there are ways to address these risks. The most important measures include:

  • Transparent Communication: Users must clearly understand the capabilities and limitations of the AI chatbot.
  • Accessible Design: Intuitive navigation, strong visual cues, and plain language significantly enhance usability.
  • Data Protection-Compliant Model Selection: Using proprietary, locally hosted models allows for greater control over data privacy and compliance.
  • Responsible Content Creation: Healing promises should be explicitly excluded via prompt engineering. All recommendations and information provided by the chatbot must be verified through trusted validation mechanisms.
  • Risk Frameworks such as the NIST AI RMF [1]: Such frameworks offer a structured approach to identifying, assessing, and mitigating potential risks associated with AI deployment.

Conclusion: Using Technology with Responsibility

Generative AI holds significant potential to enhance patient-facing healthcare applications—by increasing accessibility, improving comprehension, and empowering patients. However, this potential can only be realized if technological responsibility is taken seriously.

Developers, providers, and institutions must collaboratively and systematically assess risks, addressing them through thoughtful design, transparent communication, and appropriate regulation. In this context, Help Near You has taken meaningful steps to mitigate the risks identified during the project. As a result, the project has already delivered valuable insights that can inform future development efforts.

Ultimately, trust is the most essential currency in digital healthcare—and it can only be built through responsible innovation.

 

Helpnearyou Genai

Fig. 1: Categories of risks from language model-based, patient-facing health chatbots and possible countermeasures.

 

References

[1] Team NA. NIST AIRC – Playbook [Internet]. [zitiert 9. Dezember 2024]. Verfügbar unter: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook

Creative Commons Licence

AUTHOR: Kerstin Denecke

Prof. Dr Kerstin Denecke is Professor of Medical Informatics and Co-Head of the Institute of Patient-centred Digital Health at Bern University of Applied Sciences. Her research focusses on issues such as artificial intelligence and the risks and opportunities of digital healthcare solutions.

AUTHOR: Beatrice Kaufmann

Beatrice Kaufmann is an artistic-scientific assistant in the Institute of Design Research at the BFH University of the Arts. She leads the project "Talking Pictures" and is a collaborator in the interdisciplinary working group Health Care Communication Design HCCD.

AUTHOR: Denis Moser

Denis Moser is an assistant at the Institute for Patient-Centered Digital Health and is studying for a Master's degree in Medical Informatics at the FHNW.

Create PDF

Related Posts

None found

0 replies

Leave a Reply

Want to join the discussion?
Feel free to contribute!

Leave a Reply

Your email address will not be published. Required fields are marked *