What artificial intelligence is already doing in healthcare today
AI could extend the healthy life years of many people. Yet technological progress often fails to make it into practical implementation. We are researching how to overcome these obstacles.
Everyone is talking about generative artificial intelligence (ChatGPT & Co.). Its achievements are impressive: ChatGPT can explain complex concepts accurately and understandably at the same time, including how it works itself. However, generative AI does not engage in rational reasoning when it explains something. In April, for example, ChatGPT did not yet grasp the concept of gender: when asked for female authors in male-dominated fields of knowledge, it mostly suggested male ones.
Such deficits are not surprising. Generative AI does not produce cognitive inferences, but artefacts – texts, images, code – that are meant to please and function situationally. Art creators, for example, use it as a disruptor that provokes their creativity. It then functions precisely by producing errors. But it also helps in many other ways. It takes work off the hands of artists and makes new art practices possible. This too is not surprising. In the past, technical progress has always made new things in art possible. Haydn’s E-flat Major Trumpet Concerto is a famous example of how even short-lived technical innovations – in the case of the keyed trumpet – can have long-lasting effects.
For many technical tasks, on the other hand, you need an AI that can do the right thing, not just deliver something pleasant and functional – an AI that you can rely on in routine operation. One can imagine ChatGPT in the series “Dr. House”, because the series depicts the search for solutions under extreme circumstances, where thought-provoking ideas matter. In routine hospital operations, by contrast – which in Dr. House simply function flawlessly as a matter of course – creative mistakes cannot be afforded. That is why ChatGPT is, for the time being, a no-go there. For routine health care, we need a different kind of AI. This has existed for some time at a high level of performance: an AI that improves human performance in a narrowly defined task area for which it has been specially trained.
Fig: d-Health 2023 (credit: Reinhard Riedl)
Reliable AI with a narrow working focus
In recent years, there have been hundreds, even thousands of more or less successful experiments with AI for narrowly focused tasks in healthcare – more accurate diagnosis and warnings of individual risks:
- an elderly person has fallen,
- a patient has cancer,
- a patient is about to collapse in the next few minutes,
- a side effect that is extremely rare on average is very likely in a specific case,
- the risk of a mental crisis is high in the coming night.
AI can already provide this and other information in a laboratory context with high precision and reliability.
Often these are predictions: the aim is to diagnose not only the specific manifestation of a disease with its statistically probable course of healing, but also the probable course of healing in the specific patient. Whether the AI is a real machine intelligence – i.e. what most people today imagine a “real AI” to be – or actually a Big Data-based classification is unimportant. The boundaries are fluid anyway. The important thing is that the predictions are as accurate as possible as often as possible. And surprisingly often, they are.
AI is always a promising option when there are many documented correct decisions – that is, when there is training data in a clearly delimited decision context on which the AI can learn to make correct decisions. The AI ingests this data and configures itself with it so that it can subsequently make good decisions of its own. It often outperforms human decision-makers. However, the combination of AI and human decision-makers is almost always better still than the AI alone.
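The idea of an AI that configures itself from documented correct decisions can be pictured as a plain supervised classifier. The sketch below uses entirely synthetic data and hypothetical feature names; it is meant only to illustrate the train-then-decide pattern described above, not any particular medical system:

```python
# Sketch: a narrow decision-support model trained on documented past
# decisions. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-case features (e.g. age, heart rate, a lab value).
X = rng.normal(size=(1000, 3))
# "Documented correct decisions": a synthetic decision rule plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model now reproduces the documented decision pattern on unseen cases.
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the delimited decision context: the model can only be as good as the documented decisions it was trained on, which is why the human-plus-AI combination mentioned above remains important.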
However, the differences are great: much depends on the organ, the quality of the data source and the medical task.
- When it comes to filtering out false alarms in intensive care, despite great progress, we have not yet reached the point where AI can be given a role.
- There are fundamental concerns about warnings of mental illness from consumer gadgets. In some diagnostic situations, however, medical services could be significantly improved and made cheaper – higher quality at lower cost.
- And simple forms of AI have already arrived in GP practices – but only in a few practices for now.
To be or not to be, why is this a question?
This raises the central question: Why do we hear so little concrete about the use of AI in healthcare?
- The first answer is: unspectacular forms of use attract little interest. For practitioners, AI is too much of a marginal phenomenon to be discussed in an organised way. For the masterminds of digitalisation, on the other hand, the current forms of use are too banal, too everyday and probably too concretely practical.
- The second answer is that the transfer of AI in healthcare from the laboratory to clinical research really is very slow. An objectively complex situation is the main cause. Diffuse fears and a lack of (or even negative) financial incentives block progress on the user side; some solution developers create additional problems with non-transparent communication and an overemphasis on the technology; sales logic gets in the way on the provider side; and public administration and politics get tangled up in multi-stakeholder management. Basically, there is little willingness to see the introduction of AI in healthcare as a transformation process that goes hand in hand with major cultural changes.
An almost flawless AI? That is already possible today!
A frequently cited reason for the blockage is fear of liability. Legal scholarship has not yet done its homework here. Technically, human–AI cooperation can in many cases be designed so that the machine hardly makes any mistakes. It typically prepares the diagnosis by eliminating all irrelevant data. This elimination can be done very reliably. Among other things, it makes work in image diagnostics much easier and reduces the probability that a diseased area will be overlooked.
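The reliable elimination of irrelevant data can be pictured as a deliberately conservative pre-filter: a model assigns each image region a relevance score, and only regions the model is very confident about are discarded, so the risk of eliminating a diseased area stays small. A minimal sketch with invented, synthetic scores (no real diagnostic model is involved):

```python
# Sketch: conservative pre-filtering of image regions by relevance.
# Scores are synthetic stand-ins for a trained model's output.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relevance scores for 10x10 image regions
# (0 = clearly irrelevant, 1 = possibly diseased).
relevance = rng.random((10, 10))

# Very low threshold: discard only regions the model is near-certain
# about, so a diseased area is unlikely to be removed by mistake.
THRESHOLD = 0.05
keep_mask = relevance >= THRESHOLD

discarded = int((~keep_mask).sum())
print(f"regions removed from review: {discarded} of {keep_mask.size}")
```

The design choice here is asymmetry: a false "keep" only costs the radiologist time, while a false "discard" could hide a finding, which is why the threshold is set so low.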
The use of AI for customised prevention is also relatively low-risk. On the one hand, a comprehensively healthy life is unrealistic for many people, and they need advice on which healthy habits will pay off for them. On the other hand, a comprehensively healthy life is not enough if individual risks are significantly elevated. Here, multimorbidity research can provide useful indications for concrete preventive measures. The actual measures must then be discussed between doctor and patient and consistently implemented by the latter.
Essentially, from a professional point of view, there are only two difficult sticking points – quality control for the entire utilisation process and the question of financing – and two real problems: Big Data and AI still barely feature in education, and integrating them into work processes is therefore often a major challenge.
Good cooperation with the specialists? Big big problem!
Even in cutting-edge medicine, progress is slow – former data science thought leaders lose their faith when they collaborate with mathematicians and computer scientists. Neither group can be led by outsiders. This means that the oncologist must engage with her mathematicians on an equal footing – and vice versa. They have to learn from each other. That is – still – difficult to imagine.
The example of the experimental oncologist who wonders whether the crazy theories of mathematics might make sense after all – e.g. random walks on networks for cancer diagnosis – is so far only anecdotally documented, yet it will be the norm rather than the exception in the future. The digital transformation, however, demands everything from leaders. They cannot control directly; they must be versed in multiple disciplines, must analyse shifts in value perspectives with ethnographic precision, and must intervene so skilfully that the appropriation of the digital tools happens by itself. Precise imprecision is just as important as social altruism and a great deal of curiosity.
Our research and engagement
Research at BFH is concerned with various aspects of digital health: firstly, the provision of data; secondly, value perspectives, appropriation practices, quality standards, control tools and narratives; and thirdly, leadership interventions in the complex ecosystem. We conduct qualitative empirical and design science research, focusing on systems modelling and enterprise architectures.
At the same time, we are directly involved in practical projects as scientific facilitators and organise workshops, symposia and stakeholder discussions – with partners and colleagues from the BFH (Health), Switzerland (including Spitex Bern, Sitic.org), Austria and the Scandinavian countries. The 5th Praevenire Digital Health Symposium in Vienna was particularly exciting, as was the exchange with Finns, Austrians and Germans on the EU regulation of the Health Data Space at IRIS in Salzburg. The Finns in particular, whose participants included the Chancellor of Justice, are quite a bit ahead of us in terms of experience with secondary data use. That is what makes them so interesting for us, because we want to learn from good practices worldwide.