The second part of the symposium at the Praevenire Health Days in Seitenstetten Abbey focused on augmented intelligence as an essential element in the further development of the health professions, and on the question of how the debate about digital health can be put on a more objective footing and how the underlying connections can be communicated more clearly.
Augmented Intelligence stands for the use of Artificial Intelligence (AI) to gather and interpret information and to support complex, fact-based decisions. The symposium once again made clear that the use of AI will be indispensable in the future, especially in cutting-edge medicine and hospital management, if people are to benefit as fully as possible from medical progress.
Quality of AI is crucial
Machines can process data that humans cannot handle: they can analyse very large amounts of data very quickly, and they do not make mistakes through fatigue or exhaustion. However, the quality of machine data processing depends not only on the available data and the choice of algorithms appropriate to the specific purpose, but also on the quality of the digital tool’s design, which has both technical and non-technical aspects. Even though some dangers, such as data-based discrimination, are now known to a wider public, critical aspects are often neglected in everyday research and development because the problem is reduced to isolated sub-problems and the expertise for an overall view of the interactions is lacking.
However diverse the successful applications of AI in diagnostic laboratory settings may be, AI has not yet truly found its way into everyday practice. This is partly because AI changes the roles of health professionals, and because it remains unclear how one can control an AI that one does not understand. Explainable AI is an intensively researched approach to this problem, but it will solve only part of it. The keynotes by Joachim Buhmann, Dietmar Meierhofer and Richard Greil provided essential impetus for a broader and at the same time deeper understanding of the opportunities and challenges of using AI in practice. AI has the potential to make healthcare much more effective: concretely, to treat patients in a more individually tailored way and to organise healthcare facilities better. The latter not only saves costs but also increases the quality of care. However, the keynotes also made clear that we are still at the very beginning of the topic of augmented intelligence and must approach it holistically.
Different needs and concerns
In the last part of the symposium, many communication challenges were addressed concretely and with commitment, and some topics from the previous discussions were explored in greater depth. These included, for example, presenting the concrete benefits of blended care, ELGA and the use of AI from the patients’ point of view (“what is the benefit for whom in which situation”), presenting success stories from abroad to convince stakeholders of feasibility (“how do good practices work”), and a possible rebranding, for example of ELGA. For almost all of these topics, the needs of different target groups must be met: scientists, doctors and patients perceive different aspects of the digital transformation, which is why the principle of “one communication fits all” does not work. Precisely because the symposium was so open and at times controversial, it became clear that there is a huge need for more knowledge, more critical debate, and more constructive and creative solutions. The latter, however, can only succeed if we communicate more effectively than before. As long as practical challenges, important concepts and technical solutions are known only in small circles, digital health will advance at a snail’s pace, because resistance to it is great.
Augmented Intelligence is more than dystopia
Digitalisation – and Augmented Intelligence in particular – is often branded a dystopia, especially on prominent occasions. There are powerful social forces fighting against the use of digital tools, and these forces are also present in important bodies and in academia. They receive much applause and are almost never contradicted, because such contradiction is unpopular. The acceptance of new digital solutions is therefore not a matter of course. It presupposes trust; trust presupposes trustworthiness; and trustworthiness in turn presupposes at least two things: compliance with one’s own principles, for example the protection of privacy – an important topic at the symposium – and comprehensible communication. If we want to win acceptance for new digital tools, we must make sure that people understand the intended goals, the challenges and the proposed solutions. It is not enough that digital health actually contributes to more healthy life years; people must also understand that this is so and why. We therefore need a good combination of fact-based communication and storytelling, and we need to address resistance to digital health proactively and publicly.