Tag archive for: digital technology

Interview mit Harold Thimbleby: «The quality of health IT has become very poor»

Healthcare has numerous problems, from rising costs to preventable error, which has recently been shown to be a killer comparable to diseases like cancer. Growing numbers of older people, obesity, diabetes and other conditions add to the pressure. Everyone agrees that IT can lead to improvement in healthcare, but will IT really make things better, or will it contribute to more error and more problems? What should we do? We talked with the IT specialist Harold Thimbleby about these questions.

Friederike Thilo: On your website you state, “Even the lowest estimates put preventable error as a much bigger killer than accidents; healthcare is the most dangerous industry!” What is your personal experience from navigating the healthcare system?

Dr. Harold Thimbleby: My father died in hospital as a result of a preventable error in 2014. The problems escalated into me making a formal, lengthy written complaint, and the response was not very helpful. I realized that my long list of complicated problems had not helped the hospital reply constructively to the core problem! So, with my wife, I made a very short video (3 minutes) about the key issue I really wanted to be addressed and that I felt could realistically be addressed. I sent the short story on a DVD to the Board of Directors of the hospital for them to watch. The real work was working out what I wanted to say, and staying focused on a point that would communicate my message.

My story had an immediate effect and changed practice, and I was asked to give some training on human factors. I have since heard that the hospital has had an increase in reports of system problems on their computerized incident reporting system. So if they start fixing these systematic problems, everybody will benefit.

What is going wrong in healthcare regarding patient safety in general?

When failures happen everything has gone wrong — this is the basic lesson of the Swiss Cheese model. Yet we like to blame the person at the sharp end, as this seems a far simpler solution than fixing the system.
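The Swiss Cheese model's point can be made concrete with a little arithmetic: each defensive layer stops most errors on its own, and harm reaches the patient only when the holes in every layer line up at once. A minimal sketch, with layer names and per-layer failure probabilities invented purely for illustration:

```python
# Illustrative Swiss Cheese model: an error reaches the patient only
# when it slips through every defensive layer at once.
# Layer names and probabilities are invented examples, not real data.
layers = {
    "prescribing check": 0.05,   # 5% chance this layer misses the error
    "pharmacy review":   0.10,
    "device interlock":  0.20,
    "bedside check":     0.15,
}

# Assuming independent layers, multiply the miss probabilities.
p_harm = 1.0
for name, p_miss in layers.items():
    p_harm *= p_miss

print(f"P(error passes all layers) = {p_harm:.5f}")
```

Even with these deliberately pessimistic layers, harm needs a simultaneous failure of all four, which is why blaming only the last person in the chain misses most of what went wrong.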

Sometimes, very rarely in fact, this may be the right thing to do. But almost always, blaming a person underplays all of the other failures: failures in IT design, failures in management practice, failures in training, overwork, and all sorts of other things that contributed to the incident. There is a very good rule: if somebody is accused of making a mistake, but somebody else could have made the same mistake under the same circumstances, then it isn't their fault.

Where do you believe the problems arise?

Although blaming individuals is deceptively simple and easy, it means we overlook the reasons why those individuals got caught up in the problem. Often, there have been near misses for weeks, but nobody is reporting them or doing anything about them. Then one day, just by chance, one too many things goes wrong, and a patient is harmed. There was not enough spare capacity to sort out or avoid the problem; the team was not working well together; or whatever. But (usually) the people in the room on the day it goes wrong did not cause the problem; they just made it visible. The path to the problem had already been laid by poor design, poor training, or poor workload management. Not listening to the problems that are caught and worked around every day without causing any patient harm misses important chances to learn before it is too late.

I like Charles «Chick» Perrow's book «Normal Accidents». He argues that accidents happen a lot but go unnoticed; sometimes, though, they turn into catastrophes, which of course are noticed. If we only study catastrophes, we miss all the learning opportunities from accidents. Worse, what turns an accident into a catastrophe is usually just bad luck, from which we can learn very little. Perrow's "accidents" are, in other words, "near misses" that could expose how the system is failing.

Who is to blame?

We all are! We buy newspapers, consume news, stimulate the naïve story that individuals are to blame, and perpetuate the "bad apple" theory. We buy the news that peddles the "the nurse turned into a witch and betrayed us all" nonsense. Lawyers want to sue people. Hospitals want to show they are solving problems, by disciplining staff or sacking them. Insurance companies find it easier. Manufacturers deny liability using all sorts of legal tricks. For example, CE marks, which are required for any European product, protect manufacturers from many liabilities, reducing the incentive to make better products. Often manufacturers require hospitals to indemnify them from liability (in so-called hold harmless clauses), or they require "professional clinical judgement", which means that problems must be checked by clinicians; this sounds sensible until you realise that it shields IT errors at the same time. If you agree in writing that a professional clinician should not make mistakes, then they should not make mistakes even if the IT is wrong, so the manufacturer is legally protected if the IT has problems.

And hospital procurement is so keen on the benefits of the products that it very rarely argues against such restrictive clauses.

This seems to be widespread not only in the healthcare sector…

I agree, our whole culture likes to blame people; we understand these "personal" stories. Blaming, and hoping to fix, big complicated systems that nobody understands is more important, but harder. Indeed, in the UK the problems run all the way down to the Criminal Justice Act, which says that IT is presumed to work correctly. (When have you had IT work properly for any length of time?) If you follow the Act's logic, then any problems must be the users'! This law (and it isn't the only one) protects manufacturers and hospitals, and misdirects attention to the users (usually nurses) rather than to the systems and the design of IT.

One area of blame nobody likes is that simply being excited by IT itself causes problems. The UK, for example, had a massive project called the National Programme for IT (NPfIT), which was supposed to update all of the UK's healthcare IT. But it was a massive, extraordinarily expensive failure, wasting billions of pounds. I like to argue that if somebody said they had a breakthrough treatment for cancer, you'd ask for proper evidence before treating everyone. Why, then, do we spend so much on IT, which is, after all, a treatment or medical intervention, with no evidence that the treatment works? Why is nobody doing serious experiments to find out? I've got some answers to this, which I'll explore below.

What do you think: is the healthcare system in Switzerland a safe haven for patients?

I’ve visited Swiss hospitals, but never as a patient. So far as I can see, Switzerland is not much different to other western healthcare systems. In particular, it (like everyone else) thinks new IT will be part of the solution to the usual problems — but this is a superficial argument. IT is also part of the problem, and it needs improving.

Which healthcare system in the world is doing the best job with regard to patient safety?

Sitting here in the UK, with Brexit and four nations (England, Wales, Scotland, Northern Ireland), I think nationalism is a big mistake. Patients cross borders, and diseases certainly don’t respect borders. We need more international collaboration to learn what is best. And what’s best in pediatrics and oncology and health informatics etc. in each country will be different — and we need to work out how to work together. Standards are needed. Regulations are needed. There is a lot to learn from the openness of the USA (e.g., MAUDE) but also a lot to learn about the problems of special interest groups (“The USA has the best democracy that money can buy”). Without collaboration, the IT we use will be incompatible — and patients and staff will suffer.

The digitalization of the healthcare system is strongly welcomed by different stakeholders, including patients. However, according to your publications, it seems that digitalization is threatening patients' lives. You emphasize that one main problem is the user interface between medical devices and systems and the health professionals and patients who use them. What are the problems, and with which kinds of medical devices and systems?

There are closely related problems, I think. First, we all live in a world where computers (wifi, cloud, Amazon, Facebook and so on) are wonderful and we all want more. But none of the stuff we want was designed for healthcare, so there is a very real danger that our excitement for IT drives our desires for healthcare, and that applies whether we are patients, nurses, anaesthetists or procurement or health IT Directors of big hospitals.

Then there is "success bias." Obviously, Facebook and Amazon and eBay and so on are hugely successful, and we'd like nice systems like that in healthcare. But we don't see all of the thousands of failed IT ideas that haven't made it to the big time. So we think IT is wonderful, but it's hard to think about the chances of it being wonderful. If there are 999 failed companies for each Amazon, then the chance that we can build a healthcare IT system that works is around 1 in 1,000. And that's forgetting that Amazon has at least 1,000 programmers working for it. How many work on your pet health IT project? Three? How good are they? It's not going to turn out well.
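The base-rate argument in that answer can be written out explicitly. Using the interview's own rhetorical figures (999 failures per Amazon-scale success; these are not real statistics), the prior probability that any given health-IT project succeeds at that scale is:

```python
# Base-rate sketch using the interview's rhetorical figures (not real data).
successes = 1      # one Amazon-scale success...
failures = 999     # ...per 999 failed IT companies we never hear about
p_success = successes / (successes + failures)
print(f"Prior P(health-IT project succeeds at that scale) = {p_success:.1%}")
```

Success bias means we only ever see the numerator; the 999 invisible failures in the denominator are exactly what this calculation forces back into view.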

Why do you think so?

Because we are all so uncritical, the quality of health IT has become very poor. Our regulations and legislation don't help. To be more positive, I think IT can tell us a lot about healthcare. Computers cannot do things that are impossible, and just computerizing healthcare therefore makes existing problems and confusion very apparent, if we only look. We should not just do what people want; we should explore how to make healthcare more effective and computerize something that works. For example, the calls for "interoperability" tell us a lot, not so much about how badly designed computers have been, but about how divergent the healthcare practices are that allowed inconsistent computer systems to take hold in the first place. Computers can help poor systems run faster, and they can audit and monitor them with ease, but being more efficient at a poorly structured job is not as useful as doing a good job. So, yes, I hope that more computer scientists will get proactively engaged in healthcare.

On your web site and in some of your articles on healthcare, you talk about attribute substitution. What does it mean?

When we have a problem to solve, we look for "attributes," the key features of the issues we have to understand. For example, interviewing someone for a new job is a familiar, hard problem: it is hard work to fairly assess each new candidate in a few minutes. So, rather than doing the hard work, it is tempting just to decide that we would like to employ the nice-looking candidate. For most jobs, what somebody looks like is hardly relevant, so in falling for this temptation we have substituted the simple attribute of "nice-looking," which is very quick and easy to assess, for the much harder attribute of "how well can they do the job?" Attribute substitution happens in this case when we confuse "they look nice" with "they are nice" (which is very hard to assess correctly) and hence think they can do the job. It is not surprising that rigorous interviews carefully follow objective evaluation criteria and checklists to try to manage these unconscious biases.

Attribute substitution also happens when we buy hospital IT. We know stuff like tablets and phablets, clouds and blockchains look very good, so we tend to think these things are good. This seems such a quick and natural decision that we fall into believing it before questioning how valid it is. Of course, whether such stuff is any good for clinical use in a hospital requires assessing much more complex attributes than whether we just like it and would want one for ourselves!

Indeed, I keep asking, where is the evidence that all this innovation in IT actually helps healthcare or patient outcomes? Such thinking would be unacceptable for pharmaceutical innovations — we would rightly demand evidence, based on rigorous experiments such as randomized controlled trials. We would ask about doses and side effects. Too many people are too excited by IT to ask — they are substituting the seductive attributes of new and exciting for the formal attributes of effectiveness and safety.

Elsewhere you talk about cognitive dissonance. Can you tell us what you mean by this?

Cognitive dissonance is an academic way of saying we can hold conflicting ideas, which is uncomfortable, so we will usually find ways out of the conflict. For example, perhaps I smoke. But smoking is bad. But I am not bad. So these thoughts are in conflict in me. One solution is to say "I like smoking." Another example might be: I put a lot of work into learning a computer system. I could think that all my work was caused by the bad design of the computer system (which would mean I wasted my time learning it). But I'm not stupid; I like to believe I do not waste my time! I could resolve this conflict by convincing myself that "this is a wonderful computer system and everyone should use it." In fact, once everyone else starts to use it, I will become indispensable. That's almost as crazy as a smoker saying the benefit is that they get to talk with all the other smokers outside the building.

You called your suggested solution to tackle errors related to user interfaces “techealth.” What do you mean by techealth?

When we give something a name, we can start to think about it intentionally, and we can point out (in any team we work with) that we need to think about it, because we can call it by name. If we just think we can buy new healthcare IT and everything will get better, then we are missing out on the deeper problems we are trying to solve.

So by techealth I mean: this is the name for the work that needs doing to ensure healthcare and IT (and other technologies) work well together.

What kind of research project epitomizes how techealth might be addressed or realized?

CHI+MED is a great place to start. They have produced two great booklets that can be downloaded: a summary of their findings, and a manifesto for improvement.

Improving safety of medical devices and systems is key to making healthcare a less dangerous “industry” for patients. What are the duties and tasks of a) researchers, b) clinicians, c) healthcare managers, d) politicians and e) patients?

What are the duties of all of us? I think we need to accept that IT is very complicated and often inadequate for the task. In the consumer world, it is fine when companies sell us a dream, and then a year later sell us another dream. Like everyone else, I want the latest mobile phone or watch too! But our private addiction to consuming the latest IT is no guide at all for what we need in healthcare.

Because our addiction to IT is expensive, we risk cognitive dissonance: we justify to ourselves our high spending by convincing ourselves how wise we are buying the latest stuff. But that is hardly a good reason to get the latest stuff into a hospital. So, our duty: we must all think more clearly. And put techealth on our agenda.

And finally, tell us about your area of research, or about something you think our readers would be interested in.

I am fascinated by the collision of human nature and computer nature. For example, because of workload or whatever, humans make errors. By our nature, we don't notice errors as they happen; if we did, we would have avoided them. On the other hand, computers are programmed, and in principle we can design computers to manage error. For example, infusion pumps have been around for years and we have a really good idea about how they are used, and therefore we know how to program them so that they can be safer.

By understanding how they are really used, we have programmed better infusion pumps in our labs, and we have reduced error rates. So that's our lab research: improving IT to make things safer. But my "meta-research" is to understand why so few people want to improve their IT. There are lots of answers there, but perhaps the one we should prioritise first is that hospital procurement should aim to improve safety rather than just save money (and of course, safety will save money in the long run).
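One concrete way a pump's software can "manage error" in this sense is to reject malformed number entry outright instead of silently guessing what the user meant. The sketch below is a hypothetical illustration of that design principle; it is not the actual firmware or code from any lab or vendor:

```python
import re

# Sketch of defensive number entry for an infusion-pump keypad.
# Rather than silently "repairing" malformed input (e.g. reading "1..5"
# as 15), the device rejects it and asks the user to re-key the dose.
# Purely illustrative: names, limits and the regex are invented here.

VALID_DOSE = re.compile(r"^\d+(\.\d+)?$")  # e.g. "1.5", "100"

def parse_dose(keyed: str, max_rate: float = 999.0):
    """Return the keyed dose as a float, or None if it must be re-entered."""
    if not VALID_DOSE.match(keyed):
        return None            # malformed ("1..5", ".5.", ""): re-enter
    value = float(keyed)
    if value > max_rate:
        return None            # out of range for this pump: re-enter
    return value

assert parse_dose("1.5") == 1.5
assert parse_dose("1..5") is None   # never guess 15 or 1.5
assert parse_dose("1500") is None   # exceeds the configured maximum rate
```

Refusing "1..5" rather than quietly repairing it matters because a silent mis-parse can turn an intended 1.5 into 15, a tenfold overdose.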

My wife, Prue Thimbleby, works in arts in healthcare, and she does an inspiring job of helping people, patients and staff alike, tell powerful stories. In fact, the arts are central to changing attitudes, and I think it is only through applying the arts that we will change the world. Underlying that, it is only through science that we will have a clear idea of the better world we want, and of whether, and if so how, we are fooling ourselves before we get there. We certainly need more science behind IT in healthcare, and we need more art to share best practice as we discover it.


Harold Thimbleby is an Honorary Fellow of the Royal College of Physicians; a Fellow of the Royal College of Physicians of Edinburgh, the Institution of Engineering and Technology, and the Learned Society of Wales; and an Honorary Fellow of the Royal Society of Arts. His passion is to improve healthcare by improving IT. He is an internationally respected computer scientist who has won many awards and prizes, and a well-known speaker who has been invited to speak on these issues in over 30 countries.


Harold Thimbleby, “Trust me, I’m a computer,” Future Hospital Journal, in press, 2017.

Harold Thimbleby, “Improve IT. Improve Healthcare,” IEEE Computer, pages 40–45, June 2017.

Creative Commons Licence


Is the Digital City Inhuman?

The relationship between humans and technology is as old as human history itself. Technical innovations have contributed to our material prosperity and regularly trigger euphoria. But they also demand that people adapt, and therefore also meet with resistance. The Digital Revolution is considered a quiet one, yet its effects call even our human self-understanding into question.

Luppicini (2012) describes digitalization as changing something fundamental, because we as humans are no longer engaged in a "confrontation with technology" or in a process of "deciding for or against it". Rather, digitalization dissolves the duality of human and technology. Technology, on the one hand, is becoming more human, while on the other hand humans are ever more tightly interwoven with technology. And perhaps it is precisely this growing intermingling that challenges us, often unconsciously. How can we define the core of being human in this hybrid world, in which "reality" has become a blurred concept? What constitutes a human being, and what distinguishes one from technology?

The concept of human identity is central to this question. Developmental psychology regards identity formation as a process that unfolds as a continuous interaction between person and environment and that has normally reached a certain stability by early to middle adulthood. The term "digital natives" suggests that, for younger generations, digital technologies are already an integral part of identity formation. But it also implies that this does not hold for older people: they were shaped far more strongly by an analogue environment and generally perceive digital technologies not as part of their identity but still very much as a counterpart, towards which they can and want to take a conscious stance. In this respect, older people's approach to digital technology is often more analytical and less intuitive. The fears about digital technology mentioned above are therefore often more pronounced among older people than among younger ones.

It has become increasingly difficult to have experiences that are "technology-free" (Croon Fors, 2013). Consider the ubiquitous phenomenon of smartphone cameras, which supposedly allow us to capture our experiences. Yet what makes an experience unique can, for now, only inadequately be digitized. The snapshots do not really bring back the reality of what we experienced. And here lies another source of human unease in the digital world: the observation that digital images are taking the place of real, and often socially shared, experience, and that through this "substitution" something essentially human is lost.

Added to this is a further fundamental problem of digitalization: the lack of trustworthiness. While in interpersonal contact we have a multitude of the finest antennae with which to judge a person's trustworthiness, in the digital world this remains an enormous challenge. The problem is aggravated by the fact that the data hegemony, and the associated power of a few global corporations, is indeed deeply worrying.

But the alternative of standing aside digitally has long since become an illusion; the digital revolution is far too advanced for that. Even the most die-hard reader of analogue newspapers may now encounter texts that are purely the product of algorithms (van Dalen, 2012). To put it simply: the question is not whether to digitalize but how. For the city of the future it is therefore crucial that digital technology also enables, or at least facilitates, the activities that constitute our humanity. These include interpersonal encounters, conversation, and the experience and expression of emotion. Being human also includes activities that are not simply rational and purpose-driven, yet are often deeply meaningful: play, contemplation and artistic creation.

In this regard, Allwinkle & Cruickshank (2011) stress the difference between intelligent cities and smart cities. Intelligent cities display a great deal of innovation but involve people very little. On the contrary: Vanolo (2013) even voices the concern that the citizens of the intelligent city are "pacified", leaving government, administration and business in peace because digitalization makes their lives "comfortable". The price of this comfort is the unquestioned, data-driven control of citizens and a loss of accessibility of public authorities and administration.

In a smart city, by contrast, digital innovations are actually implemented close to the citizen. Precisely this implementation remains a major task for politics and administration as well as for applied research. The city of the future conceived in this way does not overflow with digital services delivered to passively receiving inhabitants. Rather, it is a place that encourages the most diverse citizens to actively shape their own living environment. In such a city, digitalization remains a means to a good end.


  • Allwinkle, S., & Cruickshank, P. (2011). Creating Smart-er Cities: An Overview. Journal of Urban Technology, 18(2), 1–16.
  • Croon Fors, A. (2013). The Ontology of the Subject in Digitalization. In R. Luppicini (Ed.), Handbook of Research on Technoself: Identity in a Technological Society (pp. 45–63). Hershey, PA: IGI Global.
  • Luppicini, R. (2012). The Emerging Field of Technoself Studies (TSS). In R. Luppicini (Ed.), Handbook of Research on Technoself: Identity in a Technological Society (pp. 1–25). Hershey, PA: IGI Global.
  • Van Dalen, A. (2012). The Algorithms behind the Headlines. Journalism Practice, 6(5–6).
  • Vanolo, A. (2013). Smartmentality: The Smart City as Disciplinary Strategy. Urban Studies, 42098013494427.
