Why usability testing does not guarantee a successful product

As someone who has taught usability and human-centred design for many years in various countries, I am of course pleased that the term usability is far better known today than it was just a few years ago, and that usability testing is now increasingly expected, if not demanded. But it is unrealistic to expect that successful usability testing guarantees a successful product. Why? A deep dive by our author and former BFH researcher.

The design of user-centred informatics is about (a) understanding and supporting actual user needs and (b) making that support as meaningful as possible, which can vary greatly depending on the context and the people involved. By user-oriented informatics I mean informatics systems with which people (employees, students, customers) interact by means of a “user interface” of whatever kind. According to this definition, IT systems without such interactions are not user-oriented IT.

The principles of successful interactions

According to current knowledge, there are just five fundamental types of interaction that describe how people interact with an informatics system: instructing, conversing, manipulating, exploring and responding (Sharp et al 2019, Lueg et al 2019). Equally well known are the fundamental principles of successful interactions (Nielsen 1994), which have proven applicable to virtually all use situations. Given how tangible these findings are, it may sound as if it hardly matters whether we are designing a user-oriented informatics system for an office, a hospital or the timber industry, since the basic interaction mechanisms and principles are well known. In fact, the opposite is closer to the truth. For one thing, these findings are descriptions, not prescriptions. Interaction types, for example, describe how we interact, but they do not prescribe how the corresponding interaction types are implemented on the technical level. The conversing interaction type, for instance, can be based on text input, as is common with chatbots; it could just as well be based on Morse code, speech recognition or an optical system for interpreting sign language. In the same way, Nielsen’s principles of successful interactions do not describe a way but a desirable state. A principle such as “the system status is easily recognisable at all times” describes the desired state, but not how to achieve it.
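To make the descriptive (not prescriptive) nature of interaction types concrete, here is a minimal sketch with entirely hypothetical dialogue logic and function names: the same conversing interaction is served by two different input channels, a text front end and a Morse-code front end, without the interaction type dictating either.

```python
# Minimal sketch: the "conversing" interaction type does not prescribe
# the input channel. Two front ends feed the same toy dialogue logic.

def dialogue_step(utterance: str) -> str:
    """Toy dialogue logic, shared by all input channels."""
    if "help" in utterance.lower():
        return "How can I assist you?"
    return "Sorry, I did not understand that."

def text_frontend(typed: str) -> str:
    """Text input, as is common with chatbots."""
    return dialogue_step(typed)

# Just the Morse symbols needed for this example.
MORSE = {"....": "h", ".": "e", ".-..": "l", ".--.": "p"}

def morse_frontend(code: str) -> str:
    """Hypothetical Morse-code input: decode first, then converse as usual."""
    word = "".join(MORSE.get(symbol, "?") for symbol in code.split())
    return dialogue_step(word)
```

Both front ends realise the same conversing interaction: `text_frontend("I need help")` and `morse_frontend(".... . .-.. .--.")` produce the same reply, which is exactly the sense in which the interaction type describes rather than prescribes.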

What belongs to usability

What does this have to do with usability testing? Usability is a property of a product that can be characterised and, to a certain extent, quantified on the basis of various criteria. Widely accepted attributes are learnability, efficiency, memorability, errors and satisfaction (Nielsen 2012). After a usability test, one knows to what extent a user-oriented IT product corresponds to the current state of usability knowledge. Identified weaknesses can then be corrected, whereby one should not forget (as, in my experience, unfortunately happens quite often) that the adaptations must themselves be tested for usability. Why, then, does usability testing with appropriate improvements still not guarantee a successful product? As already mentioned, it is about the users and their needs. The often underestimated challenge is that even experienced developers tend to view an application situation through their own knowledge and horizon of experience. This is deeply human and is sometimes described by the acronym PLU (“People Like Us”). But, as I said, it is not about one’s own (ultimately projected) needs; it is about the knowledge and experience of those whose needs one is developing for.
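The quantifiable side of those criteria can be sketched in a few lines; the session data and the chosen measures below (completion rate and error counts for the error criterion, task time for efficiency, a 1–5 rating for satisfaction) are purely hypothetical.

```python
from statistics import mean

# Hypothetical measurements from three usability-test sessions.
sessions = [
    {"task_seconds": 42.0, "errors": 1, "completed": True,  "satisfaction": 4},
    {"task_seconds": 65.0, "errors": 3, "completed": False, "satisfaction": 2},
    {"task_seconds": 38.0, "errors": 0, "completed": True,  "satisfaction": 5},
]

def summarise(sessions: list) -> dict:
    """Aggregate a few common usability measures across test sessions."""
    return {
        "completion_rate": sum(s["completed"] for s in sessions) / len(sessions),
        "mean_task_seconds": mean(s["task_seconds"] for s in sessions),
        "mean_errors": mean(s["errors"] for s in sessions),
        "mean_satisfaction": mean(s["satisfaction"] for s in sessions),  # 1-5 scale
    }
```

Such numbers tell us how well a product matches current usability knowledge; as argued here, they say nothing by themselves about whether the product meets real needs.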

Uniting two points of view

There is a mountain of literature on the subject that I cannot detail here for lack of space. Nielsen himself summarises it very briefly: usefulness, i.e. real-world value, includes both usability and utility, where utility describes the relevance to the use case: Usefulness = Usability + Utility (Nielsen 2012). Thematically, then, we are in the realm of Human Centered Design, also known as Interaction Design (Sharp et al 2019): iterative and participatory development in which the voices of the users are taken into account from the beginning and repeatedly along the way. The systematic involvement of users ensures that one does not concentrate too much on certain aspects while neglecting others. Human Centered Design is an approach that opens up solution spaces and explores them curiously but knowingly, and in which conceivable solution approaches are routinely evaluated with prototypes: discovering requirements, designing alternatives, prototyping alternative designs, evaluating the product and its user experience throughout, and, if necessary, rinse and repeat. Usability testing can of course be part of a human-centred design process, but it cannot replace that process. I would like to use three simplified examples to illustrate briefly why the results of successful usability testing can easily be overinterpreted:

  1. The first example is a usability test of a blood pressure monitor for home use, which medical informatics students examined as part of a user-centred design class. The usability testing showed that the usability of the device in and of itself was quite good. However, the knowledgeable testing also showed that the device could deliver incorrect data without it being apparent that it had not been used correctly. In this case there was a kind of meta-usability level whose weaknesses the students were able to identify on the basis of their expertise and verify with the help of a professional blood pressure measuring device.
  2. The second example is data-entry software meant to be used under extreme conditions. Extreme in this case means constant noise and unpredictable horizontal and vertical shocks such as those found in vehicles, ships and helicopters, so that uncontrollable vehicle movements can collide with intended data-input movements. Usability testing conducted under typically calm laboratory conditions might confirm the logic of the software to some extent, but without major effort it could evaluate actual usefulness under real conditions only to a very limited degree.
  3. The third example is a mobile navigation aid meant to enable people to find their way in buildings they do not know, such as hospitals. From a human-centred design perspective, the first question would be who would benefit most from such a navigation aid. The likely answer is that the target persons are rather insecure, perhaps reluctant to ask for help, and possibly dealing with cognitive weaknesses and/or visual impairments. The more users would have to rely on the app, the more reliable it would have to be, which is a formidable challenge. So the question would be whether a smartphone app could deliver this at all, both from a technical and an ethical perspective. Another question would be how such an offer of help could be designed differently. Would better signage help? What are the current weaknesses of the signage? Would human helpers be conceivable, as they are used at some tourist destinations, or could human-supervised robo-guides be used, where human helpers can intervene if necessary and which should therefore be much more reliable?

Finally, I would like to emphasise once again that I am genuinely pleased that the term usability is now on everyone’s lips. Amid all this joy, however, we must not forget that usability testing is not a shortcut that lets us dispense with real human-centred design. Successful IT developments that support real needs in an appropriate way and are accepted by their users are typically the result of demanding and sometimes lengthy Human Centered Design work.


References

Krug, S. (2010). Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems. New Riders.

Lueg, C., Banks, B., Michalek, J., Dimsey, J., Oswin, D. (2019). Close Encounters of the 5th Kind: Recognizing System-Initiated Engagement as Interaction Type. Journal of the Association for Information Science and Technology, 70(6):634–637. Wiley. https://doi.org/10.1002/asi.24136

Nielsen, J. (1994). 10 Usability Heuristics for User Interface Design (updated 2020). https://www.nngroup.com/articles/ten-usability-heuristics/

Nielsen, J. (2012). Usability 101: Introduction to Usability. https://www.nngroup.com/articles/usability-101-introduction-to-usability/

Sharp, H., Preece, J., Rogers, Y. (2019). Interaction Design: Beyond Human-Computer Interaction. 5th Edition. Wiley.

Creative Commons Licence

AUTHOR: Christopher P. Lueg

Christopher Lueg is a professor at the University of Illinois. Before that, he was a professor of medical informatics at BFH Technik & Informatik. He has made it his mission to rid the world of bad user interfaces. He has taught Human Centered Design and Interaction Design for more than a decade at universities in Switzerland, Australia and the USA.
