Digital sins – Part 2: Messiness in digital management
Part 1 of this series addressed how we tend to focus too much on trivialities and fail to discuss how digital transformation can actually change the world for the better. Part 2 looks at the other side of the coin: when implementing digital transformation, we often ignore essential aspects because we consider them trivial.
There are many forms of sloppiness. But digitality exudes something thoroughly orderly, as it consists of ordered sequences of zeros and ones. What could be sloppy about that?
An illustrative review: Swiss e-government
Imagine driving a car from a famous brand that brakes and comes to a standstill every time you press the accelerator. How does that feel? This is exactly what happened with Swiss e-government in the past. Attempts to speed up its development repeatedly ended in further slowdowns.
In order to speed things up, scientific support for key projects was scrapped – “because science slows everything down and we need to move forward quickly” – including a project that is still not complete 20 years later. And instead of solving the problems in a technically sound manner, they were solved by denying them or delegating them to big-name organisations.
What some big-name organisations did in the process is as bizarre as it is instructive. Among the flashes of inspiration were, among other things, providing address verification via credit card companies or reusing military airport technology for e-government. But even renowned experts made original contributions that presented good concepts in the wrong context.
Often, the problem was excessive idealism: one example of this is the now-closed business process exchange platform, which failed because the search for a business model only began after the fact. Or there was too much trust in self-organisation: Switzerland produced a large number of e-government standards that are rarely used. One mistake made in the eagerness to standardise was to rely on tried and tested procedures and practices from the legislative branch (motions and consultations). Unfortunately, there are major differences between laws and standards: standards can be ignored, laws cannot – and the benefits of laws are usually much more immediate than those of standards. Effective standardisation can draw on experience from the legislative branch, but should focus on those cases where laws primarily serve as enablers.
The distinction between legislative practices in general and legislative practices in specific areas may seem academic, and it is unknown to many legal experts, but it makes a big difference in terms of impact.
In fact, the sloppy handling of digitality is often idealistic, pragmatic and experience-based – and often comes across as highly orderly. People who want to serve the common good rely on inappropriate heuristics. This only becomes a real problem when critical discourse is lacking. And that is almost always the case.
The result is usually poor design: the quality of the approach, the product and the user experience are all lacking. Sometimes this is intentional (partly for good, partly for bad), sometimes it is a form of misguided do-goodism. But there is also a third cause: ideological bias.
Ignoring the perspectives of other disciplines
Of course, yes – yes, of course. And no – of course not, really!
Everyone is in favour of interdisciplinarity (or – even more modern – transdisciplinarity), just not in practice! And there are good reasons for this: people are in favour of it because being in favour of interdisciplinarity is the cultural norm. In practice, however, it is negated because it is expensive, intellectually demanding and detrimental to individual credibility and professional success, and because it is regularly penalised in objective evaluation procedures. Exceptions? Yes, there are more and more of them, but overall they remain marginal.
It is, of course, perfectly acceptable for people to limit themselves to their own discipline and ignore others – even when it comes to digitalisation. Software developers are allowed to ignore the area of application, the legal situation, the economic aspects, the cultural aspects, ethics and sustainability. Lawyers are allowed to regulate technology without understanding it. Managers are allowed to set unachievable IT targets or formulate governance that is not technically feasible. And none of the above need concern themselves with cultural aspects. No one can be blamed for limiting themselves to their own field of expertise – just as you cannot blame a theatre director who stages Dürrenmatt’s The Physicists for not understanding physics.
There is just one tiny problem: monodisciplinary digitalisation works poorly, and in many cases not at all. Multidisciplinarity can help, but even that is usually not enough. For example, the concept of “interfaces between monodisciplinary teams from different disciplines”, borrowed from computer science, fails due to mutual misunderstanding and an unwillingness to communicate with each other (i.e. the “ontological discrepancy”). It is necessary to address several disciplines together, not just each one individually.
The invisibility of structural deficits
However, there is a lack of understanding of this necessity. A key reason for this is that the problems of mono- and multidisciplinary approaches to digitalisation are not apparent as such. I have never seen anything criticised as monodisciplinary. In most cases, the talk was of mistakes. Criticism was levelled at solutions that were poorly designed, poorly implemented, incorrectly used or insufficiently monitored. Or the failures were covered up, or it was claimed that success would not have been possible anyway. “Failure was inevitable because the data protectionists forced it!”
The consequences include recurring IT scandal projects in public administration and the senseless burning of money on digital innovations in the private sector. If you take a closer look at the failed projects, you will often see that important technical aspects were not clarified because they were considered irrelevant – for various reasons: on the client side, no one was responsible for success; the resources for project management were too limited or were diverted to other tasks; the contractors had no desire to consider non-technical aspects; there was never a consensus within the implementation team about what was important (or even what the project was actually about). And so on. Much of this can be reduced to a common denominator: no one was interested in the practical benefits of the digitisation project; the result was of little significance, and the consequences were not personally important to anyone.
Systematic ignorance out of conviction
Despite the absence of accountability, the results and outcomes need not be as unsatisfactory as they often are. The elephant in the room is authoritarian thinking.
If you take a step back and observe how people view and deal with digitalisation, it becomes clear that authoritarian thinking is widespread, with two extreme camps dominating. One camp regards IT as a subordinate servant. The other camp stands for IT competence and presents itself to the world like the Roman governors of old in the provinces.
There is not much in between. Although many people use the arguments of one or the other main camp depending on the situation, only the hyphenated IT disciplines represent a genuine middle ground: legal informatics, business informatics, administrative informatics.
The first main camp includes many social scientists, managers and experts. In business administration, IT is a resource like any other. There is a profound truth in this – like the “human resource”, it is very idiosyncratic – but this is ignored. Many managers responsible for resources argue seemingly logically when it comes to IT issues, but base their arguments on experiences, constructs and concepts from analogue everyday life and come to completely wrong conclusions. Many experts in individual economic sectors assume that the ecosystems of their sector will not change with digitalisation. Furthermore, many ethicists argue purely normatively and from the perspective of the analogue world. Many new laws regulate the digital economy as if feasibility were not an issue. Countless project managers give lectures on how technology is never the problem. And so on and so forth.
This colourful first camp lives under the illusion that it knows what it is talking about. And they do – but in a world other than the digital one.
The second camp sees the world through IT glasses and wants to subjugate it. Programmers, visionary entrepreneurs, scientists, philosophers, autodidacts, etc. Each in their own way, they represent the primacy of technology. They consider the content of digital platforms to be worthless, want to guarantee fairness through technology, break down complex problems into app-friendly bite-sized chunks, etc. – and they believe, among other things, that the elimination of jobs for humans is inevitable and the reign of AI is unavoidable.
This second main camp follows in the tradition of changing the world through human design. Innovative design once brought us agriculture, bureaucracy, mathematics, legislation, sport, theatre, democracy, etc., and belief in it ultimately gave us 400 years of future-optimistic modernity. But it has also repeatedly given rise to practices that disregard humans, animals and nature.
What can we do?
We must leave our extremist comfort zones, regardless of whether we belong to the first or second main camp – or, like many, lean towards one or the other on a case-by-case basis.
It doesn’t take much. It is sufficient that all disciplines relevant to the situation are adequately taken into account in the design of digital solutions, that real, substantial benefits are sought, and that abuse is anticipated and restricted. Or to put it in slightly idealistic terms: it is sufficient if people – not the individual – are at the centre of digital innovation.