Nobody is afraid of the growing loss of self-evident knowledge in our increasingly complex society. Hardly anyone is bothered that established design and engineering knowledge is being lost in practice – or, more precisely, that the gap between the ever-improving state of knowledge and widespread practice is widening. But everyone is talking about utopias and dystopias in the use of digital tools. Miracles and ethics are the order of the day. There is little time left for natural intelligence: some are too busy selling machine intelligence, and others are busy pumping warnings about it into the market for profit. Who can find the time to think!

In Part I of this column I used spanners as a metaphor for digital tools (which include the multi-layer neural networks behind machine intelligence). I could have used other everyday tools – scissors, for example. Everyone knows how dangerous they are, and do-it-yourselfers know how difficult it often is, as a non-professional, to figure out what kind of scissors you need. Most of the time, the salespeople in the shops that sell scissors are completely clueless and no help at all – something they have in common with many machine intelligence salespeople. Moreover, adults like to warn children away from scissors, and often do so regardless of whether the warning is justified. The problem with the scissors metaphor is that it is so rich in content that it obscures what is actually meant. That is why I stick with my spanner metaphor.

We are currently seeing that in many areas the ease of use (i.e. the usability) of spanners is being lost, while in other areas, thanks to increasingly professional design, the user experience (i.e. the UX) with spanners is becoming more and more awesome. We get ever more awful spanners in one place and ever more awesome spanners in another – but neither the spanner preachers nor the spanner ethicists care.
Both groups are, for the most part, too detached to be interested in the mundane spanners of real life. They have more important things to do – namely, selling their concepts, usually wrapped in a cloud of artistic incomprehensibility that eludes ordinary understanding.
For me, it is about something more fundamental: knowledge and know-how as a matter of course in design and engineering. This includes the theoretical foundations of spanner-making as well as established good practices and a knowledge of recurring bad practices.
An example: algorithms in the courtroom. Numerous authors are currently examining their dangers, but very few are interested in whether judges actually use them. There is, however, some evidence that they are largely ignored, because judges are not illiterate and have long since realised how dubious their use is. By contrast, the genuinely interesting questions – for example, the reconstruction of events from amateur videos – are for the time being only marginally present in academic and public discourse alike. The latter will change at some point and is not my concern here. Nor do I want to preach situational design in terms of usability and UX, or other good practices for designing and engineering sustainably usable digital solutions.

My concern, to repeat, is more fundamental: knowledge and know-how as a matter of course in design and engineering – the theoretical foundations of spanner-making, established good practices, a knowledge of recurring bad practices, and, not least, practice, practice and more practice in spanner-making, which is what leads to know-how.

In education and training, this presents us with particular challenges. On the one hand, we have to take away people's shyness by giving them practical experience, so that they do not blindly follow the dystopia preachers and utopia salesmen (or vice versa). On the other hand, we need to foster their awareness that much more knowledge and know-how is needed, take away the illusion that everything is easy, and demand that they ask the right questions. We at BFH Wirtschaft are trying to go down this path in the new Master's programme "Digital Business Administration" by reversing the old practice of "First the theory, then the application!": students first learn to apply digital tools and only afterwards the theoretical foundations that enable them to understand why one thing worked and another did not.
This is a procedure that has long been used in university physics education, for example (which is why it is also known what can go wrong with it). As banal as it sounds, it is always about theory and practice – and there is never a contradiction between the two, because a theory that is disproved in correctly conducted practice is a false theory, and a practice that (outside of scientific experiments) goes against well-documented theories is an irresponsible practice. Theoretical as well as generic knowledge is indispensable if one wants to use spanners – big data and artificial intelligence, for example – responsibly. In this case, that means acquiring a basic knowledge of information processing before establishing ethical guidelines; otherwise, those guidelines quickly become dangerous, inhumane instruments. Often, for example, machines are fairer than experienced decision-makers precisely because they process less information. Anyone who blocks the use of machine intelligence may quickly be endangering human lives out of complacency – by letting people die (in health care), say, or by condemning them to a gruesome life (in court practice) – however great it feels to intervene in an "ethically responsible" way. But the same goes for those who want to use machine intelligence without understanding the statistical basics, in areas where it is hopelessly inferior to humans.

We are therefore confronted with the necessity of constantly researching the truth about the opportunities, risks and everyday problems of the use of spanners AND of constantly developing disciplinary practice. Feyerabend's "anything goes" is an important principle here. It does not mean that doing is more important than knowing, but that situationally fitting knowledge and know-how are necessary for successful doing.
It can be anything, but it should be the right thing – or better: the fittest thing we know, the one that helps us develop even more correct, fitter solutions in the future. The balance between the quality of tools and their suitability for learning is a big issue in itself. In one respect, the spanner-warners are right: these spanners simply make our lives more difficult. That is the price our ancestors agreed to pay when they started developing intelligent tools hundreds of thousands of years ago. Not paying it is at best an individual option, not a social one.