On the (un)objective use of spanners
Many people are appalled that, in the future, many jobs will be done with spanners. Some find the use of spanners utterly ridiculous. Some expect that spanner use will fundamentally change human thinking and feeling. Scientists even predict that spanners will eventually take over the whole universe. Spanners are clearly a topic that evokes strong emotions.

Why is that? I myself lack passion on the subject of spanners, although I hold an engineering degree, issued by a school named after a local surveyor. Despite this down-to-earth background, spanners are simply tools to me. Most of what I have learned about them came from theatre visits with a grandmaster spanner maker. But that is already my most emotional connection to the subject. Because let's face it: spanners are generic tools, or generic tool components, that can be usefully incorporated into different tools. No more, no less.

You have probably already guessed, dear readers: when I think of spanners, I am not thinking of the usual metal tool, but of contemporary digitalisation technologies, specifically artificial intelligence (AI). AI was originally invented not as a tool but as an instrument for understanding human thinking. Herbert Simon, winner of both the Turing Award and the Nobel Prize in Economics, developed his first expert systems as models of human thought. For decades, research on artificial intelligence was also concerned with the question of how intelligence arises – or "emerges", as it used to be called. Hardly anyone was upset by any of this, though some people were moved to interdisciplinary cooperation.

But now that AI is increasingly becoming a tool, everything is different. Obviously, for many people the fun stops when it comes to tools. Philosophically, tools are extensions of human abilities and an expression of the cyborg nature of human beings. Yet we rarely deal with tools philosophically. Mostly not even objectively. Both the object and its use become our pleasure and joy, sometimes also a burden or a symbol of torment. And, last but not least, a social must.

Currently, AI use is on its way to becoming the new social must. We absolutely have to use AI for personnel recruitment, some HR thought leaders tell us, for example. What they imply: we have to scrape together all the data on candidates we can get, by whatever means. Soon they will demand AI use for promotions and for managing employee development. The bureaucratic monsters of training hell have long been in place in large digital corporations; AI can be used to exacerbate their effects. (Tip for young authors: write about the torments of the trained in the multinational corporation! Or, loosely based on George Orwell, write about how you learned to love further education.)

"We must", they say here. "We must", they say there. "We must teach machine learning to business administrators too", we say at the university, for example. At least I heard myself saying something similar on the radio. We have to make our graduates fit for the AI that will await them in the workplace in ten to fifteen years, we agree, in unison with the AI movers. Because then our graduates will have to use AI themselves.

Is that really true? The answer is: yes! The most important argument is an indirect one. It is to be expected that AI will cause a lot of damage in the hands of those who feel they have to use it. We will see downright stupid uses of AI. Plus an endless dispute over principles.
It is well known that many people find it difficult to distinguish between use (the verb) and usefulness (the noun), and prefer apodictic statements about the noun to situational statements about the verb. That then becomes a matter of considerable debate. The real point, however, is that a fundamental decision in favour of AI says nothing about whether we actually want to use AI in a given situation. The "must" in teaching thus derives from the possibility of use in practice, not from a "must" in practice. Those who want to decide freely about concrete use need knowledge and a certain amount of practical experience. That is why it is indeed urgent that we familiarise our students with AI in business administration courses. Those who counter with calls for a digital detox should consider that one does not prevent the use of powerful tools by personally renouncing them (Dürrenmatt: The Physicists).

So we find ourselves more or less compelled to learn how to use AI. That is unpleasant, but it applies to many other tools as well. Many people, for example, do not use WhatsApp voluntarily, but because they cannot or do not want to withstand the social pressure and the negative social consequences of not using it. We live in relative freedom, but under the dictates of our environment; this is especially true of the digital transformation of the economy. It is crucial that we resist the dictate where it exists only in our minds. This means that our resistance takes place at the concrete level rather than the fundamental one. The maxim "resist the beginnings" makes sense for political crimes, but hardly for technologies. Individuals may accidentally slide down a technological slippery slope from which there is hardly any escape, but this too is more a narrative than an inevitability. For the way our society as a whole deals with digitalisation technologies, the danger is quite small. The only real danger is the insistence on refusing to use useful tools.

Despite all the discussion about AI hype, we should also regularly remind ourselves that AI is always also an instrument for gaining knowledge about intelligence. The questions in this field have not been answered. Even if some partial answers sound frustrating, the emergence of intelligence remains a wondrous phenomenon, closely interwoven with the emergence of complex structures in energy-filled systems. We know little and understand less. As long as that is the case, we need not really fear a takeover by the spanners in the works.