Underestimated automation


The term automation has two seemingly very different meanings: on the one hand, machines (automata) taking over human tasks; on the other, our own learning to perform tasks unconsciously. Yet the two forms of automation are very similar. In both cases we outsource tasks from conscious thinking: to machines external to the body in the one case, and to internal, unconscious processes, i.e. to our own internal machines or automata, in the other. And in both cases the outsourcing is incomplete. External machines have to be maintained, operated and controlled, while unconscious processes interact with our conscious thinking in subtle and manifold ways.

A typical example of the interaction between automated and conscious thinking is the exploration of a subject through writing. Writing a column, for instance, draws on the automated, partly random language production developed in earlier writing, and this in turn is the basis for making new discoveries in the text being written. Without automated text production this would not work; nor would it work if our conscious thinking rejected the steering provided by automated text production. Heinrich von Kleist described this process of gaining knowledge, which at first glance seems quite strange, in exemplary fashion using the example of oral explanation. Both the random at the micro level and the contingent at the macro level of the search for knowledge depend essentially on the interplay of automation, openness and curiosity: highly developed automation and open curiosity together lead to superior results. Automation is therefore the learning and training goal in many fields: in sport as in art, in crafts as in mathematics and philosophy.

Automation as a cognitive development opportunity

The similarity of the two forms of automation suggests that automation with external machines need not be as deterministic and universal as possible, as we normally expect machines to be. Individual, partly random automata that do work for us are also conceivable, much like our unconscious thought processes. They could even be built with machine intelligence technology; we would only have to take our own human nature as the model.

The advantage of the unconscious processes in the human brain, and thus of human automation, is that they are fast, highly parallel and organised bottom-up. Conscious thought, in contrast, proceeds top-down and is extremely slow. It is a paradox of evolution that this slow process of conscious thinking has proved particularly successful – and not only in humans.

Nevertheless, even during conscious thinking, unconscious processes usually take over the direction and execution of what is thought. What distinguishes these processes from possible future superintelligent machines is that, for better or worse, they are our own machines. It would therefore only be natural to supplement these endogenous “machines” with personalised external ones. We have already made such additions, extensions or externalisations of human capabilities in many areas. Many of our tools augment our bodies: the car, for example, extends our ability to move quickly, just as the abacus long ago extended our ability to calculate.

Cultural fears

There are various reasons, mainly cultural but also quite rational ones, why so many dystopian and some utopian ideas about the machine future exist. The memory of industrialisation, for example, lives on in cultural knowledge: we fear the associated upheavals because we know what great dislocations automation caused in the 19th and 20th centuries. Moreover, we probably unconsciously regard machines as cultural objects – specifically, as cultural objects that refuse subjective appropriation and instead assign us our place in society by proxy. In this way we perceive technology as oppression that we either submit to or resist. The fact that many people currently use machines intensively, in the form of digital social media, to gain social reputation does not lessen these fears but rather intensifies them.

The danger of dumbing down

But there are also two objectively well-founded fears. The first concerns the developmental step towards the cyborg with all kinds of cognitive enhancements, which could disturb the balance of the human psyche and cognition. Our conscious thinking must successfully interact with many unconscious processes in the brain, and this interaction may be disturbed by the addition of digital machines.

Quite pragmatically, and detached from fundamental philosophical considerations, machines deprive us of the opportunity to automate abilities ourselves when they take over the corresponding tasks for us, as Eduard Käser has pointed out. The problem is twofold. On the one hand, we never learn the automated skills and thus fail to acquire cognitive abilities that earlier generations possessed. On the other hand, digital machines cooperate far less closely with our conscious thought processes than skills we have automated ourselves: the UX (user experience) of a digital machine is much worse than that of our own unconscious thought processes. As a result, more advanced developmental steps in our thinking are blocked. We are thus hit with a double learning loss and have to compensate for it if we do not want to become dumbed down.

The economic risks

The second fear is that machine automation will destroy more jobs than it creates. Daron Acemoglu and Pascual Restrepo have shown that since 1987 roughly twice as many tasks have been eliminated in the US as new ones have been created (while in the 40 years before that, roughly as many new tasks were created as were eliminated).

Automation with digital machines reduces the need for labour unless it lays the foundations for new practices and services. Ideally, digital automation would work in the same way as the automation of human skills, which, as outlined above, allows new skills to be developed on top of it. But this would require that we design and use user interfaces accordingly. We would have to rethink automation – or, more precisely, think it in human terms.

The alternatives are, on the one hand, to develop digital tools independent of automation that enable us to perform new tasks and, on the other, a scientifically backed political fight against the negative effects of automation, analogous to the fight against global warming, which Acemoglu proposes as a way to combat inequality in society.

Conclusion

Digital automation is doubly underestimated: few recognise the dangers, and only some see the ontogenetic and evolutionary opportunities. A creative, multi-perspective view shows that automation is something highly complex that is only inadequately captured by common buzzwords such as rationalisation.


AUTHOR: Reinhard Riedl

Prof. Dr Reinhard Riedl is a lecturer at the Institute of Digital Technology Management at BFH Wirtschaft. He is involved in many organisations and is, among other things, Vice-President of the Swiss E-Government Symposium and a member of the steering committee of TA-Swiss. He is also a board member of eJustice.ch, Praevenire - Verein zur Optimierung der solidarischen Gesundheitsversorgung (Austria) and All-acad.com, among others.
