The logical world of mathematics eludes political thinking. Nevertheless, we must not treat mathematics as a black box when regulating digital tools. A contribution from our event “Dangerous Mathematics”.

The question of regulating mathematics has been on the table for some time. In Switzerland, for example, the association eJustice.ch dedicated its conference Computer Science and Law to the topic of Big Data Governance as early as 2014. In 2017, I myself was approached by politicians about the regulation of algorithms for the first time, at the European Health Forum Gastein. The question was put to me by a member of the Portuguese parliament during a panel discussion on early diagnosis. It was essentially this: how can algorithms be classified according to social criteria in a way that also makes mathematical sense?
From what we understand so far, the answer is: it is not possible! We can make substantive statements about the context of use, but not about the mathematical algorithms themselves. Yet the regulation of the context of use is also unclear. What does the use, or rather the application, of algorithms actually involve?
In her impressive book Weapons of Math Destruction, Cathy O’Neil addresses important characteristics of the use of algorithms, for example the presence or absence of feedback, but such criteria are necessary and by no means sufficient. Imagine if Formula 1 limited itself to specifying a few basic rules, such as the presence of brakes, and made no stipulations about the chassis, the engine or the fuel. The result would be a competition of those willing to die, in which only the reckless would stand a chance of winning.
In fact, algorithms are neither good nor evil, but in a specific context they can be the right or the wrong algorithms, and their properties can very well have a hazard-avoiding or a deadly effect depending on the context of use. However, we have only taken an interest in this since the development of calculating machines made enormous progress thanks to the decades-long validity of Moore’s Law. It therefore makes sense to consider mathematics, in regulatory questions, as part of computational thinking (CT).
The common history of mathematics and computational thinking
CT and mathematics have a closely interwoven history. They emerged together about 6000 years ago. Over the course of that history, CT developed the vision of avoiding human error through the construction and use of computing machines; in the 20th century, this vision was successfully realised. Today, as one can read in Peter J. Denning and Matti Tedre’s book on Computational Thinking, CT stands for two complementary aspects: designing machine-executable computations and interpreting the world as information processes.
Mathematics played a triple role in the emergence of modern CT in the 20th century:
- It provided the motivation (automation of computation),
- the foundations (among other things, in the form of the Universal Machine)
- and the non-material part of the machine room of modern “computers” (the algorithms).
The motivation has greatly expanded since then: it is now also about automating the control and execution of work and business processes, designing virtual worlds and understanding the world. In these developments, mathematics has played an important role, at least in part, for both aspects of CT!
The fundamentals of CT have also expanded greatly in recent decades. Today, they include the fundamentals of programming (CT on a small scale), the fundamentals of application development (CT on a large scale), design principles (for interaction with humans) and new machine models (among others, for quantum computers). When these fundamentals were first expanded, however, mathematics was overwhelmed; and now that mathematics and the natural sciences are playing a greater role again, it is computer science that is overwhelmed. For example, there are hardly any software engineers for quantum computers.
On the other hand, mathematics has made great progress in the non-material part of the machine room of modern computers, as can be read in Sebastian Stiller’s Planet of Algorithms, among other places:
- In part, the acceleration of algorithms was even more effective than the acceleration of the computers themselves.
- Symbioses between algorithms and evaluation criteria made it possible to solve many practical problems very effectively.
- There was exemplary progress in developing mathematical solutions to social problems.
- Algorithms provided new models to explain social phenomena and thus tools to better understand the world.
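The first point in the list above can be made concrete with a toy sketch (my own illustration, not from the article): the same problem solved by two algorithms whose costs differ by orders of magnitude. Counting elementary steps makes the gap visible without a stopwatch, and no faster hardware can close it.

```python
# Toy illustration: computing the 25th Fibonacci number two ways.
# A better algorithm beats a faster machine: the naive recursion needs
# hundreds of thousands of calls, the iteration only 25 loop steps.

def fib_naive(n, counter):
    """Exponential-time recursion; counter[0] tracks the number of calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_linear(n, counter):
    """Linear-time iteration over the same problem; counter[0] tracks steps."""
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1
        a, b = b, a + b
    return a

naive_calls, linear_steps = [0], [0]
assert fib_naive(25, naive_calls) == fib_linear(25, linear_steps)
print(naive_calls[0], linear_steps[0])  # 242785 calls vs 25 steps
```

The point of the sketch: speeding up the machine by a factor of ten leaves the naive algorithm hopeless for larger inputs, while the algorithmic improvement changes the problem class entirely.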
The phenomenological impact
With these advances came new trends, which typically occur as pairs of opposing trends. For example, the world is becoming ever more transparent, but this transparency is, firstly, only potential rather than practical (in the sea of information, important information is easily overlooked), and secondly, it is becoming increasingly opaque how all the digital tools that point us to information actually work.
The result of these pairs of trends is a dynamic (im)balance. We have more free time and more options, and we can learn new skills and understand the world better. But we also unlearn many skills, gain many new meaningless options for action, understand our tools less and less well, and are increasingly threatened by corruption wherever not using digital tools creates a growing relative inefficiency.
Above all, our scope for creative action is shrinking, because human creativity needs neither too much nor too little freedom. Yet it is precisely the two extremes that are growing strongly thanks to CT: the areas in which we are closely managed and controlled, or exposed to high competitive pressure, and the areas in which almost anything is possible and freedom is nearly limitless. Between these two extreme zones, there is less and less space in which our human creativity can unfold.
In addition, among many other things, CT and mathematics are destroying the protection of privacy and creating new constraints on action: the former, for example, through the de-anonymisation of data, and the latter, for example, through the partial automation of decisions.
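The de-anonymisation mentioned above can be illustrated with a minimal linkage attack, in the spirit of well-known re-identification studies. All names and records here are invented for the example; the point is only the mechanism: an “anonymised” table still carries quasi-identifiers that can be joined against a public register.

```python
# Toy linkage attack (invented data): an anonymised medical table is joined
# with a public register on the quasi-identifiers zip code, birth year and
# sex, re-attaching names to the supposedly anonymous diagnoses.

anonymised_records = [
    {"zip": "8001", "birth_year": 1971, "sex": "f", "diagnosis": "D1"},
    {"zip": "3011", "birth_year": 1985, "sex": "m", "diagnosis": "D2"},
]
public_register = [
    {"name": "A. Muster", "zip": "8001", "birth_year": 1971, "sex": "f"},
    {"name": "B. Beispiel", "zip": "3011", "birth_year": 1985, "sex": "m"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def key(row):
    """Project a record onto its quasi-identifiers."""
    return tuple(row[k] for k in QUASI_IDENTIFIERS)

# Index the public register by quasi-identifier combination.
register_index = {key(row): row["name"] for row in public_register}

# The join re-identifies every record whose combination is unique enough.
reidentified = {
    register_index[key(rec)]: rec["diagnosis"]
    for rec in anonymised_records
    if key(rec) in register_index
}
print(reidentified)  # {'A. Muster': 'D1', 'B. Beispiel': 'D2'}
```

Removing names alone is therefore not anonymisation: as long as a few attributes jointly single out individuals, one public dataset suffices to undo it.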
The above observations confirm that we need to think about regulation. Even without dystopian scenarios, CT creates big problems wherever it is used unreflectively and/or incompetently.
The outlined historical development of CT also provides us with a guideline for regulation. The following aspects of content should be addressed:
- The actual algorithms (which must have the appropriate qualities)
- The interaction between algorithms and criteria for evaluating real-world solutions (which must be adequate for these problems, must not discriminate, etc.)
- The programmes (which must be sufficiently correct)
- The quality of the engineering (i.e. CT on a large scale)
- The design of the interfaces to the users (which must promote adequate use, whereby the requirements for this vary depending on the context)
- Quality, safety and risk management in practical use
- The provision of information and support
It is apparent that the algorithms represent only one of several aspects. This aspect should not be underestimated, however, as it is also hidden behind the other aspects. Furthermore, it is evident that context-independent regulation is no longer possible in view of the complex diversity of effects: each of the listed aspects must be considered in relation to the context of use. So in the regulatory discussion we really are dealing with applied mathematics. That is new for mathematics, too!