Networking and loss of consistency – Part 2: Distractions from the essentials

Almost ten years ago, the insider discourse on data protection changed fundamentally. Today, the last remnants of that era's insights are being put to use. But instead of reorienting data protection, this clouds our view of major social problems of the present.

We have (hopefully) all realised that the “bullshit in – bullshit out” principle also applies to AI. Anyone who trains AI with discriminatory data or discriminatory data curation – which is not the same thing – will reap a discriminatory AI. The same is true if neither the data nor the curation is discriminatory, but the model rests on bullshit heuristics. And – this insight is fairly new, about five years old – it is also true if the optimisation problem is unstable under small changes in the data. In such cases, only one thing helps: doing the opposite of what is done against discrimination in the analogue world, namely explicitly including the very characteristics used to discriminate – for example, place of residence or origin.
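
To make that last point tangible, here is a minimal sketch in Python – not from the column itself; the scenario, data and variable names are invented for illustration. It shows how a “blind” model that omits a protected characteristic still discriminates through a proxy such as a postcode, while a model that includes the characteristic explicitly makes the bias visible and correctable:

```python
# Hypothetical illustration: omitting a protected trait does not remove bias
# when a proxy leaks it; including the trait makes the bias measurable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Invented protected trait (e.g. origin) and a proxy (e.g. postcode)
group = rng.integers(0, 2, n).astype(float)   # 0 / 1, the protected trait
postcode = group + rng.normal(0.0, 0.3, n)    # proxy that leaks the trait
skill = rng.normal(0.0, 1.0, n)               # the legitimate signal

# Historical labels are biased against group 1 regardless of skill
y = (skill - 0.8 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

# "Blind" model: trait omitted, so the bias re-enters invisibly via the proxy
X_blind = np.column_stack([skill, postcode])
blind = LogisticRegression().fit(X_blind, y)

# "Aware" model: trait included, so the bias lands in one visible coefficient
X_aware = np.column_stack([skill, postcode, group])
aware = LogisticRegression().fit(X_aware, y)

# Only the aware model allows a simple correction: score everyone as if
# they belonged to the same group, neutralising the learned bias
X_corrected = X_aware.copy()
X_corrected[:, 2] = 0.0

for name, model, X in [("blind", blind, X_blind),
                       ("aware, corrected", aware, X_corrected)]:
    pred = model.predict(X)
    r0, r1 = pred[group == 0].mean(), pred[group == 1].mean()
    print(f"{name}: positive rate group 0 = {r0:.2f}, group 1 = {r1:.2f}")
```

The blind model reproduces the historical disparity invisibly through the proxy; the aware model isolates it in a single coefficient that can be neutralised when scoring.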

So far, so simple. We could have imagined much of this without concrete examples, but examples help. Cathy O’Neil’s 2016 book “Weapons of Math Destruction” is full of highly illustrative ones. Anyone who reads it quickly understands why the data protection discourse changed so fundamentally back then. The realisation of the time: it is not personal data that should be protected, but its misuse that should be prohibited – even when it happens with the consent of those affected. In many respects, possible discrimination was a key reason for this change of perspective. How good regulation could be formulated in a sensible and practically workable way, however, remained unclear at the time – and remains so today.

End of the constructive discourse

The fascinating thing about this story is that the discourse of five to ten years ago has ground to a halt. The practice of legal interpretation tends towards far-reaching destructiveness, which often enough ends in sheer lunacy. One example: hybrid teaching has recently been deemed criminal unless many minutes of every lesson are spent collecting consent signatures. Nobody can say who is actually being harmed. No matter – data protection is the most powerful weapon for preserving the status quo, and that has gradually become the ultimate goal. Fittingly, the ticked-off topic of discrimination in AI use is booming, while preventing the misuse of data is no longer an issue; we have returned to the ideas on privacy protection of 30 years ago. Those ideas were correct at the time, given the technology, data management and Internet culture of the day – today they are useless. The only advance in data protection practice, data portability, is an idea a good ten years old, and one primarily invented to strengthen European cloud providers.

Because those in the dark go unseen!

That might not seem too bad: Europe has long been weak at translating technological progress into economic growth, and we have grown accustomed to constantly falling behind. And since European companies are – or rather have to be – more socially responsible than American ones, the falling behind is spread across everyone and is therefore noticeable to no one. But the moral and ethical framing of AI in terms of well-understood, controllable problems obscures the actual AI problem. Roughly 80 per cent of AI research and development serves, first, to place advertising more successfully or to steer people politically, second, to automate human labour, and third, to monitor society more effectively. At best 20 per cent of research expenditure goes into enabling people to do better work – and it is probably far less.

The result: the position of employees is weakened in the long term, and the economy fails to take off. If you need examples to grasp this better: the book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” by Daron Acemoglu and Simon Johnson offers plenty of analogue stories from economic history. Everything we are currently experiencing follows historical anti-patterns. Even the role of morality – today, ethics – is an entirely traditional one: it distracts from the essentials and strengthens the strong.

The great lack of understanding about the supposed lack of understanding

And what do we do? Many intellectuals are outraged that people vote for the wrong parties or the wrong men, or refuse to stand together in solidarity during crises. Personally, I have nothing against indignation as long as it is temporary; in most cases it is indignation at the indignation of others anyway. Yet for all the cheerful or angry outrage, we should at least look at who belongs to those marginalised groups which, in their indignant totality, are marching towards 50 per cent in democratic elections in some countries. In a nutshell – a more comprehensive account can be found in the book “Trigger Points” by Steffen Mau, Thomas Lux and Linus Westheuser – they are largely the future victims of the direction AI development has taken, supplemented by a small proportion of people who are more aware of social developments than others. Who is still surprised?

Our options for action

The good news is that we as a society hold in our hands – as we always have – what we make of technological progress, specifically in the field of AI. We can change the direction of research and development on the one hand, and the rules for distributing the added value it generates on the other. Entrepreneurship, research funding and politics can each steer progress in AI within their own sphere of action. The question of all questions is: what do we think of people? Do we want to digitally empower them or digitally replace them – or do we not care at all?

Unfortunately, the answer is not as trivial as it might seem. It is true that large parts of research funding and politics are completely uninterested in the future of human labour; they are interested in the past, not the future. It is true that many entrepreneurs care primarily about profit growth through new savings opportunities and never even consider possible new business models. But it is also true that “digital enabling” is not easy and runs into many blockades.

In many industries, IT has invented new tasks for people that are neither fun nor clearly beneficial. Many would like to return to the past, that is, to deliver direct benefit to other people through their work – in the care sector, for example. Here, “digital enabling” lies almost beyond the imaginable; the digital is expected to give back the time for working with people that it previously stole. Elsewhere, by contrast, people understand that everything is heading towards cyborgisation – in the sense of Andy Clark’s old book “Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence” – and this raises justified fears. A presentation of the history of human externalisation could (perhaps) help here – I have received surprisingly positive feedback on this – but it would need wider popularisation. Looking further afield, one finds other blockades to digital enabling that are quite tangible and go beyond mere mental blocks. Digital enabling must also be learnt.

This is why, with regard to a human-centred future of AI use, the following also applies: you not only have to want to do the right thing (which, for the vast majority of people today, is not the case), you also have to be able to do it right. At least the first foundations for the latter are being laid at universities today. But just as wanting without being able is not enough, neither is being able without wanting. Our biggest problem is that the social context works against the wanting. The situation is highly complex: a variety of sensitivities and power-political interests prevent the networking of findings from a wide range of sciences from helping us promote digital enabling. One example is that anthropology is currently going down the drain – for the sake of justice. So we may be heading in the wrong direction with AI and our ethics for many years to come; in the long term, a human-centred approach will eventually prevail (or not).


Part 1 of the column was published here.

Creative Commons Licence

AUTHOR: Reinhard Riedl

Prof. Dr Reinhard Riedl is a lecturer at the Institute of Digital Technology Management at BFH Wirtschaft. He is involved in many organisations and is, among other things, Vice-President of the Swiss E-Government Symposium and a member of the steering committee of TA-Swiss. He is also a board member of eJustice.ch, Praevenire - Verein zur Optimierung der solidarischen Gesundheitsversorgung (Austria) and All-acad.com, among others.
