Networking and loss of consistency (3) – the missing technologies


Is progress really accelerating? Is AI the technology of the future that will replace all others? Will we humans be relegated to the spectator stands in the long term? The answer to all these questions is currently the same: it is possible, but it does NOT look that way at the moment. Technology development is stalling.

The Internet makes almost everything (or at least a great deal) accessible. However, this alone does not create complexity of information and knowledge; it first of all creates a great need for content curation, i.e. the situational compilation of content. Search engines such as Google do this, as do recommendation systems such as Amazon's and, more recently, AI-based co-pilots. Versions of these have been and still are customised for specialist disciplines or institutional settings. The more specialised the information requirement, the better these systems work within an internally optimised intranet and the worse they work across information systems. Yet the latter would be important for broad knowledge networking.

Technological curation is missing

In fact, the functional limits of digital curation services represent hard limits for the possible complexity of our society. Much has been tried in recent years to develop better curation services – especially in academic research – but hardly any of this has found its way into practical use. As a result, technology-based curation still has narrow limits, and these in turn limit knowledge networking.

This sounds very theoretical, but it is as relevant to practice as a deadly club in the hand of a guard in front of a door you want to go through. Transdisciplinary research in particular is limited today by the fact that there are no really good curation technologies for transdisciplinary researchers. They typically have the ability to approach a discipline quickly from the outside – at least at the level of the usually sufficient “85% understanding” – as long as they are provided with suitable information. Finding this information is currently only possible via people; search engines, recommendation systems and co-pilots are not suitable for the task. The main reason is that there is no data from which the machines could learn, because transdisciplinary research is both rare and highly individualised.

Of course, it is currently popular to claim that humans will no longer be able to understand AI in the future. But that turns the world on its head: AI is incapable of understanding humans and, in particular, is unable to gather useful information for them in complex situations.

In fact, the two AI camps currently fighting each other – those researching explainable AI and those claiming that explainability will never be possible – are not even interested in taking up the challenge. Their aim is to put AI on a pedestal, and they do not shy away from Dadaist tricks.

However, it is not necessary to resort to marginal, highly elitist transdisciplinary research to demonstrate the technological deficits. The obstacle also exists one size smaller: participatory democracy fails to emerge because there are no curation and mediation technologies for participatory engagement. Such technologies do not even exist for professional law-making by parliaments! Parliaments usually have little information about what other parliaments are doing, particularly when it comes to how other states implement EU regulations; the effort required to obtain this information is simply too high.

Of course, you can consider participatory democracy a false goal and therefore be pleased that the necessary technologies are lacking. The fact remains, however, that the EU is striving for it, has made it concrete in the Treaty of Lisbon, among other things, and has also invested some taxpayers’ money in research into it. So far the successes have been marginal and have mostly been labelled “smart city”. The latter is not meant cynically: as a researcher, I have no normative idea of participation. I am just as interested in its appearance in Amsterdam as in Medina – and if it is more popular with the male locals in Medina than in Amsterdam, so be it. In any case, smart cities are the best thing that has been achieved so far in terms of participation.

Conclusion: Contrary to all clichés, there is a need for technologies that have not yet been developed. This need cannot be met by AI in the medium term.

However, thanks to scientific and philosophical smoke and mirrors, this has so far been successfully concealed from the public.

Growing complexity

Perhaps it is a good thing that AI cannot match human capabilities? Perhaps we should be happy about the lack of technical solutions? Yes, perhaps! But this raises questions about the content of current research. AI research and development is currently focussed on goals that are questionable from a democratic perspective: microtargeting and manipulation (euphemistically: personalised communication and nudging), hate maximisation (euphemistically: activating participants on social media platforms), monitoring and disciplining (euphemistically: promoting good behaviour for the benefit of good coexistence), paternalism (euphemistically: AI that explains itself), et cetera. It does little, however, to strengthen democracy.

But wait! I said at the beginning that the complexity of the world is limited by a lack of technology. Isn’t that a desirable state of affairs? Yes, perhaps! But there is a lot of evidence that growth in complexity is desirable as long as it does not get out of hand. The evolution of life has been like this, the evolution of humanity has been like this and the evolution of the economy has been like this – not to mention science. Deliberate limitation leads to dystopias – see “The Physicists” by Dürrenmatt. A restriction caused by technological deficits may be politically harmless, but it can block creative problem-solving. The only question is: are the side effects of the growth in complexity more negative than the problem-solving options it opens up? We should keep this question in mind, but without simply accepting the lack of the necessary technology.


This column is the third part of a mini-series. Part 1 and part 2 can be found here.


AUTHOR: Reinhard Riedl

Prof. Dr Reinhard Riedl is a lecturer at the Institute of Digital Technology Management at BFH Wirtschaft. He is involved in many organisations and is, among other things, Vice-President of the Swiss E-Government Symposium and a member of the steering committee of TA-Swiss. He is also a board member of eJustice.ch, Praevenire - Verein zur Optimierung der solidarischen Gesundheitsversorgung (Austria) and All-acad.com, among others.
