Three facets of the humane digital transformation

How will the digital transformation affect our society and our lives, and what do we need to keep in mind so that people remain at the centre? Our author looks at three facets.

The digital transformation is advancing rapidly and has the potential to change the way we live, work and learn. New challenges are emerging, especially with technologies from the field of machine learning. What needs to be considered to ensure that this process of change remains people-friendly?

The new strategic thematic field of Humane Digital Transformation at Bern University of Applied Sciences addresses precisely this issue. The thematic field is concerned with how technology can be used in the best and most sustainable way, and how people and their needs can be placed at the centre. But what does this mean for the development of technology?

This article aims to shed light on three such aspects, which are guided by the following questions:

  1. How must the cooperation between people and technology be designed so that responsible and acceptable use is possible?
  2. Which ethical and human-related factors are relevant for the training of AI models?
  3. What should be considered regarding stereotypes in the training data and potential discrimination through AI decisions?

Humans and technology

In the current discussion, artificial intelligence is unfortunately regularly portrayed as an omniscient robot that will eventually set its sights on destroying humanity. This happens both in the mainstream media and in parts of the professional discourse. Opinions differ as to whether such a scenario will ever come about, and if so, when. However, this goal of developing an artificial general intelligence, which as commonly described sounds more like a Hollywood script than a useful tool, must be questioned from the perspective of a humane digital transformation.

There are much more urgent problems in the collaboration between humans and technology, problems that are already relevant for the tools that exist today. In a recent contribution, BFH researchers called for addressing in particular the challenges of how people and machines work together [1]. An important question here is whether the human receives commands from the machine (or the software), or whether the human uses these technologies as tools that support their work, with control over what happens and critical reflection on decisions remaining with the human.

We therefore advocate that such technologies be implemented in the spirit of augmented rather than artificial intelligence. Instead of completely replacing people, the technologies should be used as smart tools that support people in their everyday (work) lives. This is necessary because only humans can keep an eye on the overall context and critically reflect on decisions or proposals. AI is only as good as its training data and works with probabilities. This may often work well, but it is not sufficient for critical decisions about people or resources. A simple pattern that keeps the human in control is sketched below.
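To make this concrete, here is a minimal sketch of such a human-in-the-loop pattern. All names, values and the threshold are hypothetical and purely illustrative: the model only ever produces suggestions with a confidence score, and the final decision remains with the person.

```python
import random

# Stand-in for a real model call; returns a label and a probability.
# In practice this would be a trained classifier, not random values.
def classify(document: str) -> tuple[str, float]:
    label = random.choice(["approve", "reject"])
    return label, random.uniform(0.5, 1.0)

def review_queue(documents: list[str], threshold: float = 0.9):
    """Route every prediction through a human checkpoint.

    Confident predictions are presented as suggestions the human can
    confirm or override; uncertain ones are flagged for fully manual
    handling. The machine never decides on its own.
    """
    for doc in documents:
        label, prob = classify(doc)
        if prob < threshold:
            yield doc, None, "uncertain - manual decision required"
        else:
            yield doc, label, "suggestion - please confirm or override"

for doc, label, action in review_queue(["application A", "application B"]):
    print(doc, label, action)
```

The design choice is deliberate: the software's output is framed as a proposal rather than a command, which is exactly the difference between augmented and artificial intelligence described above.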

The people behind the scenes

Another often underestimated aspect of the ethical considerations around AI systems is the people involved in the training processes. For such systems to learn, humans are often needed, for example to manually categorise texts. While this can sometimes be unproblematic, certain use cases and poor working conditions raise serious concerns.

Research by TIME magazine [2] uncovered exactly this in connection with ChatGPT. In order to keep toxic and offensive texts out of the responses of the well-known chatbot, the company OpenAI had outsourced this task to a company in Kenya. There, workers had to review sometimes deeply disturbing texts for around 2 dollars an hour.

Another question in this context is the origin of the data. Who wrote the texts from which new works are created using AI? The situation is similar for images and audio content, whose training data was created by human artists. Dealing with this problem raises many open questions for the digital society of the future.

Fairness and bias

The third aspect of the human side of the digital transformation is the problem of discrimination by AI. Many examples in recent years have shown how such technologies reflect society's stereotypes, which in turn affects the results of such models. In one well-known example, software trained to sort job application files systematically disadvantaged female applicants [3]. In a previous SocietyByte article, we reported on stereotypes in ChatGPT [4]. Solving this problem technically is very hard and still the subject of ongoing research. In particular, bias-detection methods must be developed for different languages, and the stereotypes present may differ between cultures [5][6]. The sketch below illustrates the basic idea behind one family of such methods.
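One way such stereotypes become measurable is through word embeddings: if an occupation word sits closer to one gendered word than to another, the embedding has absorbed an association from its training data. The following is a minimal sketch with invented toy vectors, not the method from [5]; a real test would use trained embeddings (e.g. fastText or GloVe) and a proper statistic such as WEAT.

```python
import numpy as np

# Hypothetical 3-dimensional embeddings, invented for this example.
emb = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

def cos(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word: str) -> float:
    # Positive: closer to "he"; negative: closer to "she".
    return cos(emb[word], emb["he"]) - cos(emb[word], emb["she"])

for w in ("engineer", "nurse"):
    print(w, round(association(w), 3))
```

At scale, systematic association differences of this kind are exactly what bias-detection research tries to quantify and mitigate, and why doing so across languages and cultures is an open problem.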


About AI research at BFH

The Applied Machine Intelligence research group addresses the scientific, technical and societal challenges of AI technologies. With an applied focus, they put people at the centre. Among other things, they are the technical partner in the EU project BIAS, which is researching how to identify and reduce discrimination in AI applications in the field of HR.


References

[1] https://www.frontiersin.org/articles/10.3389/frobt.2022.997386/full

[2] https://time.com/6247678/openai-chatgpt-kenya-workers/

[3] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[4] https://www.societybyte.swiss/2022/12/22/hi-chatgpt-hast-du-vorurteile/

[5] https://www.frontiersin.org/articles/10.3389/fdata.2021.625290/full

[6] https://ceur-ws.org/Vol-2624/paper6.pdf


AUTHOR: Mascha Kurpicz-Briki

Dr Mascha Kurpicz-Briki is Professor of Data Engineering at the Institute for Data Applications and Security IDAS at Bern University of Applied Sciences, and Deputy Head of the Applied Machine Intelligence research group. Her research focuses, among other things, on the topic of fairness and the digitalisation of social and community challenges.
