“We need to remove bias from language models” – a podcast episode about AI in administration and justice

ChatGPT is all the rage right now. The trained language model can chat about almost any topic and hold conversations that seem human. But ChatGPT also makes mistakes. In the latest episode of the podcast “Let’s Talk Business”, AI expert Mascha Kurpicz-Briki and data scientist Matthias Stürmer talk about how bias can be avoided, why the term augmented intelligence fits better than AI, and how such systems can be used in justice and administration. Click here for the episode and an abridged written version.

Today we are talking about a topic that has been making a splash since the beginning of the year: ChatGPT. The trained language model can chat about any topic and hold conversations that seem quite human. But ChatGPT also makes factual errors, and another problem is bias. In a research project, you investigated how artificial intelligence discriminates. How does that show up?

Prof. Dr. Mascha Kurpicz-Briki is deputy head of the Applied Machine Intelligence Research Group at the Department of Technology and Computer Science at BFH.

Mascha Kurpicz-Briki: Bias is a big problem. There are already biases in the training data on which such systems are trained. This is data that our society has produced, which means it contains all the discrimination and stereotypes that exist in society. If we then train an AI on it, these stereotypes are automatically present in the system. This is a problem in general, and also in the field of automatic language and text processing, what we call Natural Language Processing, which includes ChatGPT. In the beginning it was easier to detect a bias. Now the system often reacts very evasively, saying: “I don’t want to comment on that.” Or: “I’m just a language model and the topic is too sensitive for me.”
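To make this concrete: one common way to surface such bias is to measure association strengths in pretrained word embeddings, the building blocks of many NLP systems. A minimal sketch in Python, using the publicly available GloVe vectors via gensim rather than the models from the research project:

```python
# Minimal sketch: surfacing stereotypical associations in pretrained
# word embeddings. Uses public GloVe vectors via gensim purely as an
# illustration; this is not the setup from the research project.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained embedding

# Cosine similarity between profession words and gendered pronouns:
# a systematic gap hints at a stereotype encoded in the training data.
for job in ["nurse", "engineer", "teacher", "mechanic"]:
    print(job,
          "she:", round(model.similarity(job, "she"), 3),
          "he:", round(model.similarity(job, "he"), 3))
```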

So would one have to train ChatGPT with non-discriminatory data? And is that even possible?

Mascha Kurpicz-Briki: The topic is being tackled in research and by large companies, but it’s not that simple. The easiest thing would be if we as a society simply stopped repeating stereotypes and encoding them in our data. But that is not possible from one day to the next. This means we work with data that has grown historically; it is a product of its time. Part of our research is about getting these biases out of the training data and language models.
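One family of mitigation techniques operates directly on the word vectors: estimate a “bias direction” and project it out. A minimal sketch of this idea (hard debiasing in the spirit of Bolukbasi et al.), shown here with toy numpy vectors instead of a real embedding model:

```python
import numpy as np

def remove_bias_direction(vec, bias_dir):
    """Remove the component of `vec` lying along `bias_dir`.

    Core step of "hard debiasing": after the projection, the word
    vector is orthogonal to the estimated bias direction.
    """
    unit = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, unit) * unit

# Toy example: in a real embedding space, the bias direction is often
# estimated from paired vectors such as "he" - "she".
he = np.array([0.8, 0.1, 0.3])
she = np.array([0.2, 0.7, 0.3])
nurse = np.array([0.3, 0.6, 0.5])

debiased = remove_bias_direction(nurse, he - she)
print(np.dot(debiased, he - she))  # ~0.0: bias component removed
```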

Matthias, you are currently developing a language model for legal texts at your institute. What does that involve?

Matthias Stürmer: We are focusing on legal topics from the administrative context. We are examining whether anonymised court decisions can be de-anonymised in order to re-identify people. That would be a threat to privacy. Nevertheless, court decisions are published so that there is transparency about the court system. Together with lawyers, we are examining the possibilities and, fortunately, also the limits of AI, so that this cannot be done too easily. In this context, we have looked at how legal texts are structured: court decisions have clear structures, and certain paragraphs have specific functions. In the process, we realised that a language model that focuses on legal terminology and understands German, French and Italian does not yet exist. We are now working on these foundations.

Why is the anonymisation of court decisions important?

Prof. Dr. Matthias Stürmer heads the Institute Public Sector Transformation at the Department of Economics at BFH.

Matthias Stürmer: Anonymisation is generally practised in courts so that the privacy of the parties involved is preserved. Once company and personal names are anonymised, the content of the court case can be made public. That’s why our research project is called Open Justice versus Privacy: it is a field of tension. On the one hand, we want to protect privacy; on the other hand, we want transparency so that judgments, especially federal court decisions, are comprehensible. Lawyers fear that with big data and artificial intelligence it will suddenly be possible to simply de-anonymise the data. But we have already been able to disprove that: even ChatGPT cannot de-anonymise court rulings, and it cannot be done with reasonable effort.
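For a sense of how such anonymisation can be automated: named-entity recognition (NER) can flag person and organisation names for replacement. A minimal sketch with spaCy’s off-the-shelf German model; the Federal Supreme Court pipeline described below is considerably more sophisticated:

```python
# Minimal sketch of NER-based anonymisation using spaCy's small
# off-the-shelf German model; a production court pipeline would be
# considerably more sophisticated.
import spacy

nlp = spacy.load("de_core_news_sm")  # generic German NER pipeline

def anonymise(text: str) -> str:
    doc = nlp(text)
    out = text
    # Replace recognised persons/organisations from right to left,
    # so character offsets stay valid while the string is edited.
    for ent in reversed(doc.ents):
        if ent.label_ in ("PER", "ORG"):
            out = out[:ent.start_char] + f"[{ent.label_}]" + out[ent.end_char:]
    return out

print(anonymise("Hans Muster klagt gegen die Beispiel AG in Bern."))
# e.g. "[PER] klagt gegen die [ORG] in Bern." (model permitting)
```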

Besides data protection, what other risks are there?

Matthias Stürmer: Bias and discrimination are certainly a huge issue. But what also concerns us is the idea of digital sovereignty. Today, many AI models are produced by companies from the USA and China. We want to show that Switzerland can produce such models with its own resources. All the existing chat programmes are a black box: we don’t know with which data they were trained, with which algorithms, and with which protective mechanisms they are equipped. We want to make this transparent, so that we become less dependent and phenomena like fake news are curbed. Ultimately, this is important for democracy and our society.

What can your model be used for later?

Matthias Stürmer: It’s like a foundation on which different applications are built. I often compare it to an engine, a jet engine so to speak, that can be used in different places. In our case, we are building a language model for the Federal Supreme Court that anonymises very well. Until now, this was done manually, with search-and-replace commands. That worked quite well, but court decisions often contain other personally identifying features, and our language model should filter these out even better. We also experimented with the difference between generic and specific language models: the specific model recognises cloze texts much better, for example, and can fill the gaps more precisely than a generic one. From there, all kinds of ideas become possible.
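The cloze comparison can be tried out directly with a masked language model: give it a sentence with a gap and let it rank candidate words. A minimal sketch with a generic pretrained German BERT from Hugging Face, not the specialised legal model under development:

```python
# Minimal sketch of the cloze ("fill-mask") task with a generic German
# BERT; a legal-domain model should rank legal terms noticeably better.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-german-cased")

# The model proposes words for the [MASK] gap, with a confidence score.
for candidate in fill("Das Gericht weist die [MASK] ab."):
    print(round(candidate["score"], 3), candidate["token_str"])
```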

Which ones, for example?

Matthias Stürmer: One idea is to reformulate legal texts, which are often difficult to understand, in simpler language. In Natural Language Processing there is a processing step called text simplification, which can be used to produce plainer legal language. We haven’t tried it yet, but it is a well-established area of research in NLP. It would make difficult texts more inclusive, so that everyone can understand them.
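Text simplification is usually framed as a text-to-text task: a sequence-to-sequence model is fine-tuned on pairs of complex and simplified sentences. A minimal sketch of what that could look like; the model id used here is hypothetical, since no such fine-tuned model is named in the interview:

```python
# Minimal sketch of text simplification as a text-to-text task.
# "example-org/german-legal-simplifier" is a HYPOTHETICAL model id;
# in practice one would fine-tune a seq2seq model on pairs of
# complex and simplified sentences.
from transformers import pipeline

simplify = pipeline("text2text-generation",
                    model="example-org/german-legal-simplifier")

legal = "Die Beschwerde wird abgewiesen, soweit darauf einzutreten ist."
print(simplify(legal, max_length=60)[0]["generated_text"])
```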

Mascha Kurpicz-Briki: This is a very nice example of how language models can be used for inclusion. Because on the one hand we have the risk of discrimination, but on the other hand there are many exciting use cases where language models are very useful to us.

This is a shortened version of the conversation; you can listen to the full episode here:


Links to the topic

Institute Public Sector Transformation

Project OpenJustice vs. Privacy

Applied Machine Intelligence Working Group

Project Detection & Mitigation of Biases in AI Applied to the Labour Market


TRANSFORM 2023: Artificial Intelligence in the Public Sector

The theme of TRANSFORM 2023 is “artificial intelligence in the public sector”. Machine learning, chatbots, natural language processing and other methods based on artificial intelligence (AI) offer many opportunities, but also certain risks for public authorities. Where are we today with the application of AI? What experiences does the administration have with AI? Where is there potential for the use of AI in administrative work processes? What are the possible risks and opportunities? These questions will be discussed together with speakers from science, administration and other organisations.
Keynotes will be given by Paulina Grnarova (DeepJudge) and Bertrand Loison (FSO), followed by a reality check by Matthias Mazenauer (Canton ZH) and a commentary by Marc Steiner (IPST). A variety of contributions with practical experience, legal considerations and critical questions will follow.

Further information and registration can be found here.


This podcast is produced with the kind support of: Audioflair Bern and Podcastschmiede Winterthur.

Creative Commons Licence

AUTHOR: Anne-Careen Stoltze

Anne-Careen Stoltze is editor-in-chief of the science magazine SocietyByte and host of the podcast “Let’s Talk Business”. She works in communications at BFH Business School and is a journalist and geologist.
