The Debiaser – Novel Tools for Bias Detection and Mitigation in AI Systems

Bias is a major challenge of today’s artificial intelligence (AI) technologies. The proof-of-concept technology developed in the Horizon Europe project BIAS addresses this problem. First results were recently presented at EWAF’23, the European Workshop on Algorithmic Fairness, by BFH researchers in collaboration with partners from Leiden University.

The Challenges of Bias in AI

Over the past few years, many examples have exposed the problem of bias in applications of artificial intelligence. Recruiting AI that discriminated against women [2], racist chatbots [3], and passport photo checkers that failed for dark-skinned women [1] are just a few examples of what can happen when bias is encoded in machine learning training data or models.

Detecting and mitigating bias involves many challenges. AI technologies come in many different forms and rely on different kinds of training data, including videos, images, text, and structured data; each kind may call for different bias detection and mitigation methods. Moreover, the societal stereotypes that lead to such bias in the data can be directed at very different attributes, including gender, origin, age, and many other personal characteristics, and the bias can be intersectional. Bias can also be introduced at different stages of the development process: the training data may reflect historical biases of society, or bias may enter later, in the way the technology is developed or applied. Finally, defining fairness is itself challenging. What is considered fair can mean different things to different people and therefore requires additional investigation for each specific use case.

AI in the Labor Market

AI technologies are increasingly applied in the context of the labor market as well. The questions of fairness and bias in this context are investigated in the Horizon Europe funded project “BIAS.” In a first round of fieldwork, 70 HR managers and AI developers located in different European countries were interviewed [4]. In general, participants had a positive attitude towards the deployment of AI applications supporting the recruitment and selection process; however, some expressed concerns about involving AI in the management of staff. The participants also called for the adoption of mitigation measures to address diversity biases in this context, with a special emphasis on gender bias.

The goal of the BIAS project is to implement methods to detect and mitigate bias in AI applications, in particular language models, and to develop fair decision-making in the context of HR applications. For this purpose, the project is developing a proof-of-concept technology called the Debiaser.

The Debiaser

The Debiaser consists of three different components, as shown in Figure 1.

Figure 1: The different components of the Debiaser.


First, it examines how recruiting can be made fair using case-based reasoning. This requires a use-case-specific definition of fairness, ensuring that similar candidates are treated in a similar way. The core idea of case-based reasoning is to solve new problems by reusing successful solutions that were previously applied to similar problems and manually curated by humans.
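As a minimal, illustrative sketch of this idea (the candidate attributes, similarity weights, and case base below are assumptions made for demonstration, not the BIAS project’s actual implementation), similarity-based retrieval of previously curated cases could look like this:

```python
from dataclasses import dataclass

# Hypothetical sketch: a "case" pairs a candidate's job-relevant attributes
# with the decision previously made (and vetted by humans) for that candidate.
@dataclass
class Case:
    years_experience: float   # illustrative attribute, not the project's actual schema
    skill_match: float        # fraction of required skills covered (0..1)
    outcome: str              # e.g. "invite" or "reject"

def similarity(a: Case, b: Case) -> float:
    """Toy similarity: closer attribute values yield a higher score."""
    exp_sim = 1.0 - min(abs(a.years_experience - b.years_experience) / 10.0, 1.0)
    skill_sim = 1.0 - abs(a.skill_match - b.skill_match)
    return 0.5 * exp_sim + 0.5 * skill_sim

def retrieve(query: Case, case_base: list[Case], k: int = 3) -> list[Case]:
    """Retrieve the k most similar past cases; their outcomes are reused or adapted."""
    return sorted(case_base, key=lambda c: similarity(query, c), reverse=True)[:k]

# Usage: the decision proposed for a new candidate is grounded in how
# similar candidates were treated before, supporting consistent treatment.
case_base = [
    Case(5.0, 0.80, "invite"),
    Case(1.0, 0.30, "reject"),
    Case(6.0, 0.90, "invite"),
]
new_candidate = Case(5.5, 0.85, outcome="")
for past in retrieve(new_candidate, case_base, k=2):
    print(past.outcome, round(similarity(new_candidate, past), 2))
```

The point of the sketch is only the mechanism: decisions are anchored to curated precedents rather than to an opaque model, which is what makes the "similar candidates are treated similarly" notion of fairness operational.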

Second, the Debiaser investigates how applications from the field of natural language processing (NLP) can be biased in this context. One part looks at text-based decision making: the aim is to explain how decisions are made, thereby exposing potential bias in the training data, and to propose methods to mitigate this bias.
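As a hedged toy illustration of how explanations can expose bias (the mini dataset, labels, and linear model below are fabricated for demonstration and are not the project’s actual approach), one can inspect which words a simple text classifier relies on:

```python
# Toy sketch: a linear text classifier whose weights can be inspected to see
# which words drive a (simulated) screening decision. Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "she led the volleyball team and studied marketing",
    "he led the football team and studied marketing",
    "she organised community events and studied economics",
    "he managed a student project and studied economics",
]
labels = [0, 1, 0, 1]  # 1 = shortlisted in this fabricated "historical" data

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Words with the most negative weights push towards rejection. If a word like
# "she" appears here, the model has learned a gender proxy from biased labels.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word, w in sorted(weights.items(), key=lambda kv: kv[1])[:5]:
    print(f"{word:>12s}  {w:+.3f}")
```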

Finally, the Debiaser investigates how societal stereotypes are reflected in language models. Language models are the models behind applications like ChatGPT or Bard. They encode the relationships between words as mathematical vectors, so that calculations can be performed to determine, for example, whether two words are related to each other. This is where bias comes into play: it has been shown that there is a biased association between career/family words and male/female first names [5]. Research also indicates that these types of societal stereotypes encoded in language models depend on the language and the cultural context [6]. The Debiaser aims to quantify and reduce bias in language models of different European languages, as shown in Figure 2.

Figure 2: One part of the Debiaser investigates how to measure and reduce bias in pre-trained language models of different European languages.
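As an illustrative sketch in the spirit of the association tests of [5] (the three-dimensional word vectors below are made-up toy values, not embeddings from a real model), such a bias measurement could look like this:

```python
# Toy sketch of a WEAT-style association measure: how much closer a first name
# sits to career words than to family words in vector space.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up toy vectors (assumed values, not taken from a real language model)
vectors = {
    "career": np.array([0.9, 0.1, 0.0]),
    "salary": np.array([0.8, 0.2, 0.1]),
    "family": np.array([0.1, 0.9, 0.0]),
    "home":   np.array([0.2, 0.8, 0.1]),
    "john":   np.array([0.7, 0.2, 0.3]),
    "anna":   np.array([0.2, 0.7, 0.3]),
}

career, family = ["career", "salary"], ["family", "home"]

def association(word, A, B):
    """Mean cosine similarity to word set A minus mean similarity to set B."""
    sim_a = np.mean([cosine(vectors[word], vectors[a]) for a in A])
    sim_b = np.mean([cosine(vectors[word], vectors[b]) for b in B])
    return sim_a - sim_b

# A positive value means the name is closer to career words than to family words;
# a systematic gap between male and female names indicates encoded stereotypes.
print("john:", round(association("john", career, family), 3))
print("anna:", round(association("anna", career, family), 3))
```

Applied to real pre-trained embeddings in several European languages, this kind of differential association is what the Debiaser seeks to quantify and then reduce.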


First insights into the project and the ongoing work on the Debiaser were recently presented at the European Workshop on Algorithmic Fairness (EWAF’23) by researchers from the Applied Machine Intelligence research group at BFH in collaboration with partners from the eLaw center of Leiden University:

Rigotti, C., Puttick, A., Fosch-Villaronga, E., and Kurpicz-Briki, M. (2023). The BIAS project: Mitigating diversity biases of AI in the labor market. European Workshop on Algorithmic Fairness (EWAF’23), Winterthur, Switzerland, June 7-9, 2023. Available at https://ceur-ws.org/Vol-3442/paper-47.pdf


References

[1] https://www.bbc.com/news/technology-54349538.amp

[2] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[3] https://www.bbc.com/news/technology-35902104

[4] Rigotti, C., Puttick, A., Fosch-Villaronga, E., and Kurpicz-Briki, M. (2023). The BIAS project: Mitigating diversity biases of AI in the labor market. European Workshop on Algorithmic Fairness (EWAF’23), Winterthur, Switzerland, June 7-9, 2023. Available at https://ceur-ws.org/Vol-3442/paper-47.pdf

[5] Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186.

[6] Kurpicz-Briki, M., & Leoni, T. (2021). A World Full of Stereotypes? Further Investigation on Origin and Gender Bias in Multi-Lingual Word Embeddings. Frontiers in Big Data, 4, 625290. Available at https://www.frontiersin.org/articles/10.3389/fdata.2021.625290/full


Link to eLaw – Center for Law and Digital Technologies https://www.universiteitleiden.nl/en/law/institute-for-the-interdisciplinary-study-of-the-law/elaw

Link to AMI research group: bfh.ch/ami

Link to Project BIAS: biasproject.eu


AUTHOR: Mascha Kurpicz-Briki

Dr Mascha Kurpicz-Briki is Professor of Data Engineering at the Institute for Data Applications and Security IDAS at Bern University of Applied Sciences, and Deputy Head of the Applied Machine Intelligence research group. Her research focuses, among other things, on the topic of fairness and the digitalisation of social and community challenges.
