Fairness and Bias in AI Applications for the Labor Market


For the 2024 Applied Machine Learning Days (AMLD) conference held at EPFL, the BFH Applied Machine Intelligence group and NLP expert Dr. Elena Nazarenko organized a track on fairness in AI applications in the labor market.

The track, co-organized by the Applied Machine Intelligence research group of the Bern University of Applied Sciences and Dr. Elena Nazarenko from the Lucerne University of Applied Sciences, brought academic and industry perspectives together and drew on a broad range of disciplines, including data science, law, philosophy, economics, and psychology. After a short introduction by the co-organizers Elena Nazarenko and Mascha Kurpicz-Briki, Mascha presented BIAS, an EU- and Swiss-funded project on mitigating AI bias in the labor market, for which she is one of the technical leads.

Prof. Dr. Mascha Kurpicz-Briki (middle) and the authors of this article in front of their track screen.

Eduard Fosch-Villaronga, Associate Professor at Leiden University and leader of the team specializing in law within the BIAS project, gave the first talk. He described the friction between how job applicants and HR practitioners understand fairness and placed these understandings in the context of EU law and the AI Act. His research also underscores the relevance of the track topic by establishing that AI applications in the labor market are already widespread. Finally, he described the positions of AI developers, HR practitioners, and the general public on the use of AI in the labor market: there is widespread recognition of potential benefits and harms, but also uncertainty about when and how such tools are, and should be, used.

The next talk was given by Preethi Lahoti, Research Scientist at Google, who discussed the process of building safe, inclusive, and fair Large Language Models (LLMs). She spoke about the use of LLMs to enhance their own safety and fairness, focusing on adversarial testing and mitigation strategies. She introduced AI-assisted Red Teaming (AART) for adversarial testing, where an LLM generates its own prompts to probe for harmful responses, as well as a novel method called collective-critiques and self-voting (CCSV), which improves diversity in an LLM’s output by having it generate, critique, improve and vote on multiple responses for a given prompt.
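The generate-critique-improve-vote loop behind CCSV can be illustrated in a few lines. This is a minimal sketch, not Google's implementation: `model` is a placeholder for any LLM call (here, any callable that maps an instruction string to a response string), and the prompt wording is invented for illustration.

```python
def ccsv(prompt, model, n_candidates=4, n_rounds=2):
    """Sketch of collective-critiques and self-voting (CCSV).

    `model(instruction)` stands in for an LLM call. The same model
    generates candidates, critiques them collectively, revises them,
    and finally votes on the best one.
    """
    # 1. Generate several candidate responses to the same prompt.
    candidates = [model(f"Respond to: {prompt}") for _ in range(n_candidates)]
    for _ in range(n_rounds):
        # 2. Collective critique: the model reviews all candidates at once.
        critique = model("Critique these responses for diversity and bias:\n"
                         + "\n".join(candidates))
        # 3. Revision: each candidate is rewritten in light of the critique.
        candidates = [model(f"Improve '{c}' given this critique: {critique}")
                      for c in candidates]
    # 4. Self-voting: the model votes repeatedly; the majority answer wins.
    votes = [model("Vote for the best of:\n" + "\n".join(candidates))
             for _ in range(n_candidates)]
    return max(set(votes), key=votes.count)
```

Because critique and voting reuse the same model, the method needs no extra training, only additional inference calls.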

As the last speaker of the first session, Alejandro Jesús Castañeira Rodriguez, from Janzz.technology, addressed the critical issue of fairness and bias in AI-powered recommendations within the workforce. He presented Janzz Technology's recommendation system. Crucially, it differs from pure machine-learning systems by explicitly relying on relevant features and knowledge graphs, thereby increasing transparency and mitigating unfair bias.
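The transparency argument can be made concrete with a toy example. The following is an illustrative sketch, not Janzz's actual system: a hand-written skill graph (edges from a specific skill to the broader skill it implies) lets every match or mismatch be traced to an explicit edge, unlike an opaque similarity score.

```python
# Toy skill graph: each skill points to the broader skill it implies.
SKILL_GRAPH = {
    "pytorch": "deep learning",
    "deep learning": "machine learning",
    "scikit-learn": "machine learning",
    "machine learning": "data science",
}

def expand(skills):
    """Follow graph edges so 'pytorch' also counts as 'machine learning'."""
    known = set(skills)
    frontier = list(skills)
    while frontier:
        broader = SKILL_GRAPH.get(frontier.pop())
        if broader and broader not in known:
            known.add(broader)
            frontier.append(broader)
    return known

def match(candidate_skills, job_requirements):
    """Return matched and missing requirements -- an auditable explanation."""
    have = expand(candidate_skills)
    matched = {r for r in job_requirements if r in have}
    return matched, set(job_requirements) - matched
```

A rejected candidate's missing requirements are listed explicitly, so the decision can be inspected and contested, which is much harder with an end-to-end learned ranking.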

The afternoon session began with a talk from Christoph Heitz, Professor at ZHAW, who specializes in algorithmic fairness. He presented different viewpoints on fairness, ranging from Aristotle to techno-solutionism, the belief that technology can solve all social problems. He argued that to create a fair algorithmic system, we must first define an ethical stance. We then need corresponding fairness metrics that aim to measure the degree to which the system is aligned with the chosen notion of fairness. His talk concluded with a unifying framework for existing fairness metrics, showing that each arises from a specific choice of utility function, demographic groupings, and justifiers, i.e., moral reasons for justified inequality.
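Two of the fairness metrics in this framework are easy to state in code. The sketch below (our own illustration, with invented function names) shows how demographic parity and equal opportunity differ only in their "justifier": equal opportunity restricts the comparison to qualified candidates, treating qualification as a morally justified reason for unequal selection rates.

```python
def selection_rate(decisions, groups, group):
    """Fraction of positive decisions (1 = selected) within one group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

def demographic_parity_gap(decisions, groups):
    """Absolute gap in selection rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(selection_rate(decisions, groups, a)
               - selection_rate(decisions, groups, b))

def equal_opportunity_gap(decisions, labels, groups):
    """Same gap, but computed only among truly qualified candidates
    (label == 1): qualification acts as the justifier for inequality."""
    qualified = [(d, g) for d, y, g in zip(decisions, labels, groups) if y == 1]
    ds = [d for d, _ in qualified]
    gs = [g for _, g in qualified]
    a, b = sorted(set(gs))
    return abs(selection_rate(ds, gs, a) - selection_rate(ds, gs, b))
```

A system can satisfy one metric while violating the other, which is exactly why the choice of metric is an ethical decision before it is a technical one.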


Elena Nazarenko (left), Alexandre Puttick (third from right) and Mascha Kurpicz-Briki (second from right) with colleagues.

The next presenter was Cynthia Liem, an Associate Professor at TU Delft. She described cross-disciplinary work with psychologists on algorithmic job candidate screening, as well as soon-to-be-published work on mathematical notions of fairness. She highlighted important misunderstandings between disciplines. For example, personality tests have been used in very questionable ways to label data for the automatic evaluation of video interviews. Furthermore, her recent research indicates that no single mathematical fairness notion is suitable for early candidate selection.

Following this, Jana Mareckova, Assistant Professor in Econometrics at the University of St. Gallen, discussed using causal machine learning to evaluate the effectiveness of unemployment programs, such as courses and subsidies. Her talk covered measuring treatment effects for different groups as well as for a given individual. The latter poses particular challenges, since in practice we cannot observe what would have happened if the individual had done a different program or no program at all. She highlighted the insights these models provide and the interpretability challenges they pose.
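The core difficulty, that each person is observed under only one treatment condition, is visible even in the simplest estimator. The sketch below (our own illustration, not Mareckova's causal machine learning methodology) computes naive difference-in-means effects per subgroup, which is only valid under the strong assumption that program assignment is as good as random within each group.

```python
def average_treatment_effect(outcomes, treated):
    """Naive difference-in-means ATE; valid only if assignment to the
    program (treated = 1) is as good as random."""
    t = [y for y, d in zip(outcomes, treated) if d]
    c = [y for y, d in zip(outcomes, treated) if not d]
    return sum(t) / len(t) - sum(c) / len(c)

def group_effects(outcomes, treated, groups):
    """Treatment effect per subgroup (e.g. age bracket, education level).

    The individual-level effect is never observed directly: each person
    is either treated or not, so per-person effects must be modeled
    (the job of causal ML), not measured.
    """
    effects = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        effects[g] = average_treatment_effect(
            [outcomes[i] for i in idx], [treated[i] for i in idx])
    return effects
```

Causal machine learning methods refine this picture by adjusting for confounders and estimating effects at ever finer granularity, at the cost of the interpretability challenges mentioned in the talk.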

At the end of the session Pencho Yordanov, Lead Data Scientist at The Adecco Group, presented his work on the use of LLMs in high-risk sectors like Human Resources. He highlighted the role of psychology in understanding and mitigating biases in decision-making, describing how human cognitive biases are also present in LLMs and how such biases affect candidate selection. He explained the influence of decoy candidates, who are very similar to but slightly worse than other candidates, on recruiter preferences, and observed similar effects in experiments with GPT-3.5 and GPT-4, demonstrating a further level of care that must be taken when deploying such systems.
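A decoy experiment of this kind can be framed as a simple measurement harness. This is an illustrative sketch of the general idea, not the speaker's actual experimental setup: `choose` is a placeholder for a recruiter or an LLM prompted to pick one candidate, and the decoy effect is measured as the shift in the target's preference share when the decoy is added.

```python
def preference_share(choose, target, competitor, decoy=None, trials=100):
    """Fraction of trials in which `choose` picks `target`.

    `choose(candidates)` stands in for a recruiter or a prompted LLM.
    A rational chooser's preference between target and competitor
    should not change when a dominated decoy is added.
    """
    candidates = [target, competitor] + ([decoy] if decoy else [])
    wins = sum(choose(candidates) == target for _ in range(trials))
    return wins / trials

def decoy_effect(choose, target, competitor, decoy, trials=100):
    """Shift in the target's preference share caused by adding a decoy
    (similar to but slightly worse than the target). Nonzero values
    indicate the chooser exhibits the decoy bias."""
    without = preference_share(choose, target, competitor, trials=trials)
    with_decoy = preference_share(choose, target, competitor, decoy,
                                  trials=trials)
    return with_decoy - without
```

Running such a harness against both human recruiters and LLM-based screeners makes the two kinds of bias directly comparable.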

About AMLD

The Applied Machine Learning Days take place at EPFL in Lausanne and are among the largest machine learning and AI events in Europe, focusing specifically on applications of machine learning and AI. The event brings together more than 2,000 leaders, experts, and enthusiasts from academia, industry, start-ups, NGOs, and public authorities from more than 41 countries.

Creative Commons Licence

AUTHOR: Elena Nazarenko

Elena Nazarenko is a data scientist at the Zurich-based developer Witty Works. She has a background in theoretical and computational physics and has, among other things, developed an NLP project for collaborative work management, built a chatbot prototype, and improved the free-text search of an eCommerce platform. Previously, she worked as a scientist at the Paul Scherrer Institute (ETH Domain, Switzerland) and at national research institutes in Sweden and France.

AUTHOR: Alexandre Puttick

Dr. Alexandre Puttick is a post-doctoral researcher in the Applied Machine Intelligence research group at the Bern University of Applied Sciences. His current research explores the development of clinical mental health tools and detecting and mitigating bias in AI-driven recruitment tools.
