On the blind spots of AI – from discrimination to technological responsibility

The relationship between AI and societal diversity is intricate. BFH researcher Mascha Kurpicz-Briki probes this in her interview with AI luminary Roger A. Søraa, who explores the topic in his groundbreaking book "AI for Diversity." He warns of the subtle dangers of AI-induced exclusion, emphasizing the often-overlooked intersections of discrimination. He not only challenges the notion of technological determinism but also advocates for a holistic sociotechnical approach.

Societybyte: In your recent book AI for Diversity you discuss different aspects of how AI can lead to the exclusion of different groups in society. Where do you see the biggest danger in these latest advances?

Roger A. Søraa: In AI for Diversity I focus on several different aspects where AI can exclude based on e.g. gender, age, race and class. There are many well-known examples where AI has been shown to lead to discrimination for these reasons. However, I think the biggest dangers lie in our blind spots: the cases where we don’t realize discrimination is occurring. In the book’s last chapter I focus on “intersectional” exclusion, where two or more parameters together can create exclusionary practices. With AI potentially inferring a lot from personal data, we need to be extra aware of discriminatory exclusion in AI’s recommendations and decisions.

Prof. Roger A. Søraa is an expert in AI.

You also mention the dangers of technological determinism. Could you explain what stands behind this term and how it can be a problem?

Technological determinism is the belief that technological development drives societal change. This can take a utopian form, where we trust technology to solve our problems, as well as a dystopian one, where we blame AI when things go wrong. I argue for a sociotechnical approach, seeing society and technology not as separate forces impacting each other, but as deeply interwoven. What we do as societies thus shapes technology, and vice versa.

Are there also ways in which AI can support diversity?

AI has tremendous potential to change the world, and while I urge us to be careful about its dangers, I also see strong potential for using AI for good. AI can support diversity in many ways: improving the representation of marginalized groups, making content more accessible, e.g. to people with disabilities, supporting educational equality through targeted learning strategies for individuals, making language learning and translation more seamless, and reducing bias in recruitment and management (see for example the BIAS project (biasproject.eu) that I am leading), to mention some of the many potential benefits.

You state that a responsible use of AI technologies is crucial. Can you point out some key elements to be aware of?

AI needs to have humans in the loop to ensure technological responsibility. This relates to issues of transparency (e.g. what data is included, in what way, and how is it being interpreted?); fairness for all groups impacted by the AI; issues of privacy; as well as ethical considerations. The European research community has long focused on Responsible Research and Innovation. Although this approach is both solid and sound, AI is creating new challenges, especially for diversity as discussed in my book, that require good collaboration across knowledge fields to solve.

About the person

Roger A. Søraa is Associate Professor in Science and Technology Studies (STS) at the Department of Interdisciplinary Studies of Culture (KULT). His research focuses on automation, robotization, and the digitalization of society: how humans and technology relate to each other. Dr. Søraa is especially interested in the social domestication of technology; see e.g. his research on hospital robots and gerontechnologies of the home. He is also a Senior Researcher at NTNU Social Research and a partner in the Horizon Europe project BIAS (bfh.ch) together with the BFH School of Engineering and Computer Science.

Creative Commons Licence

AUTHOR: Mascha Kurpicz-Briki

Dr Mascha Kurpicz-Briki is Professor of Data Engineering at the Institute for Data Applications and Security IDAS at Bern University of Applied Sciences, and Deputy Head of the Applied Machine Intelligence research group. Her research focuses, among other things, on the topic of fairness and the digitalisation of social and community challenges.
