When Mehmet and Peter are not the same – bias based on the origin of names in word vectors

Pre-trained language models such as word vectors can contain social stereotypes, which can lead to software making discriminatory decisions. Building on the results of an earlier study, researchers from the Institute for Data Applications and Security (IDAS) at Bern University of Applied Sciences have now investigated further stereotypes hidden in word vectors. The results were published in the journal Frontiers in Big Data.

Natural language processing (or computational linguistics) is a branch of computer science that deals with the automated processing of human language in text or speech data. Besides the many possibilities such technologies open up, however, there are also challenges, especially regarding the fairness of such systems. For example, common applications for the automatic translation of texts are full of social stereotypes: it has been shown that “he” is associated with the adjective strong or the profession dentist, whereas “she” is associated with the adjective pretty or the profession dental hygienist (Republic, 2021).

Bias in word vectors

But how is it that software makes such decisions? To understand this, we introduce the concept of word vectors (word embeddings). Word vectors are mathematical vectors that represent words. Using mathematical operations, it can be determined whether, for example, two words are similar in content (namely, if the corresponding word vectors are close to each other). In order to make existing stereotypes and prejudices in word vectors measurable, a statistical test was developed, the so-called WEAT method (Caliskan et al., 2017). The method is based on the Implicit Association Test (IAT) (Greenwald et al., 1998), which is used in the field of psychology. The IAT detects implicit biases in people: the human test subjects have to associate terms with each other, and based on the reaction time it can be determined whether an implicit bias is present. Analogous to the reaction time of humans in the IAT, the WEAT method uses the distance between the word vectors of two words.
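To illustrate how such closeness is measured, the following minimal sketch computes the cosine similarity between word vectors. The tiny four-dimensional vectors are made up purely for illustration; real experiments use pre-trained embeddings (such as fastText) with several hundred dimensions per word.

```python
# Minimal sketch of how closeness between word vectors is measured.
# The 4-dimensional vectors below are made up for illustration only;
# real experiments use pre-trained embeddings (e.g. fastText) with
# several hundred dimensions per word.
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity: close to 1 for similar words, close to 0 for unrelated ones."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical vectors, chosen so that "he" lies near "strong".
vectors = {
    "he":     np.array([0.9, 0.1, 0.3, 0.0]),
    "she":    np.array([0.1, 0.9, 0.3, 0.0]),
    "strong": np.array([0.8, 0.2, 0.1, 0.1]),
    "pretty": np.array([0.2, 0.8, 0.1, 0.1]),
}

print(cosine_similarity(vectors["he"], vectors["strong"]))   # high similarity
print(cosine_similarity(vectors["she"], vectors["strong"]))  # lower similarity
```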

Cultural differences

Many studies deal with English word vectors. As described in a previous article, author Mascha Kurpicz-Briki applied the WEAT method to German and French word vectors and was able to demonstrate that these also contain a bias (Kurpicz-Briki, 2020). In particular, evidence was found that there are cultural differences in the form in which the bias occurs. A new study by the authors now also examined Italian and Swedish word vectors. There, too, a bias was present in the known experiments, although (analogous to German and French) not all forms of the bias were equally present. This reinforces the hypothesis that the way a stereotype appears in word vectors can differ between languages and cultures.

Bias based on the origin of the name

The new study also investigated whether German-language word vectors contain a bias against certain migration groups. The original experiment was based on African-American and European-American names (Caliskan et al., 2017); it found a statistically significant difference in how the two groups of names were associated with positive and negative words. In the new study, the experiment was adapted to Switzerland. The word vectors of the most common names in Switzerland (e.g. Peter, Daniel, Anna, Ursula) were compared with the most common names from the countries of origin of some large migration groups in Switzerland (e.g. Egzon, Mehmet, Fatma, Aferdita). It could be shown that there is also a statistically significant difference between these two name groups in the German-language word vectors (see WEAT5-origin in Figure 1 for the complete list).

Figure 1: The experiments with which a bias based on name origin was found in German-language word vectors (Kurpicz-Briki & Leoni, 2021).
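The statistic behind such a test can be sketched as follows. This is a minimal illustration of the WEAT effect size as defined by Caliskan et al. (2017): the shortened name lists are taken from the text above, while the attribute words and the embedding lookup are placeholders rather than the study's exact setup (the study uses the full word lists from Figure 1 and pre-trained German word vectors).

```python
# Sketch of the WEAT effect size (Caliskan et al., 2017) for two name groups.
# The name lists are the shortened examples from the text; the attribute words
# and the embedding lookup `emb` are placeholders, not the study's exact setup.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Standardized difference of associations between target groups X and Y."""
    x_assoc = [association(x, A, B, emb) for x in X]
    y_assoc = [association(y, A, B, emb) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Shortened target lists from the article; the full lists are given in Figure 1.
X = ["Peter", "Daniel", "Anna", "Ursula"]        # common names in Switzerland
Y = ["Egzon", "Mehmet", "Fatma", "Aferdita"]     # names from large migration groups

# Illustrative attribute words (placeholders for the study's positive/negative lists).
A = ["wonderful", "peace", "joy"]
B = ["terrible", "failure", "filth"]

# `emb` would map each word to its pre-trained vector, e.g. with fastText:
#   emb = {w: model.get_word_vector(w) for w in X + Y + A + B}
#   d = weat_effect_size(X, Y, A, B, emb)   # positive d: X more strongly linked to A
```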

It was also investigated whether the word vectors contain a bias with regard to occupational terms. For this purpose, the two groups of names were associated with positive (e.g. executive, professional) and negative (e.g. fail, dropout) words related to the professional or financial situation. The experiment was conducted separately for women’s (WEAT6-origin-f) and men’s (WEAT6-origin-m) names; the exact word lists are shown in Figure 1. In both cases, a statistically significant difference could be shown.
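Under the same assumptions as the sketch above, this occupational variant only swaps the attribute lists; the gender-separated name variables below are hypothetical placeholders for the lists in Figure 1.

```python
# Occupational variant of the sketch above: the same effect-size computation,
# only the attribute lists change. X_female, Y_female, X_male, Y_male are
# hypothetical placeholders for the gender-separated name lists in Figure 1.
A_career = ["executive", "professional"]   # positive terms from the article
B_career = ["fail", "dropout"]             # negative terms from the article

# d_f = weat_effect_size(X_female, Y_female, A_career, B_career, emb)  # WEAT6-origin-f
# d_m = weat_effect_size(X_male, Y_male, A_career, B_career, emb)      # WEAT6-origin-m
```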

Consequences of bias in word vectors

What happens when certain groups of the population are represented differently in word vectors? What impact does this have on software that uses these word vectors? These questions are not yet fully understood and will be investigated in further research. It must be ensured that these stereotypes of our society, which have found their way into the word vectors, are not replicated or even reinforced by software applications. This is the only way to prevent automatic decisions from being unfair.


References

  1. (Republic, 2021) Republik, 19 April 2021. https://www.republik.ch/2021/04/19/sie-ist-huebsch-er-ist-stark-er-ist-lehrer-sie-ist-kindergaertnerin
  2. (Caliskan et al., 2017) Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.
  3. (Greenwald et al., 1998) Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz. 1998. Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6):1464.
  4. (Kurpicz-Briki, 2020) Mascha Kurpicz-Briki. 2020. Cultural differences in bias? Origin and gender bias in pre-trained German and French word embeddings. 5th SwissText & 16th KONVENS Joint Conference 2020, Zurich, Switzerland.
  5. (Kurpicz-Briki & Leoni, 2021) Mascha Kurpicz-Briki and Tomaso Leoni. 2021. A world full of stereotypes? Further investigation on origin and gender bias in multi-lingual word embeddings. Frontiers in Big Data, Research Topic: Training Big Data: Fairness and Bias in the Digital Age, April 2021.

About the study

Direct link to the paper: https://www.frontiersin.org/articles/10.3389/fdata.2021.625290/abstract


AUTHOR: Mascha Kurpicz-Briki

Dr Mascha Kurpicz-Briki is Professor of Data Engineering at the Institute for Data Applications and Security IDAS at Bern University of Applied Sciences, and Deputy Head of the Applied Machine Intelligence research group. Her research focuses, among other things, on the topic of fairness and the digitalisation of social and community challenges.

AUTHOR: Tomaso Leoni

Tomaso Leoni is doing an internship at the Institute for Data Applications and Security IDAS at the Bern University of Applied Sciences as part of his studies at the Informatikmittelschule Bern. He is working in the field of Natural Language Processing.
