Do Neural Ranking Models Intensify Gender Bias?

Navid Rekabsaz, Markus Schedl

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings › peer-review

Abstract

Concerns regarding the footprint of societal biases in information retrieval (IR) systems have been raised in several previous studies. In this work, we examine various recent IR models from the perspective of the degree of gender bias in their retrieval results. To this end, we first provide a bias measurement framework which includes two metrics to quantify the degree of the unbalanced presence of gender-related concepts in a given IR model's ranking list. To examine IR models by means of the framework, we create a dataset of non-gendered queries, selected by human annotators. Applying these queries to the MS MARCO Passage retrieval collection, we then measure the gender bias of a BM25 model and several recent neural ranking models. The results show that while all models are strongly biased toward male, the neural models, and in particular the ones based on contextualized embedding models, significantly intensify gender bias. Our experiments also show an overall increase in the gender bias of neural models when they exploit transfer learning, namely when they use (already biased) pre-trained embeddings.
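The paper's exact metrics are not reproduced in this record, but the general idea of quantifying the unbalanced presence of gender-related concepts in a ranking list can be sketched in a few lines. The word lists, the term-frequency magnitude, and the rank_bias function below are hypothetical simplifications for illustration, not the authors' published formulation.

# A minimal sketch of a rank-based gender-bias measure in the spirit of the
# framework described in the abstract. All names and word lists here are
# illustrative assumptions, not the paper's actual metrics.

from typing import List

# Hypothetical gender-definitional word lists (assumption for illustration).
MALE_WORDS = {"he", "him", "his", "man", "men", "male"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "female"}


def gender_magnitude(doc: str, vocab: set) -> float:
    """Term-frequency magnitude of one gender concept in a document."""
    tokens = doc.lower().split()
    if not tokens:
        return 0.0
    return sum(t in vocab for t in tokens) / len(tokens)


def rank_bias(ranked_docs: List[str], cutoff: int = 10) -> float:
    """Average (male - female) magnitude over the top-`cutoff` documents.

    Positive values indicate a male-leaning ranking, negative values a
    female-leaning one, and 0 a balanced list.
    """
    top = ranked_docs[:cutoff]
    if not top:
        return 0.0
    diffs = [
        gender_magnitude(d, MALE_WORDS) - gender_magnitude(d, FEMALE_WORDS)
        for d in top
    ]
    return sum(diffs) / len(diffs)


if __name__ == "__main__":
    # Toy ranking list standing in for a model's top retrieved passages.
    docs = [
        "He said the man was a pioneer in his field.",
        "The committee reviewed the proposal carefully.",
        "She and the other women led the project.",
    ]
    print(f"rank bias @3: {rank_bias(docs, cutoff=3):+.4f}")

Applied to the top-k passages a model returns for a non-gendered query, a positive score under this sketch indicates a male-leaning result list; averaging over a query set would give a model-level bias estimate.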
Original language: English
Title of host publication: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
Pages: 2065–2068
Number of pages: 4
DOIs
Publication status: Published - Jul 2020

Fields of science

  • 202002 Audiovisual media
  • 102 Computer Sciences
  • 102001 Artificial intelligence
  • 102003 Image processing
  • 102015 Information systems

JKU Focus areas

  • Digital Transformation
