Second International Workshop on
Algorithmic Bias in Search and Recommendation (BIAS 2021)

to be held as part of the 43rd European Conference on Information Retrieval (ECIR 2021)

April 1, 2021 09:00-16:30 CEST - ONLINE EVENT

Workshop Aims and Scope

Both search and recommendation algorithms provide users with a ranking that aims to match their needs and interests. Although the two classes of algorithms differ in how personalized that ranking is, both learn patterns from historical data, and such data often conveys biases in the form of imbalances and inequalities.

In most cases, the trained models and, by extension, the final rankings unfortunately strengthen these biases. When a bias affects human beings as individuals or as groups with legally protected characteristics (e.g., race, gender), the inequalities reinforced by search and recommendation algorithms can lead to severe societal consequences, such as discrimination and unfairness.

Challenges arising in real-world applications include, among others, controlling the effects of popularity bias to improve the quality of the results as perceived by users, supporting consumers and providers with fair rankings, and transparently explaining why a model returns a given (less) biased result. Hence, being able to detect, measure, characterize, and mitigate bias while keeping high effectiveness is a prominent and timely challenge.
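
As a purely illustrative example of the kind of measure these challenges call for, the sketch below computes how the position-discounted exposure of a single ranking is split across item groups (e.g., popular "head" items versus long-tail items). The group labels, the logarithmic position discount, and all names in the code are assumptions made for this sketch, not material from the workshop.

```python
# Illustrative sketch (assumed group labels and DCG-style discount): measure how
# exposure in one ranked list is distributed across item groups, e.g. popular
# ("head") items versus long-tail items, or protected versus other providers.
import math
from collections import defaultdict

def group_exposure(ranking, item_group):
    """Return each group's share of position-discounted exposure.

    ranking: list of item ids, best first.
    item_group: dict mapping item id -> group label (e.g. 'head' or 'tail').
    """
    exposure = defaultdict(float)
    for rank, item in enumerate(ranking, start=1):
        weight = 1.0 / math.log2(rank + 1)  # higher positions receive more exposure
        exposure[item_group[item]] += weight
    total = sum(exposure.values())
    return {group: value / total for group, value in exposure.items()}

# Toy ranking that places the popular items on top.
groups = {'a': 'head', 'b': 'head', 'c': 'tail', 'd': 'tail'}
print(group_exposure(['a', 'b', 'c', 'd'], groups))
```

Comparing each group's exposure share with, for instance, its share of relevant items gives a simple starting point for quantifying popularity bias or disparate exposure before and after a mitigation step.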

BIAS 2021 will be the ECIR workshop aimed at collecting new contributions in this emerging field and at providing a common ground for interested researchers and practitioners. Specifically, BIAS 2021 will be the second edition of this dedicated event at ECIR, coming after a very successful 2020 edition. Given the community's growing interest in these topics, we expect the workshop to attract even more interest, with a stronger outcome and a wider community dialogue.

Workshop Topics

The workshop welcomes contributions on all topics related to algorithmic bias and fairness in search and recommendation, including (but not limited to):

  • Data Set Collection and Preparation:
    • Managing imbalances and inequalities within data sets
    • Devising collection pipelines that lead to fair and unbiased data sets
    • Collecting data sets useful for studying potentially biased and unfair situations
    • Designing procedures for creating data sets for research on bias and fairness
  • Countermeasure Design and Development:
    • Conducting exploratory analyses that uncover biases
    • Designing treatments that mitigate biases (e.g., popularity bias)
    • Devising interpretable search and recommendation models
    • Providing treatment procedures whose outcomes are easily interpretable
    • Balancing inequalities among different groups of users or stakeholders
  • Evaluation Protocol and Metric Formulation:
    • Conducting quantitative experimental studies on bias and unfairness
    • Defining objective metrics that consider fairness and/or bias
    • Formulating bias-aware protocols to evaluate existing algorithms
    • Evaluating existing strategies in unexplored domains
    • Comparative studies of existing evaluation protocols and strategies
  • Case Study Exploration:
    • E-commerce platforms
    • Educational environments
    • Entertainment websites
    • Healthcare systems
    • Social media
    • News platforms
    • Digital libraries
    • Job portals
    • Dating platforms

Important Dates

  • Submissions: January 11, 2021 (extended from January 4, 2021)
  • Notifications: February 15, 2021 (extended from February 8, 2021)
  • Camera-Ready: March 15, 2021 (extended from March 1, 2021)
  • Workshop: April 1, 2021 - ONLINE EVENT

All deadlines are at 11:59 pm, Anywhere on Earth (AoE) time.

Submission Details

All submissions must be written in English. When preparing their papers, authors should consult the ECIR paper guidelines and Fuhr’s guide to avoiding common IR evaluation mistakes. Authors should also consult Springer’s authors’ guidelines and use their proceedings templates, either LaTeX or Word. Papers should be submitted as PDF files via EasyChair at https://easychair.org/conferences/?conf=bias2021. Please be aware that at least one author per paper needs to register for and attend the workshop to present the work.

We will consider three different submission types:

  • Full papers (12 pages) should be clearly positioned with respect to the state of the art and should state the contribution of the proposal in the targeted domain of application, even if presenting preliminary results. In particular, research papers should describe the methodology in detail, experiments should be repeatable, and a comparison with existing approaches in the literature should be made.
  • Reproducibility papers (12 pages) should repeat prior experiments using the original source code and datasets to show how, why, and when the methods do or do not work (replicability papers), or should repeat prior experiments, preferably using the original source code, in new contexts (e.g., different domains and datasets, different evaluation protocols and metrics) to further generalize and validate (or not) previous work (reproducibility papers).
  • Short or position papers (6 pages) should introduce new points of view on the workshop topics or summarize the experience of a group in the field. Practice and experience reports should describe in detail real-world scenarios in which search and recommender systems are exploited.

Submissions should not exceed the indicated number of pages, including any diagrams and references.

The reviewing process will be coordinated by the organizers. Each paper will receive three reviews from members of the program committee, assigned according to their expertise.

The accepted papers and the material generated during the meeting will be made available on the workshop website. The workshop proceedings will also be published as a Springer Communications in Computer and Information Science (CCIS) revised post-proceedings volume, indexed in Google Scholar, DBLP, and Scopus. Authors of selected papers may be invited to submit an extended version to a journal special issue.

We expect authors, PC, and the organizing committee to adhere to the ACM’s Conflict of Interest Policy and the ACM’s Code of Ethics and Professional Conduct.

Keynote

Carlos Castillo
Universitat Pompeu Fabra (Spain)

Title: Fairness and Transparency in Ranking

Abstract: Ranking in Information Retrieval (IR) has been traditionally evaluated from the perspective of the relevance of search engine results to people searching for information, i.e., the extent to which the system provides "the right information, to the right people, in the right way, at the right time." However, people in current IR systems are not only the ones issuing search queries, but increasingly they are also the ones being searched. This raises several new problems in IR that have been addressed in recent research, particularly with respect to fairness/non-discrimination, accountability, and transparency.

Short Bio: Carlos Castillo is a Distinguished Research Professor at Universitat Pompeu Fabra in Barcelona, where he leads the Web Science and Social Computing research group. He is a web miner with a background in information retrieval and has been influential in the areas of crisis informatics, web content quality and credibility, and adversarial web search. He is a prolific, highly cited researcher who has co-authored over 80 publications in top-tier international conferences and journals, receiving a test-of-time award, four best paper awards, and two best student paper awards. His works include a book on Big Crisis Data, as well as monographs on Information and Influence Propagation and Adversarial Web Search.


Program

Due to the ongoing worldwide COVID-19 situation, the BIAS@ECIR2021 workshop will take place online on April 1, 2021, 09:00-16:30 CEST. To participate, you need to register for the ECIR conference. Once registered, you will receive further details by e-mail on how to join the workshop.

Timing          Content

09:00 - 09:05   Welcome Message
09:05 - 10:00   Keynote Talk on Fairness and Transparency in Ranking by Carlos Castillo (Universitat Pompeu Fabra)
10:00 - 10:30   Paper Session I (short introduction)
  • 10:00 - 10:15 (12 mins + 3 mins Q&A)
    Towards Fairness-Aware Ranking by Defining Latent Groups Using Inferred Features
    Yunhe Feng; Daniel Saelid; Ke Li; Ruoyuan Gao; Chirag Shah
  • 10:15 - 10:30 (12 mins + 3 mins Q&A)
    Media Bias Everywhere? A Vision for Dealing with the Manipulation of Public Opinion
    Michael Färber; Frederic Bartscherer
10:30 - 11:00   Coffee Break
11:00 - 13:00   Paper Session II (short introduction)
  • 11:05 - 11:20 (12 mins + 3 mins Q&A)
    Preliminary Experiments on the Stability of Fairness-Aware Techniques
    Toshihiro Kamishima; Shotaro Akaho; Yukino Baba; Hisashi Kashima
  • 11:20 - 11:40 (15 mins + 5 mins Q&A)
    Detecting Race and Gender Bias in Visual Representation of AI on Web Search Engines
    Mykola Makhortykh; Aleksandra Urman; Roberto Ulloa
  • 11:40 - 12:00 (15 mins + 5 mins Q&A)
    Equality of Opportunity in Ranking: A Fair-Distributive Model
    Elena Beretta; Antonio Vetrò; Bruno Lepri; Juan Carlos De Martin
  • 12:00 - 12:20 (15 mins + 5 mins Q&A)
    Incentives for Item Duplication under Fair Ranking Policies
    Giorgio Maria Di Nunzio; Alessandro Fabris; Gianmaria Silvello; Gian Antonio Susto
  • 12:20 - 12:40 (15 mins + 5 mins Q&A)
    Users' Perception of Search-Engine Biases and Satisfaction
    Bin Han; Chirag Shah; Daniel Saelid
  • 12:40 - 13:00 (15 mins + 5 mins Q&A)
    Quantification of the Impact of Popularity Bias in Multi-Stakeholder and Time-Aware Environment
    Javier Ruiz; María Ignacia Sánchez; Francisco Guíñez
13:00 - 14:10   Lunch Break
14:10 - 16:15   Paper Session III (short introduction)
  • 14:15 - 14:35 (15 mins + 5 mins Q&A)
    When is a Recommendation Model Wrong? A Model-Agnostic Tree-Based Approach to Detecting Biases in Recommendations
    Joanna Misztal-Radecka; Bipin Indurkhya
  • 14:35 - 14:55 (15 mins + 5 mins Q&A)
    Evaluating Video Recommendation Bias on YouTube
    Baris Kirdemir; Maryetta Morris; Esther Mead; Muhammad Nihal Hussein; Nitin Agarwal
  • 14:55 - 15:15 (15 mins + 5 mins Q&A)
    An Information-Theoretic Measure for Enabling Category Exemptions with an Application to Filter Bubbles
    Bowen Wu; Chenyu Jiang; Sanghamitra Dutta; Pulkit Grover
  • 15:15 - 15:35 (15 mins + 5 mins Q&A)
    Perception-Aware Bias Detection for Query Suggestions
    Fabian Haak; Philipp Schaer
  • 15:35 - 15:55 (15 mins + 5 mins Q&A)
    Crucial Challenges in Large-Scale Black Box Analyses
    Tobias Krafft; Martin Reber; Roman Krafft; Anna Couturier; Katharina Zweig
  • 15:55 - 16:15 (15 mins + 5 mins Q&A)
    New Metrics for Offline Evaluation of Content-based TV Recommendation Systems
    Luísa Simões; Vaibhav Shah; João Silva; Nelson Rodrigues; Nuno Leite; Nuno Lopes
16:15 - 16:30   Discussion and Concluding Remarks

Committee

Workshop Chairs

  • Ludovico Boratto
  • Stefano Faralli
  • Mirko Marras
  • Giovanni Stilo

Program Committee

  • Himan Abdollahpouri, Northwestern University (United States)
  • Luca Maria Aiello, Nokia Bell Labs (United Kingdom)
  • Mehwish Alam, FIZ-Karlsruhe & KIT (Germany)
  • Marcelo Gabriel Armentano, ISISTAN Research Institute CONICET-UNCPBA (Argentina)
  • Alejandro Bellogín, Universidad Autónoma de Madrid (Spain)
  • Bettina Berendt, Katholieke Universiteit Leuven (Belgium)
  • Glencora Borradaile, Oregon State University (United States)
  • Federica Cena, University of Turin (Italy)
  • Jeffrey Chen, RMIT University (Australia)
  • Pasquale De Meo, Vrije Universiteit Amsterdam (The Netherlands)
  • Sarah Dean, University of California Berkeley (United States)
  • Danilo Dessì, FIZ-Karlsruhe & KIT (Germany)
  • Michael Ekstrand, Boise State University (United States)
  • Francesco Fabbri, Universitat Pompeu Fabra (Spain)
  • Jean Garcia-Gathright, Spotify (United States)
  • Aniko Hannak, Northeastern University (United States)
  • Nina Grgic-Hlaca, MPI-SWS (Germany)
  • Genet Asefa Gesese, FIZ-Karlsruhe (Germany)
  • Toshihiro Kamishima, AIST (Japan)
  • Martha Larson, Radboud University and TU Delft (The Netherlands)
  • Aonghus Lawlor, University College Dublin (Ireland)
  • Sandy Mayson, University of Georgia (United States)
  • Rishabh Mehrotra, Spotify (United Kingdom)
  • Brent Mittelstadt, University of Oxford (United Kingdom)
  • Cataldo Musto, University of Bari (Italy)
  • Panagiotis Papadakos, Information Systems Laboratory - FORTH-ICS (Greece)
  • Mykola Pechenizkiy, Eindhoven University of Technology (The Netherlands)
  • Simone Paolo Ponzetto, University of Mannheim (Germany)
  • Elissa Redmiles, MPI-SWS (Germany)
  • Flora D. Salim, RMIT University (Australia)
  • Ruofei Shen, Facebook (United States)
  • Damiano Spina, RMIT University (Australia)
  • Antonela Tommasel, ISISTAN Research Institute CONICET-UNCPBA (Argentina)
  • Joris van Hoboken, University of Amsterdam (The Netherlands)
  • Kyle Williams, Microsoft (United States)
  • Eva Zangerle, University of Innsbruck (Austria)
  • Markus Zanker, Free University of Bozen-Bolzano (Italy)
  • Meike Zehlike, MPI-SWS (Germany)
  • Dong Zhou, Hunan University of Science and Technology (China)
  • Arkaitz Zubiaga, Queen Mary University of London (United Kingdom)

Proceedings

Advances in Bias and Fairness in Information Retrieval

Second International Workshop on Algorithmic Bias in Search and Recommendation, BIAS 2021, Lucca, Italy, April 1, 2021, Proceedings

Editors: Boratto, L., Faralli, S., Marras, M., Stilo, G.

https://link.springer.com/book/10.1007/978-3-030-78818-6

Past Editions

We also invite you to check out the previous editions of this and our similar workshops.

Contacts

For general enquiries on the workshop, please send an email to ludovico.boratto@acm.org, stefano.faralli@unitelmasapienza.it, mirko.marras@epfl.ch, and giovanni.stilo@univaq.it.