Workshop Aims and Scope
Creating efficient and effective search and recommendation algorithms has long been a main objective of industry practitioners and academic researchers. However, recent research has shown that algorithms trained on historical data can exacerbate existing biases and generate potentially negative outcomes. Defining, assessing, and mitigating these biases throughout experimental pipelines is a primary step for devising search and recommendation algorithms that can be responsibly deployed in real-world applications. This workshop aims to collect novel contributions in this field and to offer a common ground for interested researchers and practitioners.
Workshop Topics
The workshop accepts contributions on all topics related to algorithmic bias and fairness in search and recommendation, focused on (but not limited to):
- Data Set Collection and Preparation:
  - Studying the interplay between bias and imbalanced data or rare classes
  - Designing methods for dealing with imbalances and inequalities in data
  - Creating collection pipelines that lead to fair and less biased data sets
  - Collecting data sets useful for the analysis of biased and unfair situations
  - Designing collection protocols for data sets tailored to research on bias
- Countermeasure Design and Development:
  - Formalizing and operationalizing bias and fairness concepts
  - Conducting exploratory analyses that uncover novel types of bias
  - Designing treatments that mitigate biases in pre-/in-/post-processing
  - Devising methods for explaining bias in search and recommendation
  - Studying causal and counterfactual reasoning for bias and fairness
- Evaluation Protocol and Metric Formulation:
  - Performing auditing studies with respect to bias and fairness
  - Conducting quantitative experimental studies on bias and unfairness
  - Defining objective metrics that consider fairness and/or bias
  - Formulating bias-aware protocols to evaluate existing algorithms
  - Evaluating existing mitigation strategies in unexplored domains
  - Conducting comparative studies of existing evaluation protocols and strategies
  - Analysing efficiency and scalability issues of debiasing methods
- Case Study Exploration:
  - E-commerce platforms
  - Educational environments
  - Entertainment websites
  - Healthcare systems
  - Social media
  - News platforms
  - Digital libraries
  - Job portals
  - Dating platforms
Important Dates
- Submissions: January 19, 2023 (extended from January 12, 2023)
- Notifications: February 16, 2023
- Camera-Ready: March 2, 2023
- Workshop: April 2, 2023 - Dublin, Ireland (with support for remote attendance)
All deadlines are 11:59pm, Anywhere on Earth (AoE) time.
Submission Details
We invite authors to submit unpublished original papers, written in English. Submitted papers should not have been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, such as journals, conferences, or workshops.
When preparing their papers, authors should consult the ECIR paper guidelines and Fuhr's guide to avoiding common IR evaluation mistakes. Authors should also follow Springer's authors' guidelines and use the Springer proceedings templates, in either LaTeX or Word.
Papers should be submitted as PDF files via EasyChair at https://easychair.org/conferences/?conf=bias2023.
We will consider three different submission types:
- Full papers (12 pages) should be clearly positioned with respect to the state of the art and should state their contribution in the domain of application, even if presenting preliminary results. In particular, research papers should describe the methodology in detail, experiments should be repeatable, and a comparison with existing approaches in the literature should be provided.
- Reproducibility papers (12 pages) should either repeat prior experiments using the original source code and datasets to show how, why, and when the methods work or do not work (replicability papers), or repeat prior experiments, preferably using the original source code, in new contexts (e.g., different domains and datasets, or different evaluation protocols and metrics) to further generalize and validate (or not) previous work (reproducibility papers).
- Short or position papers (6 pages) should introduce new points of view on the workshop topics or summarize a group's experience in the field. Practice and experience reports should present in detail real-world scenarios in which search and recommender systems are exploited.
Submissions should not exceed the indicated number of pages, including any diagrams and references.
All submissions will go through a double-blind review process and be reviewed by at least three reviewers on the basis of relevance to the workshop, novelty/originality, significance, technical quality and correctness, quality and clarity of presentation, quality of references, and reproducibility.
Submitted papers will be rejected without review if they are not properly anonymized, do not comply with the template, or do not follow the above guidelines.
The accepted papers and the material generated during the meeting will be made available on the workshop website. The workshop proceedings will be published as a revised post-proceedings volume in Springer's Communications in Computer and Information Science (CCIS) series, indexed by Google Scholar, DBLP, and Scopus. The authors of selected papers may be invited to submit an extended version to a journal special issue.
Please be aware that at least one author per paper needs to register and attend the workshop to present the work.
We expect authors, the program committee, and the organizing committee to adhere to the ACM’s Conflict of Interest Policy and the ACM’s Code of Ethics and Professional Conduct.
Keynote Speakers
Dr. Asia J. Biega
Max Planck Institute for Security and Privacy - MPI-SP (Germany)
Title: Fair Ranking Systems: What's Missing?
Abstract: Many approaches to fairness in ranking center on algorithms, metrics, and data. Will focusing solely on these factors allow us to create equitable systems in practice? This talk will explore three issues that might influence the effectiveness of fairness interventions. First, we'll examine tensions between fairness and other responsibility principles mandated by data protection laws. Second, we'll demonstrate the role of interface design in the societal outcomes of a ranking platform. Finally, we'll conclude with a reflection on the importance and practical outcomes of normatively grounding our fairness metrics.
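To make the notion of a ranking fairness metric concrete, below is a minimal sketch (not taken from the talk) of one common family of such metrics: group-level exposure under a position-discount model, compared against the groups' average relevance. The ranked list, the log-based discount, and all names are illustrative assumptions.

```python
import math

# Illustrative ranked list: (item id, group label, graded relevance).
# The data and the DCG-style position discount are assumptions for this sketch.
ranking = [
    ("item1", "A", 3.0),
    ("item2", "A", 2.0),
    ("item3", "B", 3.0),
    ("item4", "B", 2.0),
    ("item5", "B", 1.0),
]

def exposure(rank: int) -> float:
    # Attention a user is assumed to give rank k; higher ranks get more.
    return 1.0 / math.log2(rank + 1)

def group_stats(ranking):
    # Accumulate total exposure, total relevance, and item counts per group.
    exp, rel, n = {}, {}, {}
    for rank, (_item, group, relevance) in enumerate(ranking, start=1):
        exp[group] = exp.get(group, 0.0) + exposure(rank)
        rel[group] = rel.get(group, 0.0) + relevance
        n[group] = n.get(group, 0) + 1
    # Average exposure and average relevance per group.
    return {g: (exp[g] / n[g], rel[g] / n[g]) for g in exp}

# One possible normative choice: exposure should be proportional to relevance
# ("merit"); unequal exposure/relevance ratios then indicate unfairness.
for group, (avg_exp, avg_rel) in group_stats(ranking).items():
    print(f"group {group}: avg exposure {avg_exp:.3f}, "
          f"avg relevance {avg_rel:.3f}, ratio {avg_exp / avg_rel:.3f}")
```

Whether exposure should be proportional to relevance, equal across groups, or allocated by some other principle is precisely the kind of normative grounding question the talk raises.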
Short Bio: Asia J. Biega is a tenure-track faculty member at the Max Planck Institute for Security and Privacy (MPI-SP), where she leads the Responsible Computing group. Her research centers on developing, examining, and computationally operationalizing principles of responsible computing, data governance & ethics, and digital well-being. Before joining MPI-SP, Asia worked at Microsoft Research Montréal in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) Group. She completed her PhD in Computer Science at the MPI for Informatics and the MPI for Software Systems, winning the DBIS Dissertation Award of the German Informatics Society. In her work, Asia engages in interdisciplinary collaborations while drawing on her traditional CS education and her industry experience, including stints at Microsoft and Google.
Dr. Henriette Cramer
Spotify (USA)
Title: Algorithmic Impact Practices in Product Development
Abstract: When organizations want to assess their algorithmic impact, more is required than research alone. Unintended negative side effects of machine learning are gaining attention: using historical data can perpetuate stereotypes, and machine-learned models and recommendations can amplify existing inequalities. However, pragmatic challenges stand in the way of practitioners who are committed to addressing these issues. There are no clear, off-the-shelf, industry-standard processes that can be readily applied in practice to decide which negative impacts to assess or how to address them. Barriers include the extensive research necessary to understand domain-specific issues, the work of turning that research into concrete methods to assess and address those issues, and the challenges of implementing solutions at organizational scale and in large infrastructures. This includes translation between the Product, Trust & Safety, and Algorithmic Impact communities. In this talk, I'll share lessons learnt from both organizational and technical practice that could be useful for others trying to address such challenges in product development, and for those who study algorithmic impact methods academically.
Short Bio: Henriette Cramer is Head of Algorithmic Impact and a Director in Spotify's Trust & Safety team. Her current work focuses on assessing and addressing the impact of data, platform, and machine learning decisions in music and podcast streaming. This includes translating abstract calls to action into concrete structure and tooling at organizational scale, as well as data-informed product direction. Henriette has experience in a variety of Product, Policy, and Research settings, ranging from exploratory human-robot interaction in academia to practical search, voice, advertising, and recommender systems in industry. This makes her especially interested in how different fields, functions, and domains diverge in their histories and incentives, and how that impacts collaboration. Henriette holds a PhD from the University of Amsterdam focused on people's interaction with adaptive and autonomous systems, a collection of patent filings related to her industry work, and 50+ peer-reviewed academic publications, which can be found at henriettecramer.com.
Dr. Harrie Oosterhuis
Radboud University (The Netherlands)
Title: Counterfactual Learning to Rank for Search and Recommendation
Abstract: Search and recommendation systems are vital for the accessibility of content on the internet. The basis of these systems is ranking models that turn collections of items into rankings: small, ordered lists of items to be displayed to users. Modern ranking models are mostly optimized based on user interactions. Generally, learning from user behavior leads to systems that receive more user engagement than those optimized based on expert judgements. However, user interactions are biased indicators of user preference: whether something is interacted with often has less to do with preference and more with where and how it was presented. In response to this bias problem, recent years have seen the introduction and development of the counterfactual Learning to Rank (LTR) field. This field covers methods that learn from historical user interactions, i.e., click logs, and aim to optimize ranking models in terms of the actual user preferences. To achieve this goal, counterfactual LTR methods have to correct for the biases that affect clicks. In this talk, I will compare counterfactual estimation for bandit algorithms with methods specifically designed for learning to rank. Additionally, I will present a recently introduced doubly robust method for correcting position bias in user interactions with rankings.
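As a pointer to the core idea, here is a minimal sketch (not from the talk) of the inverse propensity scoring (IPS) estimator that underlies much of counterfactual LTR: clicks are reweighted by the inverse of the estimated probability that their position was examined, so that items shown lower in a ranking are not unfairly penalized. The toy click log, the 1/rank examination model, and all function names are illustrative assumptions.

```python
# Toy click log: each record is (document id, rank at which it was shown, clicked?).
click_log = [
    ("doc_a", 1, 1),
    ("doc_b", 2, 0),
    ("doc_c", 3, 1),
    ("doc_a", 2, 1),
    ("doc_c", 5, 0),
]

def examination_propensity(rank: int) -> float:
    # Position-based examination model: probability that a user examines
    # rank k, here assumed to decay as 1/k (a common simplification).
    return 1.0 / rank

def naive_estimate(log):
    # Average click rate per document. Biased: documents shown at low
    # ranks are examined less often, so their clicks under-count relevance.
    clicks, shows = {}, {}
    for doc, _rank, click in log:
        shows[doc] = shows.get(doc, 0) + 1
        clicks[doc] = clicks.get(doc, 0) + click
    return {doc: clicks[doc] / shows[doc] for doc in shows}

def ips_estimate(log):
    # Each click is weighted by 1 / P(examined at its rank), which in
    # expectation removes position bias from the relevance estimate.
    weighted, shows = {}, {}
    for doc, rank, click in log:
        shows[doc] = shows.get(doc, 0) + 1
        weighted[doc] = weighted.get(doc, 0.0) + click / examination_propensity(rank)
    return {doc: weighted[doc] / shows[doc] for doc in shows}

print("naive:", naive_estimate(click_log))
print("IPS:  ", ips_estimate(click_log))
```

The doubly robust approach mentioned in the abstract combines such an IPS term with a regression-based click model, which reduces the high variance IPS suffers from when propensities are small.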
Short Bio: Harrie Oosterhuis is an assistant professor in the Data Science Group of the Institute of Computing and Information Sciences (iCIS) at Radboud University. His research lies at the intersection of machine learning and information retrieval and primarily concerns learning from user interactions with rankings. In particular, it focuses on methods that correct for the effects of interaction biases. He received his PhD cum laude from the University of Amsterdam under the supervision of Prof. Maarten de Rijke, with the thesis titled "Learning from User Interactions with Rankings: A Unification of the Field". He is also a recipient of the 2021 Google Research Scholar Award for early-career researchers and of the WSDM'21, SIGIR'21, and ICTIR'22 best paper awards.
Program
The workshop will take place in person in Dublin, Ireland, with support for remote attendance, on April 2, 2023, 09:00–17:15 GMT. The workshop program is structured as follows.
| Timing | Content |
|---|---|
| 09:00–09:02 | Welcome Message |
| 09:02–09:50 | Keynote Talk on "Fair Ranking Systems: What's Missing?" by Dr. Asia J. Biega, Max Planck Institute for Security and Privacy - MPI-SP (Germany) [Slides - Video] |
| 09:50–10:35 | Paper Session I: Biases Exploration and Assessment |
| 10:35–11:00 | Coffee Break |
| 11:00–12:30 | Paper Session II: Mitigation Strategies against Biases |
| 12:30–13:30 | Lunch Break |
| 13:30–14:20 | Keynote Talk on "Counterfactual Learning to Rank for Search and Recommendation" by Dr. Harrie Oosterhuis, Radboud University (The Netherlands) [Slides - Video] |
| 14:20–15:05 | Paper Session III: Biases in Newly Emerging Domains of Application |
| 15:05–15:30 | Coffee Break |
| 15:30–16:15 | Paper Session IV: Novel Perspectives and Conceptualizations of Biases |
| 16:15–17:05 | Keynote Talk on "Algorithmic Impact Practices in Product Development" by Dr. Henriette Cramer, Spotify (USA) [Slides - Video] |
| 17:05–17:10 | SoBigData.it - European Integrated Infrastructure for Social Mining and Big Data Analytics - Promoting Diversity and Inclusion [Slides - Video] |
| 17:10–17:15 | Closing Message |
Organization
Workshop Chairs
- Ludovico Boratto, University of Cagliari (Italy)
- Stefano Faralli, Sapienza University of Rome (Italy)
- Mirko Marras, University of Cagliari (Italy)
- Giovanni Stilo, University of L’Aquila (Italy)
Program Committee
- Marcelo Gabriel Armentano, National Scientific and Technical Research Council (CONICET), Argentina
- Ashwathy Ashokan, University of Nebraska Omaha, USA
- Ebrahim Bagheri, Ryerson University, Canada
- Christine Bauer, Utrecht University, The Netherlands
- Alejandro Bellogín, Universidad Autónoma de Madrid, Spain
- Jeffrey Chan, RMIT University, Australia
- Evgenia Christoforou, CYENS Centre of Excellence, Cyprus
- Giordano D'Aloisio, University of L'Aquila, Italy
- Andrea D'Angelo, University of L'Aquila, Italy
- Yashar Deldjoo, Polytechnic University of Bari, Italy
- Danilo Dessì, GESIS – Leibniz Institute for the Social Sciences, Germany
- Francesco Fabbri, Spotify, Spain
- Nina Grgic-Hlaca, Max Planck Institute for Software Systems, Germany
- Danula Hettiachchi, RMIT University, Australia
- Toshihiro Kamishima, National Institute of Advanced Industrial Science and Technology, Japan
- Kunal Khadilkar, MIT College of Engineering, USA
- Dominik Kowald, Know-Center, Austria
- Emanuel Lacic, Graz University of Technology, Austria
- Dana McKay, RMIT University, Australia
- Giacomo Medda, University of Cagliari, Italy
- Cataldo Musto, University of Bari, Italy
- Julia Neidhardt, TU Wien, Austria
- Harrie Oosterhuis, Radboud University, The Netherlands
- Panagiotis Papadakos, Information Systems Laboratory - FORTH-ICS, Greece
- Alessandro Sebastian Podda, University of Cagliari, Italy
- Simone Paolo Ponzetto, University of Mannheim, Germany
- Lorenzo Porcaro, Joint Research Centre EC, Italy
- Erasmo Purificato, Otto-von-Guericke Universität Magdeburg, Germany
- Alessandro Raganato, University of Helsinki, Finland
- Amifa Raj, Boise State University, USA
- Vaijanath Rao, Quicken Inc., USA
- Yongli Ren, RMIT University, Australia
- Mete Sertkan, TU Wien, Austria
- Manel Slokom, Delft University of Technology, The Netherlands
- Nasim Sonboli, Tufts University, USA
- Tom Sühr, Technische Universität Berlin, Germany
- Marko Tkalcic, University of Primorska, Slovenia
- Antonela Tommasel, National Scientific and Technical Research Council (CONICET), Argentina
- Christoph Trattner, University of Bergen, Norway
- Rohini U, Glassdoor, USA
- Eva Zangerle, University of Innsbruck, Austria
- Arkaitz Zubiaga, Queen Mary University of London, UK
Register
Please register via the ECIR 2023 main conference website, following the instructions at http://ecir2023.org/attending/registration.html. Both in-person and virtual participation options are available.
Past Editions
We also invite you to check out the previous editions of the workshop.
Contacts
For general enquiries on the workshop, please send an email to ludovico.boratto@acm.org, stefano.faralli@uniroma1.it, mirko.marras@acm.org, and giovanni.stilo@univaq.it.
Acknowledgements
The "Fourth International Workshop on Algorithmic Bias in Search and Recommendation (Bias 2023)" event was organised as part of the SoBigData.it project (Prot. IR0000013 - Call n. 3264 of 12/28/2021) initiatives aimed at training new users and communities in the usage of the research infrastructure (SoBigData.eu). SoBigData.it receives funding from European Union – NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: “SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics” – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021.