Robustness in Fairness against Edge-level Perturbations in GNN-based Recommendation

European Conference on Information Retrieval (ECIR) 2024

Abstract

Efforts in the recommendation community are shifting from the sole emphasis on utility to considering beyond-utility factors, such as fairness and robustness. Robustness of recommendation models is typically linked to their ability to maintain the original utility when subjected to attacks. Limited research has explored the robustness of a recommendation model in terms of fairness, e.g., the parity in performance across groups, under attack scenarios. In this paper, we aim to assess the robustness of graph-based recommender systems concerning fairness when exposed to attacks based on edge-level perturbations. To this end, we consider four different fairness operationalizations, covering both consumer and provider perspectives. Experiments on three datasets shed light on the impact of perturbations on the targeted fairness notion, uncovering key shortcomings in existing evaluation protocols for robustness. For example, we observe that perturbations affect consumer fairness to a greater extent than provider fairness, with alarming unfairness for the former.

Motivation

Why study robustness in fairness? While recommendation robustness typically focuses on maintaining utility under attacks, little research explores how attacks affect fairness. An attacker could exploit this blind spot to compromise a system's fairness without significantly changing overall accuracy—potentially damaging a company's reputation and violating emerging regulations.

Our work addresses the intersection of two critical properties: robustness (resilience to attacks) and fairness (equitable treatment across demographic groups). We investigate whether GNN-based recommender systems can maintain fair outcomes when subjected to edge-level perturbations.

Methodology

Perturbation Framework

We extend graph perturbation techniques to assess fairness robustness. Given a user-item bipartite graph $G = (V, E)$ encoded as an adjacency matrix $A$, we iteratively perturb the graph to produce $\tilde{A}$ and measure the resulting fairness impact:

$$\Delta = M(f(\tilde{A}, W), A) - M(f(A, W), A)$$

where $M$ is a fairness metric, $f$ is the GNN-based recommender, and $W$ denotes its trained weights (the same in both terms).
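As a reading aid, the quantity $\Delta$ can be computed as in the minimal sketch below, where `recommend` and `metric` are hypothetical callables standing in for $f$ and $M$; they are placeholders, not part of the paper's codebase.

def fairness_impact(metric, recommend, A, A_tilde, W):
    """Fairness impact Delta of a perturbation A -> A_tilde.

    `recommend(adj, W)` stands in for the GNN recommender f, producing
    recommendations from an adjacency matrix and fixed trained weights W;
    `metric(recs, A)` stands in for the fairness notion M, evaluated
    against the original interactions A.
    """
    return metric(recommend(A_tilde, W), A) - metric(recommend(A, W), A)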

Fairness Operationalizations

We evaluate four fairness notions covering both stakeholder perspectives:

Perspective   Metric                       Description
Consumer      CP (Consumer Parity)         Equal recommendation quality across user groups
Consumer      CS (Consumer Satisfaction)   Equal satisfaction levels across demographics
Provider      PP (Provider Parity)         Equal exposure across item groups
Provider      PS (Provider Satisfaction)   Equal visibility for provider categories
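The paper's exact formulations are not reproduced here; as an illustration of the parity-style notions above, the sketch below assumes that per-user quality scores (e.g., NDCG@k) and per-item exposure scores have already been computed from the recommendation lists.

import numpy as np

def consumer_parity(per_user_quality, user_group_mask):
    """Illustrative CP: absolute gap in mean recommendation quality
    (e.g., NDCG@k per user) between two user groups."""
    q = np.asarray(per_user_quality, dtype=float)
    g = np.asarray(user_group_mask, dtype=bool)
    return abs(q[g].mean() - q[~g].mean())

def provider_parity(per_item_exposure, item_group_mask):
    """Illustrative PP: absolute gap in mean exposure (e.g., share of
    top-k slots) between two item groups."""
    e = np.asarray(per_item_exposure, dtype=float)
    g = np.asarray(item_group_mask, dtype=bool)
    return abs(e[g].mean() - e[~g].mean())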

Perturbation Types

  • Edge Deletion: Removing existing user-item interactions
  • Edge Addition: Injecting fake interactions into the graph (both types are sketched below)
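The toy sketch below applies the two perturbation types to a binary user-item adjacency matrix. Edges are picked at random purely for illustration, whereas the paper perturbs the graph iteratively to target a specific fairness notion.

import numpy as np

def perturb_edges(A, n_edits, mode="delete", seed=0):
    """Toy edge-level perturbation of a binary user-item adjacency matrix.

    mode="delete" removes existing interactions; mode="add" injects fake
    ones. Random edge selection is a placeholder for the targeted
    perturbation procedure used in the paper.
    """
    rng = np.random.default_rng(seed)
    A_tilde = A.copy()
    if mode == "delete":
        users, items = np.nonzero(A_tilde == 1)   # existing edges
    else:
        users, items = np.nonzero(A_tilde == 0)   # non-edges (candidate fakes)
    picks = rng.choice(len(users), size=min(n_edits, len(users)), replace=False)
    A_tilde[users[picks], items[picks]] = 0 if mode == "delete" else 1
    return A_tilde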

Experimental Setup

Datasets

Dataset        Domain      Users   Items    Interactions
MovieLens-1M   Movies      6,040   3,706    1,000,209
Last.FM-1K     Music       268     51,609   200,586
Insurance      Insurance   346     20       1,879

GNN Models

  • GCMC - Graph Convolutional Matrix Completion
  • NGCF - Neural Graph Collaborative Filtering
  • LightGCN - Simplified Graph Convolution for Recommendation (propagation rule sketched below)
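To make the "simplified graph convolution" of LightGCN concrete, here is a generic sketch of its propagation rule (symmetric degree normalization, no feature transformations or nonlinearities, layer-averaged embeddings). It illustrates the architecture in general, not the implementation used in the experiments.

import numpy as np

def lightgcn_propagate(R, E0, n_layers=3):
    """Generic LightGCN-style propagation on a bipartite user-item graph.

    R:  binary user-item interaction matrix, shape (n_users, n_items).
    E0: initial node embeddings, shape (n_users + n_items, d).
    """
    n_users, n_items = R.shape
    n = n_users + n_items
    A = np.zeros((n, n))
    A[:n_users, n_users:] = R          # user -> item edges
    A[n_users:, :n_users] = R.T        # item -> user edges
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    layers = [E0]
    for _ in range(n_layers):
        layers.append(A_hat @ layers[-1])   # parameter-free propagation
    return np.mean(layers, axis=0)          # layer-averaged final embeddings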

Key Findings

Main Result: Edge-level perturbations affect consumer fairness to a much greater extent than provider fairness. Even small perturbations can dramatically increase unfairness between demographic groups.

Consumer vs Provider Fairness

  • Consumer fairness is highly vulnerable: Unfairness levels across consumer groups can be significantly increased by a small number of perturbations
  • Provider fairness shows limited impact: The effect on provider fairness is bounded by the prior unfairness level in the original recommendations
  • Asymmetric sensitivity: Models exhibit different sensitivity patterns for deletion vs. addition attacks

Implications

  • Evaluation protocols are insufficient: Current robustness evaluation focusing only on utility misses critical fairness degradation
  • Regulatory concerns: Given recent regulations on fairness and robustness of automated systems, these findings highlight worrying vulnerabilities
  • Defense priorities: Consumer-side fairness requires more attention in robustness mechanisms

Contributions

  • Novel analysis framework: First comprehensive study of robustness in fairness for GNN-based recommendation
  • Multi-stakeholder perspective: Evaluation covering both consumer and provider fairness notions
  • Practical insights: Uncovering shortcomings in existing robustness evaluation protocols
  • Open source: Full implementation available for reproducibility

BibTeX

@inproceedings{boratto2024robustness,
  author = {Boratto, Ludovico and Fabbri, Francesco and Fenu, Gianni and Marras, Mirko and Medda, Giacomo},
  title = {Robustness in Fairness against Edge-level Perturbations in GNN-based Recommendation},
  booktitle = {Advances in Information Retrieval - 46th European Conference on Information Retrieval, ECIR 2024},
  series = {Lecture Notes in Computer Science},
  year = {2024},
  publisher = {Springer},
  doi = {10.1007/978-3-031-56063-7_42}
}