Description

An in-processing fairness technique that uses meta-learning to modify an existing classifier so that it optimises fairness metrics whilst maintaining predictive performance. The method learns how to adjust model parameters or decision boundaries through iterative optimisation so that fairness constraints such as demographic parity or equalised odds are satisfied. This approach is particularly valuable for retrofitting fairness to pre-trained models that perform well but exhibit bias, since it incorporates fairness constraints without requiring complete retraining from scratch.
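
The core loop can be illustrated with a minimal sketch: a frozen base model produces scores, and an outer optimisation learns per-group decision thresholds that trade accuracy against a demographic parity penalty. All names here are illustrative rather than taken from any particular library, and real implementations of this technique typically adjust richer sets of parameters than thresholds.

```python
import numpy as np

def parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def fit_group_thresholds(scores, y_true, groups, lam=1.0,
                         grid=np.linspace(0.1, 0.9, 81), sweeps=3):
    """Learn one decision threshold per group on top of a frozen model's
    scores, minimising (1 - accuracy) + lam * parity_gap."""
    thresholds = {g: 0.5 for g in np.unique(groups)}
    for _ in range(sweeps):          # coordinate descent over the groups
        for g in thresholds:
            best_t, best_obj = thresholds[g], np.inf
            for t in grid:
                trial = {**thresholds, g: t}
                t_arr = np.array([trial[gi] for gi in groups])
                y_pred = (scores >= t_arr).astype(int)
                obj = (1 - (y_pred == y_true).mean()) + lam * parity_gap(y_pred, groups)
                if obj < best_obj:
                    best_obj, best_t = obj, t
            thresholds[g] = best_t
    return thresholds

# Toy usage with synthetic scores that are biased in favour of group 0.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, 2000)
y_true = rng.integers(0, 2, 2000)
scores = np.clip(0.5 * y_true + 0.15 * (groups == 0) + rng.normal(0.15, 0.2, 2000), 0, 1)
print(fit_group_thresholds(scores, y_true, groups, lam=2.0))
```

With a large `lam` the learned thresholds differ across groups and the parity gap shrinks at some accuracy cost; the base model itself is never retrained, which is the appeal of the retrofit approach.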

Example Use Cases

Fairness

Retrofitting an existing hiring algorithm to achieve demographic parity across gender and ethnicity groups by using meta-learning to adjust decision boundaries, ensuring equitable candidate selection whilst maintaining the model's ability to identify qualified applicants.
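
A hedged illustration of the audit this use case implies (all variable names are hypothetical): demographic parity can be checked across the intersection of gender and ethnicity by comparing subgroup selection rates, for example against the common four-fifths heuristic used in hiring audits.

```python
import numpy as np

rng = np.random.default_rng(1)
decisions = rng.integers(0, 2, 1000)       # retrofitted model's hire/no-hire outputs
gender = rng.choice(["f", "m"], 1000)
ethnicity = rng.choice(["a", "b", "c"], 1000)

# Selection rate for every gender x ethnicity subgroup.
rates = {}
for g in np.unique(gender):
    for e in np.unique(ethnicity):
        mask = (gender == g) & (ethnicity == e)
        if mask.any():
            rates[(g, e)] = decisions[mask].mean()

# Demographic parity holds approximately when the rates are close; the
# min/max ratio is the "four-fifths rule" check.
print(rates, min(rates.values()) / max(rates.values()))
```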

Transparency

Modifying a pre-trained credit scoring model to provide transparent fairness guarantees by learning optimal parameter adjustments that satisfy equalised odds constraints, enabling clear reporting on fair lending compliance to regulatory authorities.
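
For the compliance reporting described above, the equalised odds gaps can be computed directly. A minimal sketch with hypothetical names, assuming binary labels and a protected attribute column:

```python
import numpy as np

def equalised_odds_gaps(y_true, y_pred, groups):
    """Largest cross-group gaps in true-positive and false-positive
    rates; both are near zero when equalised odds holds."""
    tpr = {g: y_pred[(groups == g) & (y_true == 1)].mean() for g in np.unique(groups)}
    fpr = {g: y_pred[(groups == g) & (y_true == 0)].mean() for g in np.unique(groups)}
    return (max(tpr.values()) - min(tpr.values()),
            max(fpr.values()) - min(fpr.values()))

# Toy report for a credit model's approvals across two groups.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 1000)
groups = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
tpr_gap, fpr_gap = equalised_odds_gaps(y_true, y_pred, groups)
print(f"TPR gap: {tpr_gap:.3f}, FPR gap: {fpr_gap:.3f}")
```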

Reliability

Adapting a medical diagnosis model to ensure reliable performance across patient demographics by meta-learning fairness-aware adjustments that maintain diagnostic accuracy whilst reducing disparities in treatment recommendations across age and socioeconomic groups.
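
The reliability check this use case implies reduces to comparing diagnostic accuracy per demographic subgroup before and after adjustment; a toy sketch with hypothetical variables:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-subgroup accuracy; a large spread indicates unreliable
    performance for some demographics."""
    return {g: (y_pred[groups == g] == y_true[groups == g]).mean()
            for g in np.unique(groups)}

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 1500)
y_pred = rng.integers(0, 2, 1500)
age_band = rng.choice(["<40", "40-65", ">65"], 1500)
accs = accuracy_by_group(y_true, y_pred, age_band)
print(accs, max(accs.values()) - min(accs.values()))
```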

Limitations

  • Meta-learning approach can be complex to implement, requiring expertise in both the underlying classifier and meta-optimisation techniques.
  • Requires extensive hyperparameter tuning to balance fairness constraints with predictive performance, making optimisation challenging.
  • May result in longer training times compared to simpler fairness techniques due to the iterative meta-learning process.
  • Performance depends heavily on the quality and characteristics of the base classifier being modified, limiting effectiveness with poorly performing models.
  • Theoretical guarantees about fairness-accuracy trade-offs may not hold in practice due to finite sample effects and optimisation challenges.

Resources

Research Papers

Algorithmic decision making methods for fair credit scoring
Darie Moldovan, Sep 16, 2022

The effectiveness of machine learning in evaluating the creditworthiness of loan applicants has been demonstrated for a long time. However, there is concern that the use of automated decision-making processes may result in unequal treatment of groups or individuals, potentially leading to discriminatory outcomes. This paper seeks to address this issue by evaluating the effectiveness of 12 leading bias mitigation methods across 5 different fairness metrics, as well as assessing their accuracy and potential profitability for financial institutions. Through our analysis, we have identified the challenges associated with achieving fairness while maintaining accuracy and profitability, and have highlighted both the most successful and least successful mitigation methods. Ultimately, our research serves to bridge the gap between experimental machine learning and its practical applications in the finance industry.

The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective
Naman Goel et al., Dec 21, 2020

Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or can not be recovered from the training data. Our theoretical results imply that many of these algorithms can not guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves similar performance to the optimal algorithm that requires centralization and non-recoverable distributions.

Documentation

ρ-Fair Method — holisticai documentation
Holisticai Developers, Jan 1, 2024
aif360.algorithms.inprocessing — aif360 0.1.0 documentation
Aif360 Developers, Jan 1, 2018
Welcome to AI Fairness 360's documentation! — aif360 0.1.0 ...
Aif360 Developers, Jan 1, 2018
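
The aif360 documentation listed above includes a meta-fair in-processing implementation (MetaFairClassifier, based on Celis et al.'s meta-algorithm with fairness constraints). A hedged usage sketch follows; the exact API may vary between aif360 versions, and AdultDataset assumes the raw Adult data files have been downloaded to the location aif360 expects.

```python
from aif360.datasets import AdultDataset
from aif360.algorithms.inprocessing import MetaFairClassifier
from aif360.metrics import ClassificationMetric

dataset = AdultDataset()
train, test = dataset.split([0.7], shuffle=True)

# tau trades predictive performance against the fairness constraint;
# type='sr' targets statistical rate (demographic parity), while
# type='fdr' targets false-discovery-rate parity.
clf = MetaFairClassifier(tau=0.8, sensitive_attr="sex", type="sr")
clf.fit(train)
pred = clf.predict(test)

metric = ClassificationMetric(
    test, pred,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
print("Statistical parity difference:", metric.statistical_parity_difference())
```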

Tags

Data Type:
Expertise Needed:
Fairness Approach:
Technique Type: