Equalised Odds Post-Processing
Description
A post-processing fairness technique based on Hardt et al.'s seminal work that adjusts classification thresholds after model training to achieve equal true positive rates and false positive rates across demographic groups. The method uses group-specific decision thresholds, potentially with randomisation, to satisfy the equalised odds constraint whilst sacrificing as little predictive utility as possible. Because it operates purely on model outputs, the approach enables fairness mitigation without retraining, making it applicable to existing deployed models or settings where access to the training data is restricted.
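As a concrete illustration, the sketch below applies this kind of post-processing with fairlearn's ThresholdOptimizer on synthetic data. The dataset, model choice, and parameter values are illustrative assumptions rather than part of the original method, and the exact API may differ between fairlearn versions.

```python
# A minimal sketch of equalised odds post-processing using fairlearn's
# ThresholdOptimizer. Data, model, and parameter choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import MetricFrame, true_positive_rate, false_positive_rate

rng = np.random.default_rng(0)

# Synthetic data: features X, binary labels y, and a binary sensitive attribute.
n = 2000
X = rng.normal(size=(n, 5))
sensitive = rng.integers(0, 2, size=n)  # e.g. group 0 vs group 1
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train any probabilistic classifier; equalised odds is applied afterwards.
base_model = LogisticRegression().fit(X, y)

# Wrap the fitted model and search for group-specific thresholds that
# (approximately) equalise TPR and FPR across groups.
postprocessor = ThresholdOptimizer(
    estimator=base_model,
    constraints="equalized_odds",
    objective="accuracy_score",
    prefit=True,
    predict_method="predict_proba",
)
postprocessor.fit(X, y, sensitive_features=sensitive)

# Predictions may be randomised, so pass a random_state for reproducibility.
y_pred = postprocessor.predict(X, sensitive_features=sensitive, random_state=0)

# Audit the result: TPR and FPR should now be (close to) equal across groups.
audit = MetricFrame(
    metrics={"TPR": true_positive_rate, "FPR": false_positive_rate},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(audit.by_group)
```

Note that the post-processed predictions require the sensitive attribute at prediction time and may be randomised, which relates directly to the limitations listed below.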
Example Use Cases
Fairness
Post-processing a criminal recidivism risk assessment model to ensure equal error rates across racial groups, using group-specific thresholds to achieve equal TPR and FPR whilst maintaining predictive accuracy for judicial decision support.
Transparency
Adjusting a hiring algorithm's decision thresholds to ensure equal opportunities for qualified candidates across gender groups, providing transparent evidence that the screening process treats all demographics equitably.
Reliability
Adjusting the decision thresholds of a medical diagnosis model to maintain equal detection rates across age groups, ensuring reliable performance monitoring and consistent healthcare delivery regardless of patient demographics.
Limitations
- May require randomisation in decision-making, leading to inconsistent outcomes for similar individuals to achieve group-level fairness constraints.
- Post-processing can reduce overall model accuracy or confidence scores, particularly when group-specific ROC curves do not intersect favourably.
- Violates calibration properties of the original model, creating a trade-off between equalised odds and predictive rate parity.
- Limited to combinations of true and false positive rates that lie within the intersection of the group-specific ROC regions, which may represent a poor trade-off when the curves differ substantially (see the sketch after this list).
- Requires access to sensitive attributes during deployment, which may not be available or legally permissible in all contexts.
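Because the achievable operating points depend on how the group-specific ROC curves relate to one another, it can be worth inspecting them before committing to post-processing. The sketch below is a hypothetical helper for that check; the function and variable names are assumptions introduced for illustration.

```python
# A hypothetical helper for inspecting group-specific ROC curves before
# applying equalised odds post-processing; names are illustrative.
import numpy as np
from sklearn.metrics import roc_curve, auc

def group_roc_report(y_true, y_score, sensitive):
    """Compute the ROC curve and AUC separately for each demographic group.

    Widely separated curves suggest that any equalised odds solution will sit
    well below the better group's curve, i.e. a poor accuracy/fairness trade-off.
    """
    report = {}
    for group in np.unique(sensitive):
        mask = sensitive == group
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        report[group] = {"fpr": fpr, "tpr": tpr, "auc": auc(fpr, tpr)}
    return report

# Example usage with the synthetic data and base model from the earlier sketch:
# scores = base_model.predict_proba(X)[:, 1]
# for group, stats in group_roc_report(y, scores, sensitive).items():
#     print(f"group {group}: AUC = {stats['auc']:.3f}")
```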
Resources
Equality of Opportunity in Supervised Learning
Foundational paper by Hardt et al. introducing the equalised odds post-processing algorithm and mathematical framework for fairness constraints.
Equalized odds postprocessing under imperfect group information
Extension of Hardt et al.'s method examining robustness when protected attribute information is imperfect or noisy.