Attribute Removal (Fairness Through Unawareness)

Description

Attribute Removal (Fairness Through Unawareness) excludes protected attributes such as race, gender, or age from the model's input features. While this approach prevents direct discrimination, it does not eliminate bias when other features are correlated with protected attributes (proxy discrimination). It is the most basic fairness intervention and often needs to be combined with other approaches to address indirect bias through seemingly neutral features.
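
In practice, the intervention is a preprocessing step: drop the protected columns before the model ever sees them. Below is a minimal sketch assuming a pandas/scikit-learn workflow; the file name applicants.csv, the column names, and the approved target are hypothetical placeholders.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical protected attributes and target; adjust to your schema.
PROTECTED_ATTRIBUTES = ["race", "gender", "age"]
TARGET = "approved"

df = pd.read_csv("applicants.csv")  # assumed input file

# Fairness through unawareness: the model is never shown the
# protected columns (remaining features assumed numeric here).
X = df.drop(columns=PROTECTED_ATTRIBUTES + [TARGET])
y = df[TARGET]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Note that dropping the columns only removes direct access: any remaining feature correlated with a protected attribute can still carry the same signal, which is why the limitations below matter.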

Example Use Cases

Fairness

Removing gender, race, and age attributes from hiring algorithms to prevent direct discrimination, whilst acknowledging that indirect bias may persist through correlated features such as educational institution or postal code.

Excluding protected demographic attributes from credit scoring models to comply with fair lending regulations, ensuring no explicit consideration of race, gender, or ethnicity in loan approval decisions.

Building medical diagnosis models that exclude patient race and ethnicity to prevent biased treatment recommendations, whilst ensuring clinical decisions are based solely on medical indicators and symptoms.

Transparency

Creating transparent regulatory reporting systems that demonstrate compliance by explicitly documenting which protected attributes have been excluded from decision-making algorithms, providing clear audit trails for regulatory review.
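
One way to support such reporting is to emit a machine-readable record of exactly which attributes were excluded from a model's inputs. The sketch below is illustrative only; the function, field names, and model identifier are assumptions rather than any regulatory format.

```python
import json
from datetime import datetime, timezone

def exclusion_audit_record(all_columns, protected, model_id):
    """Document which protected attributes were excluded from a
    model's inputs, for inclusion in a compliance audit trail."""
    return {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "protected_attributes_excluded": sorted(set(protected) & set(all_columns)),
        "features_used": sorted(set(all_columns) - set(protected)),
    }

record = exclusion_audit_record(
    all_columns=["income", "postcode", "gender", "race"],
    protected=["gender", "race"],
    model_id="loan-approval-v3",  # hypothetical identifier
)
print(json.dumps(record, indent=2))
```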

Limitations

  • Proxy discrimination remains a major concern, as seemingly neutral features (education, postal code, previous employment) may strongly correlate with protected attributes and perpetuate indirect bias; see the correlation screen sketched after this list.
  • Intersectional bias cannot be addressed through simple attribute removal, as complex interactions between multiple demographic characteristics may create compounding discrimination effects.
  • Legal and regulatory compliance may be insufficient, as many jurisdictions require demonstrating the absence of disparate impact rather than simply removing protected attributes from models.
  • Identifying all potential proxy variables is practically impossible, especially with high-dimensional data where subtle correlations with protected attributes may exist in unexpected features.
  • Performance degradation may occur if removed attributes contain legitimate predictive information, creating tension between fairness objectives and model accuracy requirements.
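
A first-pass screen for proxies, referenced in the first limitation above, can be as simple as checking correlations between the remaining features and an encoded protected attribute. The sketch below uses pairwise Pearson correlation with an arbitrary 0.3 threshold, so it catches only linear, single-feature proxies; the data and names are made up.

```python
import pandas as pd

def proxy_screen(df, protected, candidate_features, threshold=0.3):
    """Flag numeric features whose absolute Pearson correlation with
    an encoded protected attribute exceeds a heuristic threshold.

    A pairwise linear check is only a first pass: a feature can act
    as a proxy jointly with others, or nonlinearly, without being
    flagged here.
    """
    flagged = []
    for feature in candidate_features:
        r = df[feature].corr(df[protected])
        if abs(r) >= threshold:
            flagged.append((feature, round(r, 3)))
    return flagged

# Toy data; values and the threshold are illustrative only.
df = pd.DataFrame({
    "gender": [0, 1, 0, 1, 1, 0],          # protected attribute, encoded 0/1
    "income": [30, 45, 28, 60, 52, 33],
    "postcode_index": [1, 3, 1, 4, 3, 1],  # hypothetical proxy candidate
})
print(proxy_screen(df, "gender", ["income", "postcode_index"]))
```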

Resources

Fairness Through Awareness
Research Paper · Dwork, Cynthia et al. · Jan 1, 2012

Foundational paper introducing the fairness-through-awareness concept and demonstrating the limitations of fairness through unawareness

Fairness Constraints: Mechanisms for Fair Classification
Research Paper · Zafar, Muhammad Bilal et al. · Jul 19, 2015

Comprehensive analysis of fairness approaches including attribute removal limitations and proxy discrimination challenges

Fairlearn: A toolkit for assessing and improving fairness in machine learning
Software Package

Microsoft's open-source fairness toolkit, whose preprocessing methods extend simple attribute removal with techniques such as correlation removal (see the sketch after this resource list)

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
Book · Kearns, Michael and Roth, Aaron · Nov 1, 2019

Accessible book covering fairness through unawareness concepts and practical considerations for practitioners
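
As noted in the Fairlearn entry above, its preprocessing module offers CorrelationRemover, which goes a step beyond plain removal: it drops the protected column and also projects its linear correlation out of the remaining features. A minimal sketch, assuming a recent fairlearn release and toy data:

```python
import pandas as pd
from fairlearn.preprocessing import CorrelationRemover

df = pd.DataFrame({
    "income": [30, 45, 28, 60, 52],
    "postcode_index": [1, 3, 1, 4, 2],  # hypothetical proxy feature
    "gender": [0, 1, 0, 1, 1],          # protected attribute, encoded 0/1
})

# alpha=1.0 removes the linear correlation with "gender" entirely;
# the transformed output no longer contains the "gender" column.
remover = CorrelationRemover(sensitive_feature_ids=["gender"], alpha=1.0)
X_clean = remover.fit_transform(df)
print(X_clean)
```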
