Traditional Machine Learning Without Fairness
Traditional machine learning without fairness refers to applying classical ML algorithms (e.g., linear regression, decision trees, support vector machines) without fairness-aware techniques or bias-mitigation considerations. The approach optimizes predictive performance metrics such as accuracy or precision alone, often ignoring discriminatory impacts on protected groups (defined, for example, by race, gender, or age). It represents the historical, baseline practice in ML development, in which ethical and social implications are not explicitly addressed during model training or evaluation.
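The accuracy-only pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration on synthetic data: the threshold model, the `make_data` generator, and every parameter are assumptions for demonstration, not a real-world pipeline. The key point is that the sensitive attribute `g` exists in the data but is never consulted by training or evaluation.

```python
# Minimal sketch of fairness-unaware training (all data and parameters are
# hypothetical): optimize accuracy alone, never inspecting the sensitive
# attribute g even though it is present in each record.
import random

random.seed(0)

def make_data(n=1000):
    """Synthetic records (feature x, sensitive group g, label y)."""
    data = []
    for _ in range(n):
        g = random.randint(0, 1)                   # protected group (0/1)
        x = random.gauss(1.0 if g else 0.0, 1.0)   # feature correlated with g
        y = 1 if x + random.gauss(0, 0.5) > 0.5 else 0
        data.append((x, g, y))
    return data

train, test = make_data(800), make_data(200)

def fit_threshold(data):
    """'Training': pick the threshold on x maximizing training accuracy."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(x for x, _, _ in data):
        acc = sum((x > t) == y for x, _, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_threshold(train)
# The only metric this pipeline ever reports: overall accuracy.
accuracy = sum((x > t) == y for x, _, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

Note that group membership `g` is carried through the tuples but deliberately unused, mirroring how fairness-unaware development treats protected attributes as irrelevant to the objective.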
Developers might take this approach where fairness is not a regulatory or ethical concern, such as in non-sensitive applications like weather prediction, spam filtering, or recommendation of non-critical content. It can also be appropriate for initial prototyping or research, where the goal is to establish baseline performance before fairness measures are integrated. In high-stakes domains such as hiring, lending, or criminal justice, however, the approach is increasingly discouraged because it risks perpetuating bias and violating anti-discrimination requirements.
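Before fairness measures are integrated, a baseline model can at least be audited for group disparities. The sketch below (hypothetical: the fixed-threshold `predict` rule, the `make_data` generator, and the 0/1 group encoding are all assumptions) compares selection rates per protected group, the quantity behind the demographic-parity criterion.

```python
# Hypothetical audit sketch: measure how an accuracy-only baseline model
# treats each protected group by comparing selection rates (the gap between
# them is the demographic parity difference).
import random

random.seed(1)

def make_data(n=1000):
    """Synthetic records (feature x, sensitive group g)."""
    data = []
    for _ in range(n):
        g = random.randint(0, 1)
        x = random.gauss(1.0 if g else 0.0, 1.0)  # feature shifted by group
        data.append((x, g))
    return data

def predict(x):
    """Baseline classifier: a fixed threshold on x, ignoring g entirely."""
    return x > 0.5

records = make_data()
rates = {}
for grp in (0, 1):
    members = [x for x, g in records if g == grp]
    rates[grp] = sum(predict(x) for x in members) / len(members)

disparity = abs(rates[0] - rates[1])  # demographic parity difference
print(f"selection rates by group: {rates}")
print(f"demographic parity difference: {disparity:.2f}")
```

Because the feature distribution is shifted by group, the fairness-unaware threshold selects the two groups at visibly different rates, which is exactly the kind of baseline finding that motivates adding bias-mitigation steps afterward.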