Oct 4, 2021 » Journal Reviews on Fairness
7 min; updated Feb 12, 2023
Meta 📑 Instead of changing the data or learners in multiple ways and then seeing whether fairness improves, postulate that the root causes of bias are the prior decisions that generated the training data. These affect (a) what data was selected, and (b) the labels assigned to the examples. They propose the \(\text{Fair-SMOTE}\) (Fair Synthetic Minority Over Sampling Technique) algorithm, which (1) removes biased labels via situation testing: if the model's prediction for a data point changes once all of that data point's protected attributes are flipped, then the label is biased and the data point is discarded; and (2) rebalances internal distributions so that, for each protected attribute, examples are represented equally in both the positive and negative classes....
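A minimal sketch of the situation-testing step, assuming binary 0/1 protected attributes, a pandas DataFrame with a `label` column, and an already-fitted scikit-learn-style classifier; the function and column names are illustrative, not taken from the paper's released code.

```python
import pandas as pd

def situation_testing(df: pd.DataFrame, model, protected_attrs: list[str]) -> pd.DataFrame:
    """Discard rows whose predicted label flips when every protected attribute is flipped."""
    X = df.drop(columns=["label"])
    original_pred = model.predict(X)

    # Flip each binary (0/1) protected attribute for every row at once.
    X_flipped = X.copy()
    for attr in protected_attrs:
        X_flipped[attr] = 1 - X_flipped[attr]
    flipped_pred = model.predict(X_flipped)

    # Rows whose prediction changes under the flip carry labels the paper
    # treats as biased; keep only the stable rows.
    return df[original_pred == flipped_pred].reset_index(drop=True)
```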
Jul 4, 2021 » Rage Against the Algorithm
4 min; updated Sep 5, 2022
The lack of explainability is a common theme. Higher-ups claim the machine is unbiased, while the workers on the ground say, “It’s not me; it’s the computer”. Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor should be an enlightening read.

Computers Can Solve Your Problem. You May Not Like the Answer

The algorithm had four guiding principles (see the sketch below for one way to encode them):

- Increase # of high school students starting after 8am
- Decrease # of elementary school students dismissed after 4pm
- Accommodate the needs of special education students
- Generate transportation savings

The algorithm’s solution met unprecedented opposition....
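A hypothetical sketch (not the district's actual system) of how those four principles might be folded into a single weighted objective over a candidate schedule; every field name and weight below is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Schedule:
    hs_after_8am: int          # high school students starting after 8am
    elem_after_4pm: int        # elementary students dismissed after 4pm
    sped_needs_met: int        # special-education accommodations satisfied
    transport_savings: float   # dollars saved on busing

def score(s: Schedule, w=(1.0, 1.0, 1.0, 1.0)) -> float:
    # Reward late high-school starts, accommodations, and savings;
    # penalize late elementary dismissals (hence the negative term).
    return (w[0] * s.hs_after_8am
            - w[1] * s.elem_after_4pm
            + w[2] * s.sped_needs_met
            + w[3] * s.transport_savings)
```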
Check out CMSC 20370/30370: Inclusive Technology: Design For Underserved and Marginalized Communities, and other similar courses. CMSC 20370 references a lot of papers from ACM’s CHI Conference on Human Factors in Computing Systems.