Journal Reviews on Fairness

Dated Oct 4, 2021; last modified on Mon, 11 Oct 2021

WWW ‘21: Proceedings of the Web Conference 2021

User-oriented Fairness in Recommendation

Showed that active users, who account for only a small proportion of all users, enjoy much higher recommendation quality than the majority of inactive users. The authors propose a re-ranking approach that adds constraints over the evaluation metrics.

There’s a subtlety here. Although the active users are a numerical minority, the recommender effectively treats them as the majority because they supply most of the training data.

Maybe recommenders based on collaborative filtering should also divide the user base into similar cohorts, with learning taking place within these cohorts? Hold up, doesn’t that happen by definition? If I watched movie A and someone else watched movies A and B, then the recommender can recommend B to me.

It’d be helpful to actually read the paper and see what sort of constraints were added.
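The paper itself formulates the re-ranking as a 0-1 integer program; purely to sketch the flavor, here is a toy greedy re-ranker, assuming “quality” is the mean predicted score of a user’s top-\(k\) slate (the function name and the `eps` threshold are mine, not the paper’s):

```python
import numpy as np

def fairness_rerank(scores, k, active, eps=0.05, max_steps=1000):
    """Toy stand-in for the paper's 0-1 integer program. Quality is the
    mean predicted score of a user's top-k slate; while the active
    group's average quality exceeds the inactive group's by more than
    eps, do a cheap swap in some active user's slate."""
    n_users, n_items = scores.shape
    slates = np.argsort(-scores, axis=1)[:, :k].tolist()

    def group_quality(mask):
        return float(np.mean([scores[u, slates[u]].mean()
                              for u in range(n_users) if mask[u]]))

    for _ in range(max_steps):
        if group_quality(active) - group_quality(~active) <= eps:
            break
        best = None  # (cost, user, item_out, item_in)
        for u in np.where(active)[0]:
            # swap out the lowest-scored slate item for the best
            # strictly lower-scored item outside the slate
            out = min(slates[u], key=lambda i: scores[u, i])
            pool = [i for i in range(n_items)
                    if i not in slates[u] and scores[u, i] < scores[u, out]]
            if not pool:
                continue
            inn = max(pool, key=lambda i: scores[u, i])
            cost = scores[u, out] - scores[u, inn]  # > 0 by construction
            if best is None or cost < best[0]:
                best = (cost, u, out, inn)
        if best is None:
            break
        _, u, out, inn = best
        slates[u][slates[u].index(out)] = inn
    return slates
```

Note the leveling-down behavior: with this toy objective, the constraint can only be met by lowering the advantaged group’s slate scores, which is one reason the exact constraint formulation in the paper matters.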

Mitigating Gender Bias in Captioning Systems

Captioning datasets, e.g. COCO, inherit the gender bias found in web corpora. The authors split the COCO dataset into train and test sets with different gender-context joint distributions. Models that rely on contextual cues fail more often on the anti-stereotypical test data. The authors propose a model that applies self-guidance on visual attention, encouraging the model to capture the correct gender-related visual evidence.
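As a rough sketch of how a distribution-skewed split could be built (the `gender`/`context` keys and the rarity heuristic are my assumptions, not the paper’s procedure):

```python
import random
from collections import Counter

def skewed_split(examples, test_frac=0.2, seed=0):
    """Toy gender-context skewed split: pairs that are rare overall
    (anti-stereotypical) are routed to the test set, so train and test
    end up with different gender-context joint distributions.
    `examples` are dicts with hypothetical 'gender'/'context' keys."""
    random.seed(seed)
    counts = Counter((ex["gender"], ex["context"]) for ex in examples)
    # rarest pairs first; they fill the test set
    ranked = sorted(examples,
                    key=lambda ex: counts[(ex["gender"], ex["context"])])
    n_test = int(test_frac * len(examples))
    test, train = ranked[:n_test], ranked[n_test:]
    random.shuffle(train)
    return train, test
```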

I don’t understand the specifics of this paper from the abstract. What are “visual attention” and “correct gender visual evidence”?

Redrawing District Boundary to Minimize Spatial Inequality in School Funding

The primary source of school district revenue is public money. The authors found that existing school district boundaries promote financial segregation, with highly-funded school districts surrounded by lesser-funded ones and vice-versa. They propose the Fair Partitioning problem: divide a set of schools into \(k\) districts such that the spatial inequality in district-level funding is minimized. They show that the problem is strongly NP-complete, and provide a reasonably effective greedy algorithm.
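Setting aside the spatial/contiguity constraints that make the real problem hard, a toy greedy for balancing total funding across \(k\) districts is the classic longest-processing-time heuristic (a sketch, not the paper’s algorithm):

```python
import heapq

def greedy_districts(school_funding, k):
    """Toy funding-balancing greedy, ignoring geography: place each
    school, from best funded to least, into the currently poorest
    district. `school_funding` maps school -> funding amount."""
    heap = [(0.0, d, []) for d in range(k)]  # (total funding, id, schools)
    heapq.heapify(heap)
    for school, funding in sorted(school_funding.items(),
                                  key=lambda kv: -kv[1]):
        total, d, members = heapq.heappop(heap)  # poorest district so far
        members.append(school)
        heapq.heappush(heap, (total + funding, d, members))
    return {d: members for _, d, members in heap}
```

For example, `greedy_districts({"A": 9.0, "B": 5.0, "C": 4.0, "D": 1.0}, k=2)` yields districts {A, D} and {B, C}, with totals 10 and 9.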

Understanding User Sensemaking in Machine Learning Fairness Assessment Systems

Considers the tension between de-biasing recommendations, which are quick but may lack nuance, and ‘what-if’-style exploration, which is time-consuming but may lead to deeper understanding and transferable insights. Highlights design requirements and tradeoffs in the design of ML fairness systems.

Neat that ML fairness assessment systems exist in the first place!

Discovering Essential Features for Preserving Prediction Privacy

Aims to discern the subset of features necessary for a target prediction task. Formulates this as a gradient-based perturbation-maximization problem that discovers the subset with respect to the functionality of the provider’s prediction model. The remaining features are suppressed using utility-preserving constant values, which are discovered through a separate gradient-based optimization process. The service provider’s model can be treated as a black box. The framework’s optimizations reduce the upper bound on the mutual information between the actual data and the sifted representations that get sent out.
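A hedged sketch of the overall idea, assuming a differentiable local surrogate of the provider’s model (the `model_grad` callback and `keep` parameter are my own; the paper discovers the suppression constants via a separate optimization rather than using feature means):

```python
import numpy as np

def essential_features(model_grad, X, keep=4):
    """Toy sketch: rank features by gradient-based sensitivity of a
    local surrogate model, keep the `keep` most sensitive ones, and
    suppress the rest with a constant (here the feature mean, where
    the paper learns utility-preserving constants).
    `model_grad(x)` returns d(prediction)/dx for one sample."""
    sensitivity = np.mean([np.abs(model_grad(x)) for x in X], axis=0)
    essential = np.argsort(-sensitivity)[:keep]
    fill = X.mean(axis=0)  # stand-in suppression constants

    def sift(x):
        out = fill.copy()
        out[essential] = x[essential]  # only these leave the device
        return out

    return essential, sift
```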

What is “perturbation” in the context of ML?

Perturbation Theory comprises methods for finding an approximate solution to a problem by starting from the exact solution of a related, simpler problem: \(A = A_0 + \epsilon A_1 + \epsilon^2 A_2 + \dots\), where \(A\) is the full solution, \(A_0\) is the known solution to the exactly solvable initial problem, and \(A_1, A_2, \dots\) are the first-order, second-order, and higher-order corrections.
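A standard worked example: to solve \(x^2 = 1 + \epsilon x\) for small \(\epsilon\), start from the \(\epsilon = 0\) solution \(x_0 = 1\), posit \(x = 1 + \epsilon x_1 + \epsilon^2 x_2 + \dots\), and match powers of \(\epsilon\) to get \(x = 1 + \frac{\epsilon}{2} + \frac{\epsilon^2}{8} + \dots\).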

In deep neural network training, perturbation is used to solve various issues, e.g. perturbing gradients to tackle the vanishing gradient problem; perturbing weights to escape saddle points; perturbing inputs to defend against malicious attacks.
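As a minimal illustration of the gradient-perturbation flavor (the function name and noise scale are mine):

```python
import numpy as np

def noisy_sgd_step(w, grad, lr=0.1, noise_scale=0.01, rng=None):
    """One SGD step with Gaussian noise added to the gradient.
    Noise like this is used, e.g., to help escape saddle points;
    with calibrated noise and clipping, similar updates underpin
    differentially private training."""
    rng = rng or np.random.default_rng(0)
    noisy_grad = grad + noise_scale * rng.standard_normal(grad.shape)
    return w - lr * noisy_grad
```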

Adversarial machine learning features cases of researchers perturbing inputs to fool ML systems, e.g. perturbing the appearance of a stop sign such that an autonomous vehicle classifies it as a merge or speed limit sign.
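The canonical input-perturbation attack is the Fast Gradient Sign Method (Goodfellow et al.); a minimal sketch, assuming the loss gradient with respect to the input is available from a differentiable model:

```python
import numpy as np

def fgsm_example(x, input_grad, eps=0.03):
    """Fast Gradient Sign Method: nudge every input dimension by eps
    in the direction that increases the model's loss. The perturbed
    input often looks unchanged to a human but flips the model's
    prediction. `input_grad` is d(loss)/d(input) evaluated at x."""
    return np.clip(x + eps * np.sign(input_grad), 0.0, 1.0)
```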

The Curse of Dimensionality and Feature Selection are related concepts. However, the paper seems to suggest that they can pick a relevant subset of the features without ever sending them to the server. How can that be?

References

  1. User-Oriented Fairness in Recommendation. Li, Yunqi; Chen, Hanxiong; Fu, Zuohui; Ge, Yingqiang; Zhang, Yongfeng. Proceedings of the Web Conference 2021. https://doi.org/10.1145/3442381.3449866. Apr 19, 2021. ISBN: 9781450383127.
  2. Mitigating Gender Bias in Captioning Systems. Tang, Ruixiang; Du, Mengnan; Li, Yuening; Liu, Zirui; Zou, Na; Hu, Xia. Proceedings of the Web Conference 2021. https://doi.org/10.1145/3442381.3449950. ISBN: 9781450383127.
  3. Fair Partitioning of Public Resources: Redrawing District Boundary to Minimize Spatial Inequality in School Funding. Mota, Nuno; Mohammadi, Negar; Dey, Palash; Gummadi, Krishna P.; Chakraborty, Abhijnan. Proceedings of the Web Conference 2021. https://doi.org/10.1145/3442381.3450041. ISBN: 9781450383127.
  4. Understanding User Sensemaking in Machine Learning Fairness Assessment Systems. Gu, Ziwei; Yan, Jing Nathan; Rzeszotarski, Jeffrey M. Proceedings of the Web Conference 2021. https://doi.org/10.1145/3442381.3450092. ISBN: 9781450383127.
  5. Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy. Mireshghallah, Fatemehsadat; Taram, Mohammadkazem; Jalali, Ali; Elthakeb, Ahmed Taha Taha; Tullsen, Dean; Esmaeilzadeh, Hadi. Proceedings of the Web Conference 2021. https://doi.org/10.1145/3442381.3449965. ISBN: 9781450383127.
  6. Perturbation Theory. https://en.wikipedia.org/wiki/Perturbation_theory. Accessed Oct 11, 2021.
  7. Perturbation Theory in Deep Neural Network (DNN) Training. Prem Prakash. https://towardsdatascience.com/perturbation-theory-in-deep-neural-network-dnn-training-adb4c20cab1b. Mar 20, 2020. Accessed Oct 11, 2021.
  8. Adversarial Machine Learning. https://en.wikipedia.org/wiki/Adversarial_machine_learning. Accessed Oct 11, 2021.