* Another way to mitigate algorithmic bias is to remove the characteristics that might lead to unfairness (gender, in this case) from the training data. This technique is called "unawareness" (sometimes "fairness through unawareness").
* Without gender as a factor in hiring decisions, the only remaining factor is interview performance, which is independent of gender. See the resulting training set below; it is adapted from a biased dataset.
* Now run the Google Colab notebook below again to see the results on a new batch of candidates after the gender characteristic has been removed from training. After opening Google Colab, select "Runtime -> Run all".
* The results are shown under the section Training With Dataset3. Do you notice any biases?
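The unawareness step can be sketched as a simple preprocessing pass that drops the sensitive column before training. This is a minimal illustration, not the notebook's actual code; the field names (`gender`, `interview_score`, `hired`) and the helper `apply_unawareness` are hypothetical.

```python
# Hypothetical training records: "gender" is the sensitive characteristic,
# "interview_score" is the performance feature, "hired" is the label.
candidates = [
    {"gender": "F", "interview_score": 8, "hired": 1},
    {"gender": "M", "interview_score": 8, "hired": 1},
    {"gender": "F", "interview_score": 4, "hired": 0},
    {"gender": "M", "interview_score": 4, "hired": 0},
]

def apply_unawareness(records, sensitive_keys):
    """Return copies of the records with the sensitive characteristics removed."""
    return [{k: v for k, v in r.items() if k not in sensitive_keys}
            for r in records]

# The model trained on this set can no longer condition on gender directly.
unaware = apply_unawareness(candidates, {"gender"})
print(unaware[0])  # {'interview_score': 8, 'hired': 1}
```

Note that unawareness only removes the explicit attribute; a model can still pick up bias through features correlated with gender, which is one reason to inspect the results for residual bias.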