An interesting angle I’m sure someone is studying properly: when we feed these tabula-rasa ML systems a bunch of data about the world as it is and they come back puking out patterns of discrimination, can that serve as •evidence of bias•, not just in AI but in •society itself•?
If training an ML system on a company’s past hiring decisions makes it conclude that baseball > softball for an office job (two hobbies that proxy for gender), isn’t that compelling evidence of hiring discrimination? A sketch of the audit is below.
6/
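A minimal sketch of what that audit could look like, with entirely invented data, labels, and feature names: train a simple classifier on historical hire/reject decisions, then inspect the weight it assigns a gender-coded hobby word.

```python
# Hypothetical sketch: audit a resume-screening model for proxy bias.
# All data, features, and outcomes here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "past hiring decisions": 1 = hired, 0 = rejected.
# Columns: [mentions_baseball, mentions_softball, years_experience]
X = np.array([
    [1, 0, 5], [1, 0, 3], [0, 1, 5], [0, 1, 6],
    [1, 0, 2], [0, 1, 4], [1, 0, 4], [0, 1, 7],
])
# Biased history: baseball resumes got hired despite similar experience.
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# If a hobby word that proxies for gender carries a large weight,
# the model has surfaced a pattern baked into the past decisions.
for name, coef in zip(["baseball", "softball", "experience"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

The point of the sketch: the model starts with no notion of gender at all, so any weight it puts on the proxy word had to come from the historical labels themselves.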