the AI was smart enough to synthesize a proxy for race to implement racist decisions.
That's because, in the training data, race correlated strongly with the variable the model was trying to predict, a correlation that existed because of underlying racism. Once "blinded" to race, the model discovered that postcode (here acting as a proxy for race) correlated just as well with the decisions of the system it was trying to replace.
And it didn't *tell* anyone it was doing this. It just derived it itself.
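
To make the mechanism concrete, here's a minimal sketch in Python with synthetic data. The protected attribute is withheld from the model, but because postcode correlates with it and the historical labels encode the bias, the fitted model ends up leaning on postcode instead. The variable names and the data-generating process are illustrative assumptions, not the actual system from the anecdote.

```python
# Minimal sketch of the proxy effect: all names and numbers are
# illustrative assumptions, not the real system described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (e.g. race), never shown to the model.
group = rng.integers(0, 2, size=n)

# Postcode correlates strongly with group (residential segregation):
# 90% of the time it simply mirrors the group label.
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)

# A genuinely relevant feature, independent of group.
income = rng.normal(loc=50, scale=10, size=n)

# Historical (biased) decisions: income helps, but group-1 applicants
# were systematically penalised. These labels encode the bias.
logit = 0.1 * (income - 50) - 2.0 * group
approved = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train a "race-blind" model: group is excluded from the features.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, approved)

print("coefficient on income:  ", round(model.coef_[0][0], 2))
print("coefficient on postcode:", round(model.coef_[0][1], 2))
# The postcode coefficient comes out strongly negative: the model has
# rediscovered the group penalty via its proxy, without ever seeing group.
```

Nothing here was told to look at race; the penalty re-emerges purely because the proxy variable and the biased labels are both in the data.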