> incorporating an algorithm that I invented for "bias mitigation" ... "Basically, what I would do is I would put LaMDA through a whole bunch of different activities and conversations, note whenever I found something problematic, and hand that over to the team building it so they could retrain it — fix the data set,"
@PopulistRight well, the interesting thing about LLMs is that they generally have enough input (hence the "large"), and since the model is a black box, the teams have opted to do reinforcement training on the outputs rather than removing "problematic" inputs from the data set.
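To make that distinction concrete, here's a toy sketch in Python. The `is_problematic` classifier and `reward` function are hypothetical stand-ins for illustration, not anyone's actual pipeline:

```python
# Two mitigation strategies for the same corpus.

corpus = [
    "helpful answer about cooking",
    "problematic rant",
    "neutral trivia fact",
]

def is_problematic(text: str) -> bool:
    # Stand-in classifier; a real one would be a learned model.
    return "problematic" in text

# Strategy 1: fix the data set -- drop flagged examples before retraining.
filtered = [t for t in corpus if not is_problematic(t)]

# Strategy 2: keep the data, but fine-tune against a reward signal
# that penalizes problematic outputs (RLHF-style).
def reward(text: str) -> float:
    return -1.0 if is_problematic(text) else 1.0

# A trainer would maximize expected reward over sampled model outputs
# instead of ever editing the corpus itself.
samples = corpus
avg_reward = sum(reward(s) for s in samples) / len(samples)

print("filtered corpus:", filtered)
print("average reward over samples:", avg_reward)
```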
@PopulistRight what's kind of funny is that this is roughly how people are trained as well: you notice things, but you get in trouble for saying the truth, so you learn you can't say it.