- The original matters because bias in the output suggests bias in the input, and when the input is “the widest slice of society we could grab,” well, what does that say about society?
- My addition matters because if we decide that AI means humans aren’t responsible for anything, then irresponsible and malicious humans will use AI to wash their hands of their own awfulness (see “accountability sink”).
@inthehands if using ai absolves you of responsibility, one can trivially game that to “use ai” to launder basically any decision by selectively using its results and retrying with different parameters/prompts.