@clacke I think that if I had been talking to someone I expected to have less foundational awareness than in this case, I would probably have expressed things in more detail, including more of the background: the algorithms just strive to present the output most similar to what their trainers gave high scores for previously, and that has so far (it seems to me) caused a lot of untruthful output, because the training apparently did not include veracity in the scoring, or at least not as a large part of it.
The algorithms just do what their creators told them to do, which makes the "human in the machine" an extraction from humans, with human error and subjectivity inherited.
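To make that concrete, here is a minimal sketch of preference-scored selection, with a hypothetical reward function and made-up candidate outputs (not any real training code): the objective rewards whatever the trainers scored highly, and nothing in it checks truth.

    def reward_model(text: str) -> float:
        # Stand-in for a model trained on human preference scores;
        # this toy version just favours confident-sounding text.
        return text.count("definitely") + len(text) / 100

    candidates = [
        "I am not sure; the sources disagree.",            # honest, hedged
        "The answer is definitely X, as everyone knows.",  # confident, maybe wrong
    ]

    # Selection maximizes the learned preference score only;
    # veracity never enters the objective.
    print(max(candidates, key=reward_model))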
But it's of course impossible to know how I would actually have expressed something in a hypothetical past situation. ;-P
@undergrowthfeed