6 (!) years ago, @timnitGebru put forward a profound idea for AI research: document your training data. To say she was treated like shit would be putting it lightly. If she had been respected like her male peers were, imagine what a different path AI could have taken.
Okay, #60Minutes is saying that Google's Bard model "spoke in a foreign language it was never trained to know." I looked into what this could mean, and it appears to be a lie. Here's the evidence; curious what others have found.
Study showing that ChatGPT can influence our moral judgments, and we underestimate how influenced we are. Highlights the need for education to help people better understand what these systems are doing. https://www.nature.com/articles/s41598-023-31341-0