I think the most compelling aspect of @anildash's post 'Today's AI is unreasonable' is the way it pairs its critique of gen-AI's unpredictable, unreproducible answers with a critique of the abusive and controlling mindset of the tech tycoons behind it https://www.anildash.com//2023/06/08/ai-is-unreasonable/
@danmcquillan @anildash What I wonder about the critique of reproducibility is this: if the heat or creativity were turned right down so that an LLM's output was entirely stable, how much of the illusion of intelligence would evaporate along with the irreproducibility? Is the randomness not just required for the dance of variety but somehow essential to triggering the apparition of a ghost in the machine?
@danmcquillan @anildash That's a fair point. I suppose my worry is, if the perceived bar to reasonableness is set as low as reproducible output, then all of the harms would still remain.
@danmcquillan @anildash I suppose the "heat" parameter that determines "creativity" -- the randomness of which word/token is chosen from the ranked list of next most likely candidates -- could be set so that the one at the top is always selected? That would still be Bullshit as a Service, I think we all agree? But is it any more reasonable, just because it's the same old bullshit?
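(For anyone curious, the "heat" knob is usually called "temperature" in LLM APIs. A toy sketch of what it does, with a made-up logits dict standing in for a real model's next-token scores: temperature scales the scores before sampling, and at zero it collapses into greedy decoding, i.e. always picking the top-ranked token.)

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token from a {token: score} dict.

    temperature > 0 scales the scores before softmax sampling;
    temperature == 0 means greedy decoding: always take the
    single most likely token, so the output is fully stable.
    """
    if temperature == 0.0:
        # Greedy: the reproducible, "heat turned right down" case.
        return max(logits, key=logits.get)
    # Softmax over temperature-scaled scores (subtract max for stability).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights)[0]

# Hypothetical next-token scores after the prompt "The answer is ..."
logits = {"the": 3.2, "a": 2.9, "bullshit": 1.1}
print(sample_next_token(logits, temperature=0.0))  # always "the"
```

At temperature 1.0 the same call returns any of the three tokens at random, weighted by their scores; higher temperatures flatten the weights further.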
To me LLMs' unreasonableness comes not from any irreproducibility but from their reproducibility: the way they will repeat ad infinitum the statistically prevalent attitudes and biases locked into them via the spread of opinion within their pre-training corpus. Biases that, as I understand it, can never be fully overcome or patched with later training or guardrails, no matter how painstaking.
@atomless @anildash Well yes, I don't disagree. These systems propagate a normativity that is borderline eugenicist. But getting this across also seems to me to require a fusion of analysis and affect, which is what I liked about Anil's post.