@nazokiyoubinbou @futurebird …yeah, uh, you may not have used any government web sites recently, but they’re already struggling to keep something only slightly more advanced than static HTML, basically ’90s web CMS tech, online. Maybe don’t expect them to run a terrible Rails app as part of a distributed system that is trying to make some kind of liveness guarantees.
@nonlinear @futurebird @dahukanna @PavelASamsonov I’m currently reading a book about how the brain works, and while they do find that simulations of avalanches in piles of sand can help us understand avalanches in networks of neurons, the facts about brain avalanches which are not captured in the sand avalanche models are obviously just as important as the facts which are captured.
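(For anyone who hasn’t seen one of these “pile of sand” simulations: the canonical version is the Bak–Tang–Wiesenfeld sandpile model. A minimal NumPy sketch, with arbitrary illustrative choices of grid size, toppling threshold, and number of grains, looks something like this; the heavy-tailed avalanche-size statistics it produces are the property people map onto neuronal avalanches.)

```python
import numpy as np

def relax(grid, threshold=4):
    """Relax a Bak-Tang-Wiesenfeld sandpile until no site holds
    `threshold` or more grains. Returns the relaxed grid and the
    avalanche size (total number of topplings)."""
    grid = grid.copy()
    size = 0
    while True:
        unstable = grid >= threshold
        n_toppling = int(unstable.sum())
        if n_toppling == 0:
            return grid, size
        size += n_toppling
        grid -= threshold * unstable
        # each toppled site sends one grain to each of its 4 neighbours;
        # grains pushed past the boundary fall off the edge and are lost
        grid[1:, :] += unstable[:-1, :]
        grid[:-1, :] += unstable[1:, :]
        grid[:, 1:] += unstable[:, :-1]
        grid[:, :-1] += unstable[:, 1:]

rng = np.random.default_rng(0)
grid = np.zeros((50, 50), dtype=int)
avalanche_sizes = []
for _ in range(20000):
    r, c = rng.integers(0, 50, size=2)
    grid[r, c] += 1                 # drop one grain at a random site
    grid, size = relax(grid)
    avalanche_sizes.append(size)
# avalanche_sizes is heavy-tailed (roughly a power law), the signature of
# self-organized criticality that gets compared to neuronal avalanches
```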
@nonlinear @futurebird @dahukanna @PavelASamsonov The idea of emergence is that certain levels of abstraction have more predictive power, in the information-theory sense, than others, and lower levels are not always better. But it doesn’t follow from this that at some level of abstraction in these systems all models are perfectly substitutable.
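(To make “more predictive power in the information-theory sense” concrete, here is one standard formalization; I’m assuming the framework meant is something like Erik Hoel’s effective information, which may not be exactly what the thread has in mind.)

```latex
% Effective information of a model S: the mutual information between the next
% state and the current state, when the current state is set by intervention
% to the maximum-entropy (uniform) distribution over its possible states:
\[
  \mathrm{EI}(S) \;=\; I\bigl(X_{t+1};\, X_t\bigr)\Big|_{\mathrm{do}(X_t \sim \mathrm{uniform})}.
\]
% "Causal emergence": a coarse-grained (macro) description can score higher
% than the micro description it summarizes,
\[
  \mathrm{EI}(S_{\mathrm{macro}}) \;>\; \mathrm{EI}(S_{\mathrm{micro}}),
\]
% which is a precise sense in which a higher level of abstraction can have more
% predictive power, without making every level interchangeable.
```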
@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh Homomorphic implies (AIUI) that every operation on one half of the homomorphism can be mapped 1:1 to an operation on the other half, and my point here is that we already know that, at least in its strongest form, that argument is not true.
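(A standard textbook example, not from the thread, just to pin down what the strong claim would require.)

```latex
% A homomorphism \varphi : (A,\cdot) \to (B,\star) preserves the operation exactly:
\[
  \varphi(a \cdot b) \;=\; \varphi(a) \star \varphi(b) \qquad \text{for all } a, b \in A.
\]
% Classic instance: \exp : (\mathbb{R},+) \to (\mathbb{R}_{>0},\times),
% since e^{a+b} = e^{a} e^{b}.
% The sand-to-brain correspondence is weaker than this: some operations on one
% side have no counterpart on the other.
```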
@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh (This is the book I’m reading, and it goes into quite some detail about how the symmetries break down. BUT, causality and modeling are of great interest to me and I now know what I’m reading next, thank you :)
@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh In the much weaker and non-homomorphic sense that we can use the models on one side to make predictions about the models on the other side and then test them against the real world? Sure, absolutely. That’s just science! But we really, really can’t assume that the real world will validate our extrapolations.
@maddiefuzz @clacke @drwho At that time, the joke the professor told in my AI class (taken at around the same time) was that as soon as something turned out to be useful people stopped calling it "AI", and thus AI was in a perpetual winter. Not sure if that will happen here with LLMs or not.
Unreflective people really seem to have a "the AI told me it, so it must be true" attitude, when in fact the exact opposite must be our prior: "the AI told me it, so it is presumptively false until verified through other sources." These systems have no ground-truthing mechanism; we _have_ to provide it ourselves.
This is a brilliant way to put this, and, yes, they are. Everybody _thinks_ they want conversational interfaces, but nobody _actually wants_ or uses conversational interfaces.