@HistoPol@simon@annaleen@BBCWorld existing AIs are already doing billions of dollars in harm to our society just by amplifying ongoing harms, without getting anywhere near the capabilities that the hype says are dangerous. The real multibillion-dollar AI question is: will we choose to stop using it to hurt ourselves?
@HistoPol@simon@annaleen@BBCWorld AI is being put to use where we refuse to fund human work, for things like filtering applications for all kinds of things - jobs, schools, scholarships, asylum, etc - and these AIs are trained on the same human biases that made Microsoft’s Tay turn antisemitic and make ChatGPT say doctors and lawyers can’t be women. We’re doing this despite knowing the harm it’s amplifying, and buying into the hype helps increase the harm.
@HistoPol@simon@annaleen@BBCWorld if you mean the kind of self-development a person can do, we first need to develop a way to make software that can reason and remember - two things none of our existing ML methods can include in their creations. And even when/if we do develop such a method, we don’t know that it would be capable of developing any faster than a human baby
@HistoPol@simon@annaleen@BBCWorld no it isn’t. The current problems with AI come from it being far less capable than the hype suggests, but being used carelessly despite its limitations. Like law enforcement using facial recognition (which is made with ML, tho not called AI) even though it’s unreliable, and especially unreliable with non-white faces. We already overprosecute non-white people, this use of AI adds to that
@HistoPol@voron@simon@annaleen@BBCWorld any learning system that creates itself with feedback from its body will come out too different for low-level copying from one to another to work; communication will have to be more abstract, and interpreted - more like how humans do it than like software for standard computers. Even more so if it optimizes itself across CPU differences. Remote-control armies would work far better for the foreseeable future
@HistoPol@voron@simon@annaleen@BBCWorld one simple example of a difference: we have IoT systems where thousands of devices are deployed for a specific time period (e.g. to study volcano vibrations for 5 years), and some of them tune their activity to manage battery life across the small differences allowed by the manufacturing tolerance. A larger system would have that kind of difference across every component - every motor, and every CPU
@HistoPol@voron@simon@annaleen@BBCWorld militaries have owned the best in secure communication for thousands of years, that hasn’t changed recently, and securing military communication is why computers happened when they did. The issue for AI swarming is not security, it’s variety - really independent learning machines can’t just copy each other’s brains without adaptation, because both their minds and their bodies will have zillions of differences
@HistoPol@simon@annaleen@BBCWorld I’m not sure why you’re linking back to an earlier entry in this same thread, but while embodiment is a worthy research topic it’s not clear that it’s key. I haven’t seen anything that uses ML and has any capacity to handle interruptions, and without that, trying to control a body is not going to go well
@HistoPol@simon@annaleen@BBCWorld a static model built by a human-directed “machine learning” process doesn’t grow other than as directed, and is certainly not self-organizing
Machine learning begins with taking statistical regression and simplifying it so you can build a model using far more data than is practical with full regression. Regression is a generalization of curve-fitting, finding a mathematical function that fits some given data as well as possible
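To make the curve-fitting idea concrete, here’s a minimal sketch of the simplest case - fitting a straight line to data by least squares. The data points are made up for illustration; this is ordinary linear regression, the special case that ML methods simplify and scale up.

```python
# Simple linear regression (least-squares line fit) in pure Python:
# the curve-fitting special case that regression generalizes.

def fit_line(xs, ys):
    """Return slope m and intercept b minimizing the squared error of y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution for a line
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    m = sxy / sxx
    b = mean_y - m * mean_x
    return m, b

# Made-up noisy points lying roughly along y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
m, b = fit_line(xs, ys)
print(f"fitted line: y = {m:.2f}x + {b:.2f}")
```

With full regression you’d solve this exactly for every parameter; ML training methods trade that exactness for the ability to fit models with billions of parameters on huge datasets.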
@HistoPol@simon@annaleen that 10% is a survey result; the survey provides no information about how any respondents chose their responses, so it’s not possible to assess the methodology they used.
I want a “working definition” we could use to decide that something isn’t a GI, or is a GI. Maybe first it has to be able to do more than one thing - LLMs can’t do anything other than words, so LLMs are not GIs. But that’s very incomplete
@HistoPol@reuters@simon@annaleen All we really know about AGI is that we don’t know how to create one, and our inability to agree on a definition illustrates how far we are from figuring that out. That 10% figure isn’t a measure of what an AGI would do but a measure of what some people who don’t know how to create an AGI think one might do. It’s about as credible as people in the 1700s speculating about how aircraft might work. 🧵
@HistoPol@reuters@simon@annaleen in this thread you’ve said both “generative artificial intelligence” and “general artificial intelligence”; I would avoid the latter and use exclusively “artificial general intelligence” 🧵
@inthehands with respect to scholarly works in particular, we also need to reform the peer review process. I don’t know what the right way to do peer review is, but the way we do it now is bad in several ways, including accepting too much pollution.
@inthehands@alan I’ve been trying to make a budget for supporting real journalism and a list of organizations to support; so far I have 404 Media, the Texas Observer, and Zeteo (tho it looks like the only way to support Zeteo is through Substack), but nothing with much focus on government and elections. What other organizations should I be considering?
@j12t@damon choosing an instance specific to one of the many subjects that interest me is contrary to how I relate to those communities. I’m not here for a community bulletin board that allows crossposting, I think we’d need a different interaction paradigm for me to want community features, and it wouldn’t involve tying my whole fediverse identity to a single community. What’s supposed to happen if I joined my hometown library’s instance and then moved away?
@sjuvonen@liferstate@inthehands I think it will motivate increased voter suppression. I recall some concern about potential violence at the polls in 2020, which didn’t materialize as actual violence; I suspect the risk will be higher this time
@inthehands that seems likely, but why wouldn’t they unite to supports Biden himself? What happened behind the scenes? Did Biden intentionally not bring his best to the debate to set this in motion?