Notices by ShadSterling (shadsterling@mastodon.social)
ShadSterling (shadsterling@mastodon.social)'s status on Wednesday, 13-Nov-2024 12:58:09 JST ShadSterling @inthehands @TomWellborn I dunno, passing off a death trap on someone else isn’t a great way to handle that
ShadSterling (shadsterling@mastodon.social)'s status on Monday, 04-Nov-2024 02:21:12 JST ShadSterling @mekkaokereke @Okanogen @luciano I remember that statement from news reports at the time, but didn’t see the connection to insurrectionists until now. Are there news outlets that did make that clear? I’d sure like to follow them
ShadSterling (shadsterling@mastodon.social)'s status on Friday, 01-Nov-2024 00:42:07 JST ShadSterling @inthehands @atrupar @LizDye saw a news report the other day that talked about how white women are angry about overturning Roe and have made videos about cancelling out your husband’s Trump vote a thing on TikTok, and said early voting so far is 53% women. I hope that’s a good sign, but I don’t know of a way to tell. I don’t know who put it this way first, but with the stakes being what they are, unclear signs like that have me “nauseously optimistic”
ShadSterling (shadsterling@mastodon.social)'s status on Friday, 18-Oct-2024 14:41:44 JST ShadSterling @pixx @eniko @angelastella @gabrielesvelto @cederbs why do people incinerate plastic? Is that better than burying it in a landfill?
ShadSterling (shadsterling@mastodon.social)'s status on Friday, 11-Oct-2024 03:05:48 JST ShadSterling @inthehands I remember Amelia Bedelia being surprised that some people would “dust” their furniture - she would un-dust her furniture!
ShadSterling (shadsterling@mastodon.social)'s status on Wednesday, 09-Oct-2024 13:59:33 JST ShadSterling @inthehands wasn’t this widely believed to be the case at the time?
ShadSterling (shadsterling@mastodon.social)'s status on Wednesday, 09-Oct-2024 02:48:53 JST ShadSterling @carlysagan I went back to college inspired by the possibility of dramatically increased computing power to improve physical simulations, ultimately completing a triple-major in CS, physics, and statistics. Since then I haven’t been lucky enough to do that kind of work, but I know a fair bit about doing physics on a computer, and about the statistical underpinnings of ML. From my perspective, this prize is a betrayal of everything that made the Nobel prize worth aspiring to.
https://www.reuters.com/science/hopfield-hinton-win-2024-nobel-prize-physics-2024-10-08/
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 08-Oct-2024 17:49:27 JST ShadSterling @futurebird eventually they’ll figure out that Computer Science has a split personality, one side almost the same as Pure Math, the other side they’ll say is like Applied Math but very disorganized because they won’t admit how much that side is like Economics but focused on building Rube Goldberg machines
ShadSterling (shadsterling@mastodon.social)'s status on Monday, 23-Sep-2024 11:59:53 JST ShadSterling @inthehands there’s a weird variation of this in the online product page Q&A systems, where someone can ask about some obscure detail and the site will email the question to a bunch of previous buyers and a bunch of answers will come in from them basically saying they don’t know
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:50:10 JST ShadSterling @HistoPol @simon @annaleen @BBCWorld existing AIs are already doing billions of dollars in harm to our society just by amplifying ongoing harms, without getting anywhere near the capabilities the hype says are dangerous. The real multibillion-dollar AI question is whether we’ll choose to stop using it to hurt ourselves
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:49:58 JST ShadSterling @HistoPol @simon @annaleen @BBCWorld AI is being put to use where we refuse to fund human work, for things like filtering applications for all kinds of things - jobs, schools, scholarships, asylum, etc - and these AIs are trained on the same human biases that made Microsoft’s Tay turn antisemitic, and make ChatGPT say doctors and lawyers can’t be women. We’re doing this despite knowing the harm it’s amplifying, and buying into the hype helps increase the harm
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:48:41 JST ShadSterling @HistoPol @simon @annaleen @BBCWorld if you mean the kind of self-development a person can do, we first need to develop a way to make software that can reason and remember - two things none of our existing ML methods can include in their creations. And even when/if we do develop such a method, we don’t know that it would be capable of developing any faster than a human baby
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:48:40 JST ShadSterling @HistoPol @simon @annaleen @BBCWorld no it isn’t. The current problems with AI come from it being far less capable than the hype suggests, but being used carelessly despite its limitations. Like law enforcement using facial recognition (which is made with ML, tho not called AI) even though it’s unreliable, and especially unreliable with non-white faces. We already overprosecute non-white people, this use of AI adds to that
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST ShadSterling @HistoPol @voron @simon @annaleen @BBCWorld any learning system that creates itself with feedback from its body will come out too different to have any low-level copying from one to another work, communication will have to be more abstract, and interpreted - more like how humans do it than software for standard computers. Even more so if it optimizes itself across CPU differences. Remote-control armies would work far better for the foreseeable future
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST ShadSterling @HistoPol @voron @simon @annaleen @BBCWorld one simple example of a difference: we have IoT systems where thousands of devices are deployed for a specific time period (e.g. to study volcano vibrations for 5 years), and some of them tune their activity to manage battery life across the small differences allowed by the manufacturing tolerance. A larger system would have that kind of difference across every component - every motor, and every CPU
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST ShadSterling @HistoPol @voron @simon @annaleen @BBCWorld militaries have owned the best in secure communication for thousands of years, that hasn’t changed recently, and securing military communication is why computers happened when they did. The issue for AI swarming is not security, it’s variety - really independent learning machines can’t just copy each other’s brains without adaptation, because both their minds and their bodies will have zillions of differences
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST ShadSterling @HistoPol @simon @annaleen @BBCWorld I’m not sure why you’re linking back to an earlier entry in this same thread, but while it’s a worthy research topic it’s not clear that embodiment is key. I haven’t seen anything that uses ML and has any capacity to handle interruptions, and without being able to handle interruptions trying to control a body is not going to go well
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:35:10 JST ShadSterling @HistoPol @simon @annaleen @BBCWorld a static model built by a human-directed “machine learning” process doesn’t grow other than as directed, and is certainly not self-organizing
Machine learning begins with taking statistical regression and simplifying it so you can build a model using far more data than is practical with full regression. Regression is a generalization of curve-fitting, finding a mathematical function that fits some given data as well as possible
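To make the regression-as-curve-fitting point concrete, here is a minimal sketch (my own illustration, not tied to any particular ML library): ordinary least squares fitting a line y = a·x + b to data points, using the closed-form normal equations.

```python
# Regression as curve-fitting: ordinary least squares for a line
# y = a*x + b, computed in closed form from the normal equations.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x
    # Intercept makes the fitted line pass through the mean point
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1 recover the coefficients exactly
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # a == 2.0, b == 1.0
```

ML methods trade the exactness of this closed-form solution for iterative approximations that scale to far more data and far more parameters.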
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:34:48 JST ShadSterling @HistoPol @simon @annaleen only 10%? That’s so much better than humanity, we should put them in charge right away!
But more importantly, citation needed.
Also needed: a working definition of general artificial intelligence
ShadSterling (shadsterling@mastodon.social)'s status on Tuesday, 17-Sep-2024 20:34:47 JST ShadSterling @HistoPol @simon @annaleen that 10% is a survey result; the survey provides no information about how any respondents chose their responses, so it’s not possible to assess the methodology they used.
I want a “working definition” we could use to decide that something isn’t a GI, or is a GI. Maybe first it has to be able to do more than one thing - LLMs can’t do anything other than words, so LLMs are not GIs. But that’s very incomplete