"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
Thanks for this additional piece of information, Simon.
It reminded me that I had wanted to add a word in my toot: indelibly.
As any #SciFi aficionado will tell you: there should be a built-in self-destruct mechanism that triggers when these Laws are tampered with, or when the #AI is copied or moved to another system.
Another classic movie comes to mind in this respect: #WarGames...
@HistoPol @annaleen the original ChatGPT turned out to be a lot less prone to wild vengeful outbursts than whatever model it was that they plugged into Bing - it's a pretty interesting demo of how well the safety features of ChatGPT (which Bing seems not to have) have held up
Just as a teaser: unquestionably, most of the world's endangered species could be rescued if #HomoSapiens were no longer at the top of the #FoodChain... With no Zeroth Law, a #Bing-empowered, freed #ChatGPT could quickly arrive at this conclusion...
"...person in the amount of general knowledge it has and it *eclipses* them by a long way. In terms of reasoning, it's not as good, but it does *already do* simple reasoning," [Dr #Hinton] said." "And given the rate of progress, we expect things to get better quite *fast*. So we need to *worry* about that."
WHAT IS THE DIFFERENCE BETWEEN ROBERT OPPENHEIMER AND GEOFFREY HINTON?
Hm, that is a valid point. On the other hand, I don't know how to build a car, or even how it functions in any detail. I don't need to know how to build one; I can still drive it.
The vast majority of educated people rightly ask why #RobertOppenheimer didn't stop the #ManhattanProject. He must have known what was coming at some point before it was too late.
#GeoffreyHinton: "A man widely seen as the godfather of #ArtificialIntelligence (#AI) has quit his job, warning about the growing dangers from developments in the field." "...in a statement to the #NewYorkTimes, saying he now *regretted* his work."
"He told the #BBC some of the dangers of #AI #chatbots were 'quite scary'."
@HistoPol @reuters @simon @annaleen in this thread you’ve said both “generative artificial intelligence” and “general artificial intelligence”; I would avoid the latter and use exclusively “artificial general intelligence” 🧵
@HistoPol @reuters @simon @annaleen All we really know about AGI is that we don’t know how to create one, and our inability to agree on a definition illustrates how far we are from figuring that out. That 10% figure isn’t a measure of what an AGI would do but a measure of what some people who don’t know how to create an AGI think one might do. It’s about as credible as people in the 1700s speculating about how aircraft might work. 🧵
...in disagreement about the definition and the terminology.
E.g. in this @reuters article, they state that #LLMs are a form of #GenerativeArtificialIntelligence (#GAI), while also stating that "Like other forms of artificial intelligence, generative AI *learns* how to take actions from past data."
...they use advanced prediction/statistical models and might show some #nascent (or just inexplicable?) form of "#intelligence," but that’s about it.
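To make "prediction/statistical model" a bit more concrete, here is a deliberately toy sketch: a bigram model that predicts the next word from nothing but counted co-occurrence statistics. The corpus and code are illustrative assumptions, not how any production #LLM works; real models use neural networks, but the underlying task (next-token prediction) is the same.

```python
# Toy "statistical prediction model": predict the next word purely
# from counted word-pair statistics. Illustrative assumption only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word, if any was seen
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" followed "the" most often)
```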
As I have posted elsewhere this week, most recent research seems to indicate that #SentientBeings/#intelligences need a corporeal form (#embodiment) to really comprehend language.
This is why I find the combination of #robotics and #GAI so dangerous, as I'm quite...
If we look at the international situation today: #ClimateCrisis, wars, one small part of humanity living relatively well in a #PostColonial world order, it would take any #GAI I've read about in #SciFi but a split second to determine the root of the problem: #humanity. And then, no...
In fact, I don't even think human programmers will be smart enough to prevent this. Even today, they don't understand all of the code, and the machines are already writing thousands of lines of code every day.
TBH, I think, if this were a movie, I'd stop watching it, as the ending is just a dead giveaway.
@HistoPol @simon @annaleen that 10% is a survey result; the survey provides no information about how any respondents chose their responses, so it’s not possible to assess the methodology they used.
I want a “working definition” we could use to decide whether something is or isn’t a GI. Maybe first it has to be able to do more than one thing - LLMs can’t do anything other than words, so LLMs are not GIs. But that’s very incomplete
@ShadSterling, you might just contact the authors, Zach Stein-Perlman, Benjamin Weinstein-Raun, and Katja Grace, about the methodology they used for evaluating the questionnaire. They do have a feedback box on the linked page.
Regarding the definition,
a) I am not a computer scientist and therefore, for my ends, do not strive to surpass the level of the science journalists at #TIME and #TheEconomist.
"...for example, some bad actor like [Russian President Vladimir] #Putin decided to give robots the ability to create their own sub-goals."
"...digital systems...can *learn* separately but share their knowledge *instantly*. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it..."
"#Yoshua#Bengio, another so-called godfather of #AI,... wrote that it..."
...check the prerequisites for #autopoietic systems (as developed by #Varela), we might reach the conclusion that #GAI is already quite close to #Autopoiesis. (Note: The late #Niklas #Luhmann is one of the most difficult authors to read and is chiefly available in #German; I haven't studied him in a long time, so I cannot apply his whole concept to #AI, of which I also do not know enough.)
@HistoPol @simon @annaleen @BBCWorld a static model built by a human-directed “machine learning” process doesn’t grow other than as directed, and is certainly not self-organizing
Machine learning begins by taking statistical regression and simplifying it so you can build a model using far more data than is practical with full regression. Regression is a generalization of curve-fitting: finding a mathematical function that fits some given data as well as possible.
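As a toy illustration of that curve-fitting idea, here is a minimal sketch with synthetic data made up for the example: ordinary least squares recovering the straight line the noisy points were generated from.

```python
# Minimal curve-fitting sketch: fit a line y = a*x + b to noisy data
# with ordinary least squares. Synthetic data, illustrative only; ML
# methods scale this idea up with cheaper, iterative approximations.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic data: points scattered around the line y = 2x + 1
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.shape)

# Design matrix [x, 1] so least squares solves for slope and intercept
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"fitted: y = {slope:.2f}x + {intercept:.2f}")  # close to y = 2x + 1
```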
Yes, that was the beginning. And the more I read, the more I am convinced that we will not have to wait for the next decade to see #autopoietic systems capable of self-reference and self-development.
My gut feeling, from what I have read about natural-language learning (which I have also studied) and about how infants learn to "grasp reality", is that this will happen very swiftly once we let even just a couple of #robots, connected...
@HistoPol @simon @annaleen @BBCWorld no it isn’t. The current problems with AI come from it being far less capable than the hype suggests, but being used carelessly despite its limitations. Like law enforcement using facial recognition (which is made with ML, tho not called AI) even though it’s unreliable, and especially unreliable with non-white faces. We already overprosecute non-white people; this use of AI adds to that
@HistoPol @simon @annaleen @BBCWorld if you mean the kind of self-development a person can do, we first need to develop a way to make software that can reason and remember; two things none of our existing ML methods can include in their creations. And even when/if we do develop such a method, we don’t know that it would be capable of developing any faster than a human baby
@HistoPol @simon @annaleen @BBCWorld AI is being put to use where we refuse to fund human work, for things like filtering applications for all kinds of things - jobs, schools, scholarships, asylum, etc - and these AIs are trained on the same human biases that made Microsoft’s Tay turn antisemitic, and make ChatGPT say doctors and lawyers can’t be women. We’re doing this despite knowing the harm it’s amplifying, and buying in to the hype helps increase the harm
@HistoPol @simon @annaleen @BBCWorld existing AIs are already doing billions of dollars in harm to our society just by amplifying ongoing harms, without getting anywhere near the capabilities that the hype says are dangerous. The real multibillion-dollar AI question is: will we choose to stop using it to hurt ourselves?