"Not only are we not close to developing “artificial general intelligence”, we are not even far away from developing AGI, because we haven’t even found a path that could conceivably lead to AGI."
>>
"Not only are we not close to developing “artificial general intelligence”, we are not even far away from developing AGI, because we haven’t even found a path that could conceivably lead to AGI."
>>
"One thing that particularly seems to lead people astray is the way that ChatGPT gives the impression of “apologizing” in response to exterior challenges. OpenAI’s claim that ChatGPT will “admit its mistakes” is worded to suggest that the algorithm both understands that it has made an error and is in the active process of improving its understanding based on the dialogue in progress."
>>
"Making chatbots that seem to apologize is a choice. Giving them cartoon-human avatars and offering up “Hello! How can I help you today?” instead of a blank input box: choices. Making chatbots that talk about their nonexistent “feelings” and pepper their responses with facial emojis is another choice."
>>
"As a society, we’re going to have to radically rethink when and how and even if it makes sense to trust any information that either originates from, or is mediated by, any kind of machine-learning algorithm — which, if you think about it, currently encompasses nearly All The Things."
>>
"Rather than meet societal obligations to invest in education and physical and mental care, AI’s advocates risk creating a two-tier system where artificial facsimiles will be deemed good enough for those without means—but the well-to-do will hire actual humans."
https://techpolicy.press/ai-hurts-consumers-and-workers-and-isnt-intelligent/
New from me & @alex on Tech Policy Press
“This isn’t fixable,” I told @mattoyeah of @apnews. “It’s inherent in the mismatch between the technology and the proposed use cases.”
I worry some that the headline here misplaces the burden of proof. It's on those claiming that there is actually a path from very large Magic 8 Balls to reliable information access systems to show that there is.
>>
Most distressing thing in the article:
"The Associated Press is also exploring use of the technology as part of a partnership with OpenAI, which is paying to use part of AP’s text archive to improve its AI systems."
Please, AP, don't use this.
>>
This article by @willknight has some good points in it, but also some real howlers.
Starting with the good: A clear take-down of the ridiculous interactive fiction session at the UN's "AI for Good" (ugh) conference where people "spoke with" robots.
Knight writes: "But despite the well-known limitations of such bots, the robots’ replies were reported as if they were the meaningful opinions of autonomous, intelligent entities."
>>
https://www.wired.com/story/fast-forward-dont-ask-dumb-robots-whether-ai-will-destroy-humanity/
However, after listing pointers to work on the real harms of so-called "AI", he also adds: "Leading AI experts worry that the pace of progress may produce algorithms that are difficult to control in a matter of years." with a link to coverage of Hinton's AI doomerism media tour.
>>
And then he describes LLMs this way: "It may be best to think of them as preternaturally knowledgeable and gifted mimics that, although capable of surprisingly sophisticated reasoning, are deeply flawed and have only a limited “knowledge” of the world."
>>
The assertion that LLMs are "capable of surprisingly sophisticated reasoning" is supported with a link to an article @willknight wrote on the "Sparks of AGI" paper + criticism of it.
Extruding synthetic text is not reasoning. If the extruded text looks like something sensible, it is because we have made sense of it. I find it dismaying that even critical journalists like @willknight feel a need to repeat these tropes.
Mystery AI Hype Theater 3000 ep 6 is up!
Join me and @alex for a roast of Meta's short-lived LLM "Galactica" and how it was misleadingly marketed.
https://www.buzzsprout.com/2126417/13237698-episode-6-stochastic-parrot-galactica-november-23-2022
Have a listen (or read the transcript) and sit for a while with three of the workers who did content moderation for ChatGPT. Thank you @karenhao for this important window into the human impact of this work.
>>
Seeking recommendations for a podcast app to replace the soon to be discontinued Stitcher. I would like:
Android compatible
OPML import
Ability to group podcasts
No ads displayed on the app (willing to pay once, but not a subscription)
What's good?
This is great news from ACL 2023:
https://2023.aclweb.org/registration/discounted_virtual_registration/
Discounted virtual registration ($0-100) for people attending from regions where ACL registration fees are out of reach.
Please spread the word! Applications due June 28.
Mystery AI Hype Theater is now available in podcast form!
https://www.buzzsprout.com/2126417
@alex and I started this project as a one-off, trying out a new way of responding to and deflating AI hype... and then surprised ourselves by turning it into a series.
Until now, back episodes had only been available as videos, but with expert production assistance from Christie Taylor, you can now listen to us as a podcast!
The first three episodes (from Aug-Sept 2022) are up, with more to come.
From a live tweet of the proceedings around the lawyer caught using ChatGPT:
"I thought ChatGPT was a search engine".
It is NOT a search engine. Nor, by the way, are the versions of it included in Bing or Google's Bard.
Language model-driven chatbots are not suitable for information access.
>>
Honestly, this lawyer is actually lucky, bc he is working within a system that was able to catch his misstep.
When MSFT, GOOG & others present their chatbots as a replacement for search, they are setting people up for similar fails, typically in much less regulated spaces.
>>
Prof. Emily M. Bender (she/her)
Professor, Linguistics, University of Washington. Faculty Director, Professional MS Program in Computational Linguistics (CLMS). If we don't know each other, I probably won't reply to your DM. For more, see my contacting me page: http://faculty.washington.edu/ebender/contact/