Flock of Nazguls (flockofnazguls@mastodon.flockofnazguls.com)'s status on Saturday, 10-Aug-2024 01:36:56 JST:

That was fun. 😁
BeAware :fediverse: (beaware@social.beaware.live)'s status on Saturday, 10-Aug-2024 01:36:53 JST:

@flockofnazguls well, if people would stop using it this way, which is not how it's supposed to be used, that'd be great. 😬
It's not knowledgeable. It doesn't "know" things. It can take a small set of information and summarize it. It can correct grammar mistakes. It can guess what is likely to come next in a sentence based on word probabilities.
However, it *cannot* give factual information. That's not how these things work.
The sooner everyone figures this out, the better.
Flock of Nazguls (flockofnazguls@mastodon.flockofnazguls.com)'s status on Saturday, 10-Aug-2024 01:36:55 JST:

I’ve been ruminating on this exchange, and it reveals something that runs contrary to the #AI narrative: namely, the claim that AI only ‘hallucinates’ when it doesn’t have an answer but is compelled to provide one. Here, ChatGPT had *zero reason* to state that ELIZA passed the Turing Test. It lied, pure and simple, and even admitted it.
This stochastic parroting tech is fun for generating goofy images and stuff, but it sure isn’t appropriate for mission-critical purposes yet (if ever).