For the first time in my lifetime, I am being asked to pay 15 or 20 USD per month for Large Language Model (LLM)-based software services that unambiguously state the following in their terms and conditions:
1. Output may not always be accurate.
2. Output must be evaluated for accuracy and appropriateness, i.e. double-checking responses for accuracy is mandatory.
3. Output may be incomplete, incorrect, or offensive, i.e. it may contain misleading information and/or factual inaccuracies.
4. Output may be the result of hallucination. In this context, hallucination refers to the generation of erroneous or misleading information.
Like LLM-based software services, image recognition and recommendation systems can also produce incorrect, inaccurate, and offensive output. How many users are willing to pay a monthly subscription of 15-20 USD for such software products and services? Most of us pay for expert-assisted medical diagnosis, which requires the collaboration of healthcare professionals and advanced technologies. If one is unfortunate, such a diagnosis may be wrong or inaccurate. However, medical diagnosis is not the widely adopted, scalable solution that LLM-based software service providers are hoping for.
It was not the students' use of a #ChatBot that was the problem; rather, they were using material found on the internet that *itself* was created by a hallucinating chatbot and published without verification!
This is a type of model collapse we will be dealing with in the near future, and not just at universities.
Anthony (abucci@buc.ci)'s status on Saturday, 20-Jan-2024 19:17:09 JST
Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.
LLMs and image generators mass ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated and not even credited, this ingestion puts negative pressure on the sharing of such things. Creative acts functioning as seed for future creative acts become depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.
Eating your seed corn is meant to be a last-ditch act taken out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.
...hardly any country in the world can still afford this.
As for my question about the tractors that need retrofitting, I've made a big step forward regarding the non-narrow-gauge tractors (#Schmalspurtraktoren) for #Deutschland, by the way:
Run an image-recognition algorithm over all the posted photos and videos of the farmers' protests and have, for example, #ChatGPT count and cluster the tractors.
It's the ones that still run well (and the somewhat larger ones too...
You’ve probably heard about The New York Times lawsuit against OpenAI. But the details are impressive: The NY Times provided 100 exhibits of ChatGPT completing articles almost word for word, and the suit seeks the deletion of all GPT models.
Could it succeed, and what would be the consequences? I spoke with general counsel Cecilia Ziniti and Techdirt editor @mmasnick about its chances.
I just saw a post that referred to ChatGPT as "Mansplaining as a service", and it is so wonderfully correct - instant generation of superficially plausible yet totally fabricated nonsense presented with unflagging confidence, regardless of topic, without concern, regard, or even awareness of the expertise of its audience :D #chatgpt #mansplaining #GenerativeAiIsGoingGreat
I think I get why #AI is "the next big thing" for #Microsoft. It's about Search money.
I think they think #ChatGPT is the next Google search. Instead of a list of sites, you get what you are looking for in those sites. Instead of image searching & getting a bunch of crap & porn, it makes the image you are looking for. Instead of searching for how to write a cover letter, it will compose it for you.
It's not about being Siri on roids. It's about being the next Google.
Who are these people that #ChatGPT says co-founded #Lucire with me? Iʼve never heard of them. Is ChatGPTʼs programming so racist it canʼt handle a person of colour as a sole founder so it adds white-sounding names to the story?
FYI, in case you were blocking #ChatGPT's GPTBot via network blocks: on 30 November they changed to a different set of addresses, published at https://openai.com/gptbot.json
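For anyone maintaining such blocks, the published list could in principle be turned into firewall rules automatically. The sketch below is a minimal illustration, assuming the JSON at that URL has a `prefixes` array of `ipv4Prefix` entries; that schema is an assumption inferred from similar bot IP-range files, so inspect the live file before relying on it.

```python
import json

# Hypothetical sample of what https://openai.com/gptbot.json might return;
# the real schema may differ -- check the live file first.
SAMPLE = json.loads(
    '{"prefixes": [{"ipv4Prefix": "192.0.2.0/24"},'
    ' {"ipv4Prefix": "198.51.100.0/24"}]}'
)

def prefixes_to_nft_rules(payload):
    """Convert the assumed prefix list into nftables drop rules (IPv4 only)."""
    rules = []
    for entry in payload.get("prefixes", []):
        cidr = entry.get("ipv4Prefix")
        if cidr:
            rules.append(f"add rule inet filter input ip saddr {cidr} drop")
    return rules

for rule in prefixes_to_nft_rules(SAMPLE):
    print(rule)
```

A gentler, documented alternative is a robots.txt entry (`User-agent: GPTBot` / `Disallow: /`), which asks the crawler to stay away rather than dropping its packets, though it relies on the bot honoring it.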
#ChatGPT Plus can not only OCR (extract the text from) an image/meme but also answer the question it contains:
"The question in the image is a common riddle. It states: "A farmer had 15 sheep, and all but 8 died. How many are left?" The answer is 8 because the riddle is playing on the wording. It's not asking for the number of sheep that died, but rather it says "all but 8" meaning all except for 8 died. So 8 sheep are left."
AI in schools: Rheinland-Pfalz also buys Fobizz licences for teachers
Following Mecklenburg-Vorpommern, Rheinland-Pfalz is now purchasing Fobizz licences for its teachers, making AI tools and professional-development courses available.
For non-Germans: #AxelSpringer publishing, particularly #BILD, is essentially the German print equivalent of #FoxNews. They are well known for spreading false information. Pouring oil on the fires of hate, fear, and envy sells their papers.
If this is the quality of sources #ChatGPT is built on, it's bound to be a far-right stochastic shit mill.