For the first time in my lifetime, I am being asked to pay 15 or 20 USD per month for Large Language Model (LLM)-based software services that unambiguously state the following in their terms and conditions:
1. Output may not always be accurate.
2. Output must be evaluated for accuracy and appropriateness, i.e. double-checking responses for accuracy is mandatory.
3. Output may be incomplete, incorrect, or offensive, i.e. it may contain misleading information and/or factual inaccuracies.
4. Output may result from hallucination, which in this context refers to the generation of erroneous or misleading information.
Like LLM-based software services, image recognition and recommendation systems can also produce incorrect, inaccurate, or offensive output. How many users are willing to pay a monthly subscription of 15-20 USD for such software products and services? Most of us pay for expert-assisted medical diagnosis, which combines the judgement of healthcare professionals with advanced technologies. If one is unfortunate, such a diagnosis may still be wrong or inaccurate. However, medical diagnosis is not the widely adopted, scalable solution that LLM-based software service providers are hoping for.
References:
1. https://openai.com/policies/terms-of-use
2. https://brave.com/leo/
#Bullshit #Chatbots #LLM #ZeroTrustInformation #ChatGPT #LargeLanguageModels #LargeLanguageModel
Srijit Kumar Bhadra (srijit@shonk.social)'s status on Saturday, 27-Jan-2024 15:36:55 JST