@LianSirenia@kitsunes.club I don't like the word "hallucinate" in the context of machine learning. It implies that machine learning models can actually think, can make mistakes, and have some kind of mental process similar to a living being's. This is obviously not true.
I do like "word-association soupifier"; it's a bit more formal than "bullshit-generator", which I often like to use.
SuperDicq (superdicq@minidisc.tokyo) — Wednesday, 07-May-2025 18:39:48 JST