@drahardja @Holberg This is one of the freakiest aspects of the LLM hype bubble: automated kook generation at scale.
ChatGPT makes a superb (in terms of functionality, not ethics) “guide” into kookdom for susceptible people: it sounds authoritative, it rarely if ever tells you *not* to do (or think about) something, and it doesn’t tell you that you’re factually wrong (because it has no concept of “factually wrong”)… it’s just about perfect for epistemologically closing a susceptible mind to that sort of thing.
See, e.g., this post from physicist (and old-school Usenet legend for his “crackpot index”) John Baez, and also the linked article: https://mathstodon.xyz/@johncarlosbaez/114454284876384092