Conversation

Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop), Thursday, 02-Jan-2025 21:35:43 JST:
If you're writing a technical deep-dive blog post about LLMs and you don't start with the fact that there is still no evidence of their utility, and still no ethical way to train them, you've undercut your own post so fundamentally that there's really no point to it other than building hype.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop), Thursday, 02-Jan-2025 21:37:28 JST:
If you want to claim that a technology is useful, that takes either empirical evidence or some theory that would predict utility. Neither exists for LLMs.
Everything written about them that's not critical is just optimistic fluff at *best*.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop), Thursday, 02-Jan-2025 21:42:07 JST:
I swear, 2025 is going to have to be when I stop being even a little bit nice about "AI." We've had almost three years of this hype cycle, and we've got fucking nothing to show for it but theft of artistic labor at truly staggering scales, massive increases in energy usage, and mind-boggling wealth transfer to an end times cult.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop), Thursday, 02-Jan-2025 21:45:40 JST:
If you're a tech influencer, and you're still using your platform to push AI in 2025, you've had every chance to change course. I absolutely do not take a positive impression from the choice to keep pushing that bullshit after *years* of debunking.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop), Thursday, 02-Jan-2025 21:48:51 JST:
Writing positive shit about AI in 2025 shows at a *bare minimum* that you haven't done the research before you start trying to sell your expertise. More realistically, it often means you don't care about the truth of what you write.
Either way, the choice to push AI hype in 2025 undermines everything else that you say and do.
yuhasz01 (yuhasz01@mastodon.social), Friday, 03-Jan-2025 07:18:13 JST:
@xgranade AI generative models and tools are very, very primitive at this point and of suspect accuracy. They rely on possibly illegal, privacy-violating training data sets, and the algorithms have terribly biased designs built into them as well. It may be realistic, for now, to focus on automation, robotic tasks, and fast information-processing use cases.
For anything more, the science, the math, complex models, agnostic algorithms, and more comprehensive data sets are not yet developed.
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop), Friday, 03-Jan-2025 07:18:13 JST:
@yuhasz01 The trouble is "at this point." There's no a priori reason to think that the problems with LLMs can be solved by putting more development and effort into LLMs. Rather, the best available evidence suggests that those problems are inherent to using LLMs at all, and that a completely different approach would be needed to offer any hope of factual accuracy.
Alexandre Oliva (lxo@gnusocial.jp, moving to @lxo@snac.lx.oliva.nom.br), Friday, 03-Jan-2025 10:16:09 JST:
I was with you while you were writing LLM and "AI", but when you removed the quotes, that triggered me, FWIW. LLM and AI are not even in the same field. But ultimately you're right: there's no prospect whatsoever for AI in 2025.