I love making hard things easy: using abstractions to automate complex tasks. But that's not what AI is. I can't peel back that complexity, and I can't use an AI's output to generalize to unanticipated examples of that complexity. The boilerplate code emitted by an LLM will always be boilerplate, never a code generator that abstracts out what made that code boilerplate in the first place.
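A minimal sketch of the contrast, with entirely hypothetical names: the first two functions are the kind of near-identical boilerplate an LLM might emit on request, while `make_getter` is a small code generator that captures the pattern, can be read to see what was repetitive, and extends to tables nobody anticipated.

```python
# Boilerplate version: each function repeats the same shape, and a new
# table means prompting for (or pasting) yet another copy.
def get_user(db, user_id):
    row = db["users"].get(user_id)
    if row is None:
        raise KeyError(f"users: {user_id}")
    return row

def get_order(db, order_id):
    row = db["orders"].get(order_id)
    if row is None:
        raise KeyError(f"orders: {order_id}")
    return row

# Abstraction: one generator that names the pattern. It can be "peeled
# back" (read) to see the complexity it hides, and it generalizes.
def make_getter(table):
    def getter(db, key):
        row = db[table].get(key)
        if row is None:
            raise KeyError(f"{table}: {key}")
        return row
    return getter

get_user2 = make_getter("users")
get_invoice = make_getter("invoices")  # an unanticipated case, for free
```

The point isn't that closures are clever; it's that the generator is a reusable artifact, whereas each emitted boilerplate function is a dead end.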
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Friday, 22-Nov-2024 05:33:26 JST
Cassandra Granade 🏳️⚧️ (xgranade@wandering.shop)'s status on Friday, 22-Nov-2024 05:33:28 JST
By way of agreement: while I'm not a professional programmer any more, I have been one for much of my life, and in the year 2024, I'm not even curious about AI.
In addition to all the obvious reasons and moral issues, AI doesn't offer any meaningful abstractions over any problem domain. Abstractions are nice because they let you manage complexity: even when they leak, they can be undone and peeled back to see that complexity again.
AI offers a lie: hard things are easy.