@arcana @anemone @georgia @lucy LLMs are incapable of thought and reasoning; I feel like people often forget that all they do is autocomplete text.
There are inherent limits to how good outputs can be when they aren't at least close-ish to some training data, and it shows. LLMs can't do maths; asking an LLM to generate vertices for a 3D model you describe in text just goes wrong, or even simpler, LLMs can't really do ASCII art.
And even when they should be close to something in the training set, it can go terribly wrong if it isn't something that has been reiterated time and time again.
I tried using Copilot when I wrote a small helper library to encode x86 instructions, and it was the biggest nightmare: the code it generated looked reasonable, but it was riddled with small bugs that took me more time to debug than if I had just written it myself.
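For flavour, here's a minimal sketch of what one tiny piece of that kind of library looks like in C (the function name, register numbering, and buffer API are illustrative, not my actual code): encoding mov r64, imm64, which is a REX.W prefix, the B8+rd opcode, then the little-endian immediate.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative sketch only -- not the actual library. Encodes
       "mov r64, imm64" (REX.W + B8+rd io) into buf, returns the length. */
    static size_t encode_mov_r64_imm64(uint8_t *buf, unsigned reg, uint64_t imm)
    {
        /* REX prefix: W=1 for 64-bit operand size, B = top bit of the
           register number (needed for r8..r15) -- exactly the kind of
           detail that's easy to drop while still "looking reasonable". */
        buf[0] = 0x48 | ((reg >> 3) & 1);
        buf[1] = (uint8_t)(0xB8 | (reg & 7));   /* opcode B8+rd */
        for (int i = 0; i < 8; i++)             /* little-endian immediate */
            buf[2 + i] = (uint8_t)(imm >> (8 * i));
        return 10;                              /* total encoded length */
    }

It's exactly these prefix and operand-size rules where the generated code kept going subtly wrong.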
If someone's job was receiving instructions like "in files [list of files], center all divs with the id 'foo'" and fulfilling them, they'd be in danger, but that's not what software engineering looks like.
Real AGI could eventually replace engineers and reshape things the way you describe, but AGI is still pretty far away, and LLMs are certainly not intelligent.