@suzannealdrich
Yep. Imagine how much more useful LLM coding assistance would be if it didn’t require us humans to constantly, actively remind ourselves, “don’t trust any of this, it looks like a definitive answer but it’s not, verify everything.” There’s very much a social aspect to how these systems present their output to us.
Paul Cantrell (inthehands@hachyderm.io) — Friday, 17-May-2024 01:59:26 JST