Conversation

Rich Felker (dalias@hachyderm.io)'s status on Monday, 16-Feb-2026 23:44:08 JST
Rich Felker
@filippo That's missing the point. Your colleagues understand there are consequences to fucking up, avoid doing it, and work to make things right if they do. The slop extruder just digs in and feeds you more slop.
Filippo Valsorda (filippo@abyssdomain.expert)'s status on Monday, 16-Feb-2026 23:44:10 JST
Filippo Valsorda
I wish that those surveys so often cited by InfoSec pundits that ask

Do you fully trust AI output?
Do you always verify AI output?

also asked

Do you fully trust your colleagues' output?
Do you always verify your colleagues' output?

Just to have comparative numbers, you know.
Rich Felker repeated this.
Carlos Fdez Llamas (sirikon@mastodon.social)'s status on Monday, 16-Feb-2026 23:44:36 JST
Carlos Fdez Llamas
@filippo A colleague is responsible for the output even when I'm the reviewer; AI is not.
A colleague is expected to learn from their mistakes and grow in responsibilities; AI only improves if the big tech firm decides to retrain it.
Colleagues are very different from each other, and each one has their own flaws and strengths when you try to trick them into doing something. Meanwhile there are like 5 AIs sharing 90% of the work, and they can be tricked by asking them to write a haiku.