The inability to understand WHAT generative models are genuinely good at should be studied as its own cognitive bias, I stg. It's like people go looking for exactly the places where we already have obvious, well-understood solutions.
Reading the designs of these attempts reveals so many foolish mental models on the HUMAN side that read as pure software bro to me: e.g., arbitrary limits that feel efficient or something (capping the # of characters the model is allowed to consider from the contract).
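To be concrete about that last one, here's a minimal sketch of the anti-pattern vs. the sane alternative. All names and numbers are hypothetical (MAX_CHARS, count_tokens, etc. are illustrative, not any vendor's API):

```python
# Hypothetical sketch: arbitrary character cap vs. token-aware budgeting.
# Constants and the count_tokens helper are illustrative assumptions.

MAX_CHARS = 4000          # the arbitrary "feels efficient" character cap
CONTEXT_TOKENS = 128_000  # what the model can actually read in one call
RESERVED_TOKENS = 2_000   # headroom for instructions + response


def truncate_by_chars(contract_text: str) -> str:
    """The anti-pattern: silently drop everything past an arbitrary
    character count, so clauses near the end of the contract are
    never seen by the model at all."""
    return contract_text[:MAX_CHARS]


def count_tokens(text: str) -> int:
    """Stand-in tokenizer. Real code would use the model's own
    tokenizer; rough heuristic here: ~4 characters per token."""
    return max(1, len(text) // 4)


def fits_in_context(contract_text: str) -> bool:
    """Budget against the model's real constraint (its token window),
    not a made-up character count."""
    return count_tokens(contract_text) <= CONTEXT_TOKENS - RESERVED_TOKENS
```

The point being: the binding constraint is the model's token window, and most contracts fit in it whole, so the character cap buys you nothing except silently unread clauses.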