Yet another thing solved by AgentV3N's prompt routing.
We're moving closer and closer to using 1B models for the first stage of prompt evaluation. A simple "thank you" could be met with a canned response at the agent level, or even just an emoji reaction.
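A minimal sketch of what that first stage might look like. Everything here is hypothetical: the canned-reply table, the word-count cutoff, and the heuristic standing in where a real ~1B classifier would be called.

```python
# Two-stage prompt routing sketch (all names and thresholds are illustrative).
# A cheap first stage decides whether a message needs a real model at all.

CANNED = {"thanks": "You're welcome!", "thank you": "👍"}

def normalize(message: str) -> str:
    """Lowercase and strip trailing punctuation for lookup."""
    return message.strip().lower().rstrip("!.")

def first_stage(message: str) -> str:
    """Route a message: canned reply, small model, or large model."""
    text = normalize(message)
    if text in CANNED:
        return "canned"
    # A real router would call a ~1B classifier here instead of a
    # word-count heuristic.
    return "small" if len(text.split()) < 8 else "large"

def respond(message: str) -> str:
    route = first_stage(message)
    if route == "canned":
        return CANNED[normalize(message)]
    return f"[{route} model handles: {message!r}]"
```

The point is that "thanks mate" never reaches an expensive model: the router answers it directly, and only genuinely substantive prompts pay for heavy inference.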
Meanwhile, ChatGPT by default uses the last model you used in the conversation. Imagine blowing o1 reasoning on "thanks mate". Wouldn't be me.