@EricCarroll @noyes @jik It really is, especially considering that LLMs are gigantic neural networks that get trained to say things that match a certain pattern. Deviate from that pattern, and the model parameters get updated so it’s less likely to deviate from it next time.
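For anyone who hasn't seen it spelled out, that loop looks roughly like this. A minimal sketch in plain numpy, with a toy 4-token model and made-up sizes, not any real LLM's code: deviation from the target pattern shows up as loss, and the gradient step nudges the parameters so the same deviation is less likely next time.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # toy "parameters" (a real LLM has billions)
x = np.eye(4)[0]              # one-hot input "token"
target = 2                    # the token the training data says comes next

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(100):
    probs = softmax(W @ x)            # model's guess at the next token
    loss = -np.log(probs[target])     # high when the guess deviates from the pattern
    grad = np.outer(probs - np.eye(4)[target], x)  # cross-entropy gradient w.r.t. W
    W -= 0.5 * grad                   # update: less likely to deviate next time
```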
That’s pretty much what they do, isn’t it, for the guys who write the checks?