“Prompt engineering” is such a bizarre line of work. You’re trying to coax useful output from a machine trained on a huge pile of (hopefully) human-generated text by guessing which sequence of human-like words makes it likely the model will produce something coherent, human-like, and good enough to pass downstream.
You really have no idea how your prompt caused the model to produce its output (yes, you understand the mechanics of inference, but not the actual factors behind any particular decision). And if the output happens to be good, you still have no idea how far you can push your input before the model starts returning bad output.
Prompt engineers talk to the model as if it were human, because that’s the only mental model they have for predicting how it will respond to their inputs. It’s a very poor metaphor for programming, but there is nothing better to reach for.