For me, probably the most interesting application of LLMs/ML/AI in the context of programming is when they’re combined with an additional step of “verification”.
If I’m generating API endpoint boilerplate or database queries, then yeah, sure, I trust that I can probably eyeball whether something is roughly right. It’s fine.
But if I’m porting some critical production system from one language to another, or changing it in some other substantial but automated way, verification becomes critical.
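One simple shape that verification step can take is differential testing: run the original and the machine-generated version on the same inputs and fail loudly on any divergence. A minimal sketch, with both versions written in Python purely for illustration (the function names and the toy cents-parsing logic are hypothetical, not from any real system):

```python
import random

def legacy_parse_amount(s: str) -> int:
    """Stand-in for the original implementation: parse '12.34' into cents."""
    dollars, _, cents = s.partition(".")
    return int(dollars) * 100 + int(cents or "0")

def ported_parse_amount(s: str) -> int:
    """Stand-in for the automatically ported implementation under test."""
    whole, _, frac = s.partition(".")
    return int(whole) * 100 + int(frac or "0")

def differential_check(trials: int = 1000) -> None:
    """Feed identical randomized inputs to both versions; any mismatch fails."""
    rng = random.Random(0)  # fixed seed so failures are reproducible
    for _ in range(trials):
        s = f"{rng.randrange(10_000)}.{rng.randrange(100):02d}"
        assert legacy_parse_amount(s) == ported_parse_amount(s), f"diverged on {s!r}"

differential_check()
```

In a real port the two sides would live in different languages, with the harness shelling out to each; the point is that the check is mechanical and doesn’t rely on eyeballing the generated code.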