@yosh this is where I spend a lot of my ai thought right now too. Like, coming up with models that interact with correctness checks seems like the only way to go from generation to creation reliably, but the expense of every iteration going through correctness checks feels like a lot. It also seems like working on integration of one proofing/correctness approach wouldn’t really further other approaches, so the work seems huge and like it would never get easier/quicker?
esmevane, sorry (ironchamber@mastodon.esmevane.com) on Friday, 17-Nov-2023 10:08:18 JST
yosh (yosh@toot.yosh.is) on Friday, 17-Nov-2023 10:08:20 JST

For me, probably the most interesting application of LLMs/ML/AI in the context of programming is when they're combined with an additional "verification" step.
If I'm generating API endpoint boilerplate, or database queries, then yeah sure, I trust that I can probably eyeball whether something is roughly right. It's fine.
But if I’m porting some critical production system from one language to another. Or changing it in some other substantial but automated way — verification becomes critical.