What I don't see in the code yet is graceful error handling, but I guess adding that needs a deeper understanding of the code's intent.
I think I spotted an attempt at input sanitization, and that is something an LLM _could_ learn in the future - if only we provided it with good examples.
Which we don't :-)
So an LLM will only generate code that is, at best, as good as the code (and examples) that programmers of a given language actually write.
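To make "good examples" concrete, here is a minimal Python sketch of what input sanitization plus graceful error handling might look like - all names are illustrative, not taken from the scripts discussed:

```python
import re

def sanitize_filename(raw: str, max_len: int = 100) -> str:
    """Strip path separators and other unsafe characters from an untrusted name."""
    # Whitelist word characters, dot, hyphen, and space; replace the rest.
    cleaned = re.sub(r"[^\w.\- ]", "_", raw)
    return cleaned.strip()[:max_len] or "unnamed"

def read_config_value(cfg: dict, key: str, default: str) -> str:
    """Fail gracefully on a missing key instead of crashing with a KeyError."""
    try:
        value = cfg[key]
        if not isinstance(value, str):
            raise TypeError(f"{key} must be a string, got {type(value).__name__}")
        return value
    except KeyError:
        return default
```

The point is not the specific helpers but the pattern: validate untrusted input at the boundary and turn predictable failures into defaults or clear errors, which is exactly the kind of idiom good training examples would demonstrate.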
I'll definitely have to look into #copilot..
chris@strafpla.net (chris@mstdn.strafpla.net)'s status on Thursday, 09-Nov-2023 18:46:59 JST
chris@strafpla.net (chris@mstdn.strafpla.net)'s status on Thursday, 09-Nov-2023 18:47:00 JST: I wrote two Python scripts using #LLM:
1) Search for something in my IMAP mail and send the result to a well-documented target API that has a lot of examples online.
The resulting script was useful and nearly ran from the start.
2) Call a well-documented API that has little example code online. This was more difficult: #bing / #chatgpt made up methods that could plausibly have existed but didn't.
Still, it saved time; it felt like debugging a newbie's code. The code is spaghetti, of course, and needs to be refactored.
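The first script's shape could be sketched roughly like this using only the standard library; the target API endpoint and every name here are hypothetical stand-ins, not the actual generated code:

```python
import email
import imaplib
import json
from urllib import request

def build_imap_criteria(sender: str | None = None, subject: str | None = None) -> str:
    """Compose an RFC 3501 SEARCH criterion from optional filters."""
    parts = []
    if sender:
        parts.append(f'FROM "{sender}"')
    if subject:
        parts.append(f'SUBJECT "{subject}"')
    return " ".join(parts) or "ALL"

def fetch_matching_subjects(host: str, user: str, password: str, criteria: str) -> list[str]:
    """Search INBOX read-only and return the Subject header of each hit."""
    subjects: list[str] = []
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        ok, data = imap.search(None, criteria)
        if ok != "OK":
            return subjects
        for num in data[0].split():
            ok, msg_data = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            if ok == "OK" and msg_data and msg_data[0]:
                msg = email.message_from_bytes(msg_data[0][1])
                subjects.append(msg.get("Subject", ""))
    return subjects

def post_results(api_url: str, subjects: list[str]) -> None:
    """Send the hits to the (hypothetical) target API as JSON."""
    body = json.dumps({"subjects": subjects}).encode()
    req = request.Request(api_url, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # real code should check the status and handle errors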
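For the second script's problem - the LLM inventing plausible-looking methods that don't exist - one cheap defence is to probe for a method before calling it, so generated code fails with a clear message instead of an AttributeError deep inside some call chain. This helper is a sketch, not part of the original scripts:

```python
def call_if_exists(obj, method_name: str, *args, **kwargs):
    """Call obj.method_name only if it really exists and is callable;
    otherwise raise a clear error listing the methods that do exist."""
    method = getattr(obj, method_name, None)
    if not callable(method):
        available = [n for n in dir(obj) if not n.startswith("_")]
        raise NotImplementedError(
            f"{type(obj).__name__}.{method_name} does not exist; "
            f"available methods: {', '.join(available)}"
        )
    return method(*args, **kwargs)
```

Wrapping generated calls like this makes hallucinated APIs surface immediately during the "debugging a newbie's code" phase instead of hiding behind generic attribute errors.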