Haru 春:pride_verify: (harueb@blahaj.zone), Tuesday, 01-Aug-2023 00:29:25 JST:
@thomasfuchs@hachyderm.io This is for real: if you aren't painstakingly checking every line of anything output by these systems (be it code or anything else, frankly), you'd damn well better be willing to take responsibility for whatever wild shit you cosign.
I've seen so much stuff where people go "wow, so amazing," and I look at it, scratch the surface even a little, and it's trash.
The best use case I can see for these things as they are is maybe brainstorming, taking advantage of the hallucination effect for out-of-left-field ideas. But please, for the love of god, don't ship/publish the raw output of these things!!!
Brett Flippin (bflipp@vmst.io), Tuesday, 01-Aug-2023 00:35:20 JST:
@thomasfuchs They probably will be able to eventually, but it's going to be more like how they use the computer in Star Trek: the person does the reasoning and problem solving, the computer does the work. And even now, we're decades away from that.
embix (brezelradar@norden.social), Tuesday, 01-Aug-2023 00:37:51 JST:
@thomasfuchs You seem to use "Luddite" like a slur. Why?
Justin 🌻 (onyxraven@hachyderm.io), Tuesday, 01-Aug-2023 00:44:54 JST:
@thomasfuchs All the most "successful" code uses I've seen are as easily solved with the right abstraction library to eliminate boilerplate, or by copy-pasting examples from the source or from other users, because that's exactly what the LLM is doing. I'll give it the benefit that it might fit more folks' learning/discovery models, and maybe compose answers a tad more easily. E.g., asking it to write regex, awk, or jq? It's fine. Not great, but it has unstuck me.
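To make that concrete, here is a minimal Python sketch of the check-it-before-you-trust-it workflow the thread keeps circling back to: the regex stands in for the kind of pattern an LLM might hand back, and the test loop is the painstaking verification. The pattern and test cases are hypothetical illustrations, not taken from any of the posts above.

```python
import re

# Hypothetical pattern of the sort an LLM might suggest for ISO-8601
# dates (YYYY-MM-DD). It looks plausible, but it happily accepts
# month 13 or day 99: surface-level correctness only.
llm_pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# The painstaking check: run the suggestion against cases you chose
# yourself, including cases designed to break it, before shipping.
cases = {
    "2023-08-01": True,   # a real date: should match
    "2023-13-01": False,  # month 13: the pattern wrongly accepts this
    "23-08-01": False,    # two-digit year: correctly rejected
}

for text, expected in cases.items():
    got = bool(llm_pattern.match(text))
    verdict = "ok" if got == expected else "WRONG"
    print(f"{text!r}: matched={got}, expected={expected} -> {verdict}")
```

Running this flags the second case as WRONG, which is exactly the "scratch the surface" failure the original post describes.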
keeri :blobfoxuno: (keeri@pawb.fun), Tuesday, 01-Aug-2023 01:00:01 JST:
@thomasfuchs I recommend a little more research :blobfox_w:
Alex Coventry (alx@mastodon.mit.edu), Tuesday, 01-Aug-2023 01:22:07 JST:
@thomasfuchs It's also a category error to equate contemporary LLMs with all possible future AI systems. I'm with you on your skepticism regarding the reasoning capabilities of contemporary LLMs, though.
Alex Coventry (alx@mastodon.mit.edu), Tuesday, 01-Aug-2023 01:33:17 JST:
@thomasfuchs OK, but "It's a category error to assume AI will ever be able to write complex software" is going to look a bit silly if a system capable of writing complex software comes along. 🙂
Jyrgen N (jyrgenn@mas.to), Tuesday, 01-Aug-2023 01:34:53 JST:
@thomasfuchs Right now my employer is evaluating candidates to replace the hospital information system (HIS) we are currently running. This will in any event be a horrendously complex software product, likely pieced together from a supplier's pre-existing components with lots of glue layers, adaptations, and interfaces to other systems.
Jyrgen N (jyrgenn@mas.to), Tuesday, 01-Aug-2023 01:34:53 JST:
@thomasfuchs A HIS like this is, I think, on the more complex side as software goes.
Wouldn't it be great fun to take these tons of binders of specifications, requirements, and experience with the existing system, hand them to the "AI can program" proponents, and let AI have its way with them? I imagine that would be a real fun train wreck to watch.
Stuart McHattie (sdjmchattie@hachyderm.io), Wednesday, 02-Aug-2023 03:33:39 JST:
@thomasfuchs It's quite common for someone to believe that something they don't understand, but which appears to do useful things, has no limits, or that we're close to solving its limits. When something is sufficiently inseparable from magic, it might as well be magic.