In my experience, the utility of AI increases as you expand beyond your comfort boundaries...
... On the other hand, there is a Goldilocks zone, because if you go too far out of your comfort zone, you will not be able to distinguish fact from fiction.
Case in point: The attached image is from a 3D demo I threw together in about 2-3 hours. It's in a 3D Python-based framework called Visier. Spinny 3D shit hosted on the web. I know NOTHING about Python, Visier, or hosting 3D frameworks on servers. Had I done it the hard way, it would have taken me 3 weeks minimum, not 3 hours. I think THAT is the true power of #vibecoding.
@alex@janl i was using auto-complete mode and the amount of subtle mistakes it inserted meant it would consistently cost me more in debug+review cycles than it would cost me to pay slightly more attention in the first place
@whitequark@janl in auto-complete mode, no surprise, it is somewhat usable, especially when it’s a lot of boilerplate. But as a “coding agent” it’s, uh, /suboptimal/
@alex@david_chisnall@janl if you can automate predictions for something to the point where an LLM can do it semi-reliably, in almost every case you could, and i will argue should, define an abstraction that does it deterministically
I read that a lot, but to me 'a lot of boilerplate' is a code smell. A tool that makes it easy to write a lot of boilerplate is a bad thing, because it removes the incentive to remove the boilerplate.
If you need to duplicate a lot of code across different applications, you've introduced some fragility. It's hard to change the underlying APIs because everyone has copied and pasted the same thing (with or without an LLM). That's a problem for long-term evolution of a set of APIs. An LLM here just makes it easy to ship things that make everyone downstream accumulate technical debt faster.
Say I have an event-driven system and I want to isolate each event type in its own “container” (class, module, whatever). Consequently there will be a lot of structural similarity between “containers”, because they will expose the same interface.
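A minimal sketch of the pattern being described (the interface and class names here are hypothetical, not from the thread): each event type gets its own container class, and every container implements the same handler interface, which is where the structural repetition comes from.

```typescript
// Shared interface: every per-event "container" exposes the same shape.
interface EventHandler<T> {
  readonly eventType: string;
  handle(payload: T): string;
}

// One container per event type -- note how structurally similar they are.
class UserCreatedHandler implements EventHandler<{ id: number }> {
  readonly eventType = "user.created";
  handle(payload: { id: number }): string {
    return `created user ${payload.id}`;
  }
}

class UserDeletedHandler implements EventHandler<{ id: number }> {
  readonly eventType = "user.deleted";
  handle(payload: { id: number }): string {
    return `deleted user ${payload.id}`;
  }
}
```

Each new event type repeats the same skeleton with only the type name and body changing, which is exactly the kind of boilerplate an LLM autocomplete is good at spitting out.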
Some languages are verbose by design, like HTML. If you are building a form you’ll have to repeat the same things over and over again, and it’s… okay? Overall, all UI/graphics code is very verbose because you have to set up a lot of things. And when you take the component approach you’ll end up with the case above.
LLM autocomplete predictions are just more context-aware than plain autocomplete (IntelliSense or whatever) and can save you some time on typing, because they can just spit out a pre-filled method call, with all arguments filled with values from the current context.
No wonder that a thingie made to auto-complete based on a context does a decent job as context-aware auto-complete!
@alex@david_chisnall@janl people have been writing abstractions over HTML for almost as long as HTML has existed. you can go ahead and use .jsx/.tsx in almost any environment today; the abstractions have won
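To illustrate the point about abstracting over HTML's verbosity, here is a minimal framework-free sketch (the `Field` type and helper names are hypothetical): instead of hand-writing the same `<label>`/`<input>` pair for every form field, a small helper generates the repetition — the same move jsx/tsx components make at a larger scale.

```typescript
// Describe a form field once, as data.
interface Field {
  name: string;
  label: string;
  type?: string;
}

// The repeated <label>/<input> pair, written exactly once.
function renderField({ name, label, type = "text" }: Field): string {
  return (
    `<label for="${name}">${label}</label>` +
    `<input id="${name}" name="${name}" type="${type}">`
  );
}

// The whole form is now a map over data instead of copy-pasted markup.
function renderForm(fields: Field[]): string {
  return `<form>${fields.map(renderField).join("")}</form>`;
}

const html = renderForm([
  { name: "email", label: "Email", type: "email" },
  { name: "password", label: "Password", type: "password" },
]);
```

Adding a tenth field is one more data entry, not ten more lines of markup — which is the "define an abstraction that does it deterministically" argument in miniature.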
I am not 'scared'. I have been a developer of one sort or another for 35 years.
I know exactly what I am capable of. It took me 6 hours to learn how to back up and restore a WP site. As you get older, tasks get harder.
When I was 21, I could pull a 72 hour coding marathon fueled by 20 liters of Pepsi Max and a couple of short naps.
Today, I am lucky to get a script running in a day.
I actually do not like reading HTML tutorials; they are written very poorly, contain poor examples, and as anyone who has used #StackOverflow will testify, half the 'solutions' are wrong.
With AI, you just ask "Given these parameters, and these outcomes, how do I do blah?"
@n_dimension@ignaloidas people say this shit about getting older but i'm significantly more productive today than when i was in my early 20s. i aim for a bigger scope, i have a higher bar for quality, and i finish projects earlier anyway
@alex@david_chisnall@janl once more, the moment you are reaching for a snippet is an excellent moment to consider whether it could have been a function
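A concrete sketch of the "snippet → function" point (the clamping example is mine, not from the thread): the kind of snippet people reach for repeatedly is usually a small function waiting to be extracted.

```typescript
// Before: the snippet, pasted wherever it is needed:
//   const x = Math.min(Math.max(raw, 0), 255);
//
// After: one deterministic abstraction, reused everywhere.
function clamp(value: number, lo: number, hi: number): number {
  return Math.min(Math.max(value, lo), hi);
}

const x = clamp(300, 0, 255); // 255
```

Once the snippet is a named function, the repetition disappears and there is nothing left for an autocomplete — LLM or otherwise — to mispredict.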
@whitequark@david_chisnall@janl and to make jsx/tsx components you have to set up said components, and all the code around them is kinda boilerplate. Hardly any IDE has enough snippets to automate that.
But I totally agree that with an LLM you trade speed for attention, especially if it’s non-statically-typed code, so your IDE can’t catch bullshit on the spot.
I’m not defending LLMs: they are overhyped and they don’t live up to their promised utility. For me they are somewhat time-saving in some scenarios, but overall it’s meh. I don’t believe that the current architecture can achieve anything other than ruining the knowledge stores we had before
@doragasu the experiments i ran were all on code so mind-numbingly boring i was procrastinating doing it because it made me feel like my brain is fossilizing
@whitequark Not defending AI, I think it should be burned in flames along with the companies training them, but I think you are one of the most competent hackers out there. If an AI could match your level, it would definitely be something.