pistolero :thispersondoesnotexist: (p@freespeechextremist.com)'s status on Tuesday, 21-Nov-2023 02:22:24 JST
@Re_L @hackernews
> Current thing AI is a grift, but tech innovation does progress in bursts.
Thus the "call me when OpenAI fires all their programmers". That was my point: ChatGPT is not worth panicking about any more than Github Copilot was. These are incremental steps and they're in line with hardware advances rather than any qualitative change in approach.
> Even though pets.com isn't worth a billion the effect of the technology as a whole has been massive.
Ha, you're dating yourself a bit, maybe; I'm old enough to know what it was (preceded my entry into the industry but I did work with one of their Perl hackers a long time back), but that's a gamble nowadays. Gotta use WeWork, maybe. I don't know, a lot of hype-trains have pulled into the station.
> there is a concerted effort to turn the software developer into a line worker,
Sure, Calcanis the Land-Retard was up on some stage last year saying that coders get paid way too much, all of the other VCs nodded solemnly, and it's been like that since the 90s at least. Nobody has replaced mathematicians with calculators yet, and as for the less computer-sciencey work, it's not any more likely to happen with software engineers than with industrial engineers or mechanical engineers.
> There is quite a large number of people who just do api integrations after a 6 month bootcamp and they can be replaced by something like this or just drop the bootcamp requirement entirely.
The hard part isn't mashing the code into the machine. Tech that acts as a force-multiplier for skilled people ends up just raising the bar: the guys who perform ten percent better than the mean end up ten times more productive. CPU layouts are mostly done by machines rather than humans, but this doesn't mean you can hire an idiot to lay out your CPU. That's the shape of it, that's how it shakes out.
But programming is different from that kind of activity. You lay out transistors by hand until a machine starts laying them out, and then you move up to the larger architectural decisions; with software, when you abstract something, it ends up part of the language runtime. Hardware moves, garbage collection gets cheap enough to be practical; we go from standing in line for time on the one machine to fully automated provisioning across a network, and very few coders have to count microseconds. Once we can abstract something, we incorporate it.
> AI assisted borrow checker,
We have that; it's called "Any HLL other than Rust". I don't (for once!) intend to denigrate Rust here; the point is just that if Rust requires something of you and that thing can be automated, the automation will get incorporated one step up.
That is to say, if we could make a machine do a better job of memory management than existing GCs, we'd incorporate that into the GCs rather than automating the process of writing Rust. The second we can make a machine do something as well as the median feature-factory coder, we fold it into the big ball of mud. This is one of the problems with IDEs: they do the process backwards. You're communicating with the IDE rather than using a language that's expressive enough, so your "program" becomes a series of IDE manipulations rather than what's on the page; an IDE that does enough folding is more like a preprocessing step than an editor. This is the issue with AI tooling: it improves that process, and that's not the process that needs help. The meaningful, irreducible things are the decisions and design trade-offs, not the literal bytes in the file, so time is more effectively spent producing a better notation for those decisions (by improving the language or the runtime) than it is putting the coder on rails.
So you communicate to the compiler "For each of these strings in this array, call this function and put its integer value into this other array of integers, sort those integers, mirror each swap in the integer array with a swap in the string array, and then print the first three values from the string array", and the compiler emits instructions to request memory from the OS, populate those arrays, perform the swaps, etc. Your intent is to list the first three of something by some metric, maybe the three shortest names, whatever. You don't need to worry about generating that code; the compiler does it. One step up from that, you have a language that abstracts all of that: you don't even tell the compiler, you just say `SELECT name FROM table ORDER BY length(name) LIMIT 3` or you say `names.sort_by(&:length)[0,3]` or `cat names | awk '{print length($0), $0}' | sort -n | sed 3q`, and for any of those languages, if the machine-code representation of the actions matters, you have a facility to drop down and speak to the computer in those terms, but for the most part we don't write that sort of thing any more, a machine does it. From COBOL to Cucumber, the "automate the process of telling language $x something, but don't incorporate that into a language $y" approach has fallen over, and that's all AI autocomplete is.
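To make that concrete, here's a minimal Ruby sketch of the two levels side by side (the sample names are invented for illustration):

```ruby
# Long-hand: build the parallel integer array, sort it, and mirror the
# ordering back onto the original strings yourself.
names   = ["Barbara", "Ada", "Donald", "Grace"]           # illustrative data
lengths = names.map { |n| n.length }                      # "put its integer value into this other array"
order   = lengths.each_index.sort_by { |i| lengths[i] }   # the swaps, expressed as a permutation
puts order.map { |i| names[i] }.first(3)

# Abstracted: same intent, the runtime does the bookkeeping.
puts names.sort_by(&:length).first(3)
```

Same output either way; the only question is whether you or the runtime carries the bookkeeping.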
> AI assisted Test Driven Software,
AI-based fuzz testing, done in a more accessible way, actually would be very cool. I mean, what you do now, right, you take production data, you train up a Markov chain of things a user (whether the user is a human or a machine) is likely to do next, then you give it a means of generating test data, and it's all bespoke. "Here are the logs, here is the staging server or a build of the code or whatever we ship, find some bugs" would be a very useful case for AI, and even better would be "Here's a feed of the logs, here's the staging server or a build of the code or whatever, generate the minimal test case that reproduces that bug, check the failing test into the codebase, and then profile the code to figure out what paths are involved in the error, annotate those, then email a coder". "Dereferenced a null pointer" is the very base level of analysis for a bug: the actual bug (assuming the overwhelmingly likely case that it's in your code rather than in the compiler or in the OS) is either "failed to account for something in a decision tree" or "failed to properly convey that set of decisions to the machine".
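For what it's worth, the "train a chain on your logs, then replay plausible sessions at staging" half of that is small enough to sketch in Ruby; everything below (the log format, the action names, the file name) is hypothetical, just to show the shape of it:

```ruby
# Sketch: learn a first-order Markov chain of user actions from production
# logs, then sample plausible action sequences to throw at a staging server.
# Log format, action names, and file name are all hypothetical.

# Assume each log line is "session_id action", e.g. "42 add_to_cart".
transitions = Hash.new { |h, k| h[k] = Hash.new(0) }

File.foreach("production.log")
    .slice_when { |a, b| a.split[0] != b.split[0] }   # group consecutive lines by session
    .each do |session|
  actions = session.map { |line| line.split[1] }
  actions.each_cons(2) { |from, to| transitions[from][to] += 1 }
end

# Weighted random walk over the learned transition counts.
def sample_next(counts)
  pick = rand(counts.values.sum)
  counts.each { |action, n| return action if (pick -= n) < 0 }
end

state = "login"                       # assumed entry point
20.times do
  puts state                          # in real life: drive the staging server with this
  nexts = transitions[state]
  break if nexts.empty?
  state = sample_next(nexts)
end
```

A first-order chain is crude and real logs are messier than this, but the point stands: the training data is your own traffic, not somebody else's corpus, which is exactly why it's all bespoke today.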