my favorite kind of AI/LLM criticism media is when the person begins with a huge hedge like "I'm not an AI hater. *of course* AI is a useful technology and it will doubtlessly have transformative effects on the way we work" and then they spend the next 8k words/45 minutes showing how AI isn't actually useful and won't actually transform the way we work. it's like... it's okay to just say that AI sucks
@cwebber i can only read that statement as an outrageous troll. or at least i hope it's an outrageous troll, because otherwise it's the most misguided thing i've ever read about AI and programming. "we also dislike AI slop. which is why we use AI to generate choreography, not dancing" "which is why we use AI to generate sheet music, not musical performance" "which is why we use AI to generate blueprints, not buildings" "which is why we use AI to generate plots, not novels" etc
ai slop sucks! but don't worry, this machine, which we're making out of slop, and which we're feeding lots of slop into, definitely won't produce more slop. i am the chief innovation officer
re: this that has been making the rounds https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/ i'm always struck by sentences like "the technical barrier went up" that don't attribute what happened to any cause in particular. technical barriers are not agents and they do not go up on their own (nor, for that matter, are "technical barriers" one monolithic thing that moves in a single direction). if you're going to make a plan of action, you have to figure out *who and what* changed (the perception of) "technical barriers"
i think you could make a good case that the "technical barriers went up" in web dev in particular because the web became commercialized: when you're worrying about click-throughs and seo and conversion rates, and moving at capital pace, you write code and use frameworks that sacrifice legibility for extraction and dev velocity. view source is useless nowadays because of the buildup of cruft related to those goals (at least partially, imo)
ios user interfaces have become truly nihilistic. buttons on top of buttons. text on top of text. multiple inscrutable hamburgers. nothing has any meaning and all human action is futile
the frustrating thing is that literally anyone who has thought about language and technology for fifteen consecutive seconds could have told you that autocomplete and other writing tools influence beliefs (and *have* been telling you this, over and over, for decades). the other frustrating thing is that slop-pushers *brag about their ability to do this* and right-wing actors are actively exploiting it, but in polite company everyone pretends that's not the case https://mathstodon.xyz/@gregeganSF/116219772468880168
i felt like i needed to try it, mostly so i could understand what my students will likely be expected to be able to do when/if they get programming jobs. i used cursor, which i think is the only service offering a free tier that includes access to a CLI code "agent." i ran the agent in the directory of a hobby compiler project and was initially impressed with its ability to summarize the code—until i realized it was parroting my own docs back at me (1/n)
in as good faith as i can muster, i once again checked to see if a chatbot can actually solve the kinds of technical problems i come up against every day. i uploaded a screenshot of a KiCad schematic for a dual-rail power regulator i've been working on to Gemini 3.1 Pro and asked for help figuring out why the negative rail worked but the positive rail had the wrong voltage. after twenty minutes of back and forth, it finally gives me this
@cwebber i don't know that i'd trust these models for summarization or navigation. even when the outputs are technically correct, they can leave out certain information or frame the information in a misleading way, papering over whatever makes the code unique and materially suited for the task at hand
@cwebber (this is actually my main concern about llms. i think people really underestimate how much llms reproduce the values and expectations baked into their corpus, their reinforcement learning tasks, their explicit engineering, and their product design. and they underestimate the effects that this will have on people's understanding of code and the horizon of what's possible to do with code)
@cwebber i have to say that my personal (skeptical) threat model of AI apocalypse failed to account for how eager people are to put their bank account details in their config files
i'm paranoid that communities of practice with names like "responsible AI" and "critical AI" and especially "ethical AI" (even when done in good faith) mostly serve as permission structures/fig leaves for big commercial AI ("yes, the tech is harmful, but we have people working on addressing that"), and at worst risk entrenching commercial AI by creating a category of professionals whose careers depend on its continued adoption—so that they have something to criticize