Writing a little side project iOS app. It’s been a minute, and this is my first time using the combo of SwiftUI + SwiftData.
Lots of little frictions and frustrations, gaps and bumps in the tools, but the big headline is this:
It’s really good.
1/
“Really good” as in implementing the same thing using the Objective-C and UIKit of 2010 would easily have taken 2x the time and effort — and probably more like 4x or 5x.
That’s a big number, but I don’t think it’s an exaggeration. The problems the tools solve, the simplicity with which the pieces fit together, the whole classes of developer errors which are now statically checked or simply ruled out altogether…it’s just worlds away from those first iOS apps I wrote back in 2009 or whenever the heck it was. (Still miss you, MyCast Weather!)
2/
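(For the curious, the flavor of code that earns the praise above. This is only a minimal sketch of a SwiftData model driving a SwiftUI view, not code from the actual app; the `Note` type and its fields are invented for illustration.)

```swift
import SwiftUI
import SwiftData

// One macro replaces the model editor, NSManagedObject subclasses,
// and most of the persistence boilerplate of the Core Data era.
@Model
final class Note {
    var text: String
    var createdAt: Date

    init(text: String, createdAt: Date = .now) {
        self.text = text
        self.createdAt = createdAt
    }
}

struct NotesView: View {
    // @Query keeps the list live against the store: no fetched-results
    // controllers, no delegate callbacks, no manual reloads.
    @Query(sort: \Note.createdAt) private var notes: [Note]
    @Environment(\.modelContext) private var context

    var body: some View {
        NavigationStack {
            List(notes) { note in
                Text(note.text)
            }
            .toolbar {
                Button("Add") {
                    context.insert(Note(text: "New note"))  // persisted automatically
                }
            }
        }
    }
}

@main
struct NotesApp: App {
    var body: some Scene {
        WindowGroup {
            NotesView()
        }
        .modelContainer(for: Note.self)  // wires storage into the environment
    }
}
```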
That 4–5x developer speedup is in the same realm as some of the claims people are now making about LLM coding.
I find those extravagant claims about LLMs dubious. •Highly• dubious. I just haven’t yet seen evidence that makes me believe there are gains on that order from LLMs for anything other than “this dev tool is so unfamiliar I don’t even know where to start” or “I have a highly menial, bog-standard coding task where correctness is not crucial.”
3/
I’m not even convinced the net productivity change from LLM coding is •positive• if we’re talking about bringing code all the way to production, much less maintaining it. Maybe this time the silver bullet is real, unlike aaaall the past times, but I sure am skeptical.
Yet…here I really •am• seeing those kinds of many-fold gains, gains that (unlike LLMs!) come with •increased• code reliability and maintainability — and from what?
From the slow, hard engineering work of tool improvement we’ve been doing all along:
4/
@inthehands the only thing that's keeping our codebases on the rails in the AI era is the rigorous system of checks and balances provided by our tooling and CI system.
Language improvements. Library improvements. Incremental fixes. Drastic redesigns. UX and design thinking applied to dev tools (cf “clarity at the point of use”). Dragging good research that arduous last mile into production-readiness (e.g. ML family’s Option / Maybe types sugared into ergonomic usability as Swift optionals; functional reactive programming becoming declarative rendering). The arduous push toward memory safety, type safety, data race safety. The arduous work of getting a community on board with all these changes.
Better abstractions. Better tools.
5/
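(One concrete instance of the “functional reactive programming becoming declarative rendering” point above, sketched from scratch for illustration rather than taken from any real project: the view below is just a function of its state, with no target-action plumbing or manual label updates.)

```swift
import SwiftUI

// Declarative rendering: describe what the UI looks like for a given state,
// and let the framework re-render when the state changes.
struct CounterView: View {
    @State private var count = 0

    var body: some View {
        VStack {
            Text("Count: \(count)")            // always reflects the current state
            Button("Increment") { count += 1 } // mutating state triggers a re-render
        }
    }
}
```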
I’m once again finding my way to the same train of thought Fred Brooks laid out in “No Silver Bullet” in 1986. He didn’t quite say it this way, but his essay basically makes the case that the real silver bullet is time, creativity, and hard work.
That ought to be obvious, but in this age of off-the-chart AI marketing hype, it feels like a radical thing to say.
6/
In this rush to whip investors into a frenzy
— and let’s be clear, it’s •investors• this whole AI frenzy is targeting; developers, executives, and even whole companies are merely pawns in the ploy to create fantastical narratives of future growth —
I’m not just concerned about the unprecedented ocean of slop code that’s going to make its way into production. I’m concerned about our ability to keep doing the arduous, gradual engineering work that •does• truly help us develop software more efficiently and more effectively.
7/
@inthehands Things which ought to be obvious bear repeating, to check that our premises haven't changed.
Brooks' No Silver Bullet and Mythical Man-Month remain relevant critiques and mostly unlearned lessons; an indictment of the information technology field as practiced in the world.
NSB is absolutely worth checking. If you make the "how" as easy as you can, you're grappling with the "what".
Opinions vary on the division between "how" and "what". ("dev" and "product"?)
If better abstractions are what bring the state of the art forward, what happens when we replace the process of abstraction-making with automated slop-filling?
In the most wildly optimistic versions of what LLM-generated code can do (optimistic from the AI vendors’ POV, anyway), in a future where we’re all just vibe coding, it’s hard for me to imagine any scenario where underlying engineering improvements don’t just grind to a halt.
We make better abstractions by •thinking about our problems•.
8/
A thought experiment: what would code look like today if we’d had the best AI of today, but only the programming languages of 1955? Would it even be •possible• to build an iPhone??
And what if the coming Vibe Coding Future is (as I believe) preposterously oversold? Then we have a generation of developers who’ve avoided doing the kind of wrestling with problems one has to do to find one’s way to engineering improvements.
9/
Either way, this doesn’t feel like a path to a better engineering future. Vendors of AI coding tools aren’t inviting us to use their pattern-repeating tools to •augment• our ability to imagine and engineer better things for humans; they’re asking us to •lose• our ability. They’re asking us to cannibalize ourselves. In this AI-driven dystopia, our engineering brains end up like the humans in Wall-E: unable to stand on two feet.
10/
But it’s worse than Wall-E, because at least in the movie their anti-grav chairs moved them around.
Remember, all LLMs can do is recognize and repeat patterns in their training data. Tool innovation doesn’t magically emerge anywhere in the ouroboros of LLMs training on LLM-generated code trained on LLM-generated code.
To embrace the AI vendors’ future is to take the current engineering state of the art, the spot on the road where we stand now, and brick ourselves in.
11/
I hope to heavens this AI bubble bursts so that we can finally start looking at these tools as they actually are, figuring out what, if anything, they’re truly good for, approaching them with the curious and critical eye their vendors desperately want us not to apply right now.
In the meantime, we’re in for some rocky years: wild cycles of hype-driven layoffs followed by panicked toxic waste code cleanup jobs. As if the worldwide march of authoritarianism weren’t enough, we have to deal with this.
/end
@lesley
I think a lot about what I should be doing for my students to prepare them for the LLMs they’re going to encounter. “You must use the tool” is not a thought that crosses my mind. I’ve had them study and critique the output to good effect on a few occasions. I make a point of nudging (or shoving) them to learn the alternatives (e.g. reading documentation instead of expecting GPT to magically answer questions), and to be active and curious instead of soaking up the passive mode LLMs encourage.
I think I need to have them spend more time studying, understanding, and fixing code that’s wrong. •That• is going to be a useful skill.
@inthehands I was forced by my university courses to use those tools 😬. Putting my own aversion aside, I think they are useful for setting up boilerplate, and also helpful from time to time when I am stuck. However, anything "agent"-style, where the LLM performs tasks automatically for you, usually turns out to be more of a time sink in the end
@va2lam
It’s a slow trickle out of the lab, sometimes it takes decades, but research really can pay off!
@inthehands super interesting take to hear for someone who designs research prototype tools, thanks!
@jmeowmeow
I’m of a mind that the “what” necessarily extends deep into dev, and attempts to make product purely own the “what” are a misguided MBA fantasy rooted in a lack of understanding of what software really is and how it happens. The answer isn’t to create a how/what wall, but rather to improve communication and mutual understanding along the whole chain and between all the roles people play.
@dpnash
My first language was Applesoft BASIC, and I’m chuckling grimly to myself imagining someone trying to vibe an LLM into building a modern phone out of mounds of code in that language.
@inthehands I can’t comment directly on 1955, but can on 30 years later, in 1985-ish.
Not a chance of an iPhone (or, at least, a rough 1985 equivalent). At best, I think you *might* get some fairly nice software on a single sort-of widely used platform. But only on one, at a time when there were far more platforms, with numerous differences that mattered a lot, compared to now.
Just thinking of my own early programming experiences: There were something like 6-8 *actively used* flavors of BASIC (still being used occasionally in the early-mid 80s for commercial software, amazing as it might seem now), each one having very different ways to do mundane things like clear the screen or do pixel-by-pixel graphics. Porting a graphics-heavy program from, say, Apple II or Atari BASIC to IBM PC was obnoxious at best.
Pascal was more consistent across systems, but I remember some fairly significant differences between the Apple (IIe/old Mac) versions I learned as a kid vs. the VAX/VMS version I saw in intro CS in college.
C was also more consistent, at least on Unix boxes, but there was still an awful lot of shoot-from-the-hip coding there. The version of K&R I had in the mid-1990s (so, 10 years later or so) still had notorious buffer overflow sources like gets() in its sample code, and this wouldn’t change much till the internet and widespread publicly accessible networking raised the danger level on those a lot. An actually good AI *now* would be aware of the scope of possible problems there, but in 1985, I’m much less confident it would have been.
I don’t know nearly as much about the big corporate/research systems that mostly ran FORTRAN or COBOL, and I *suppose* that those environments might have been more consistent and thus a bit better for an AI project like that, but I have my doubts.
ADDENDUM:
This Susan Sontag quote feels relevant. Imagine for a moment that she’s talking about engineering in the age of AI hype. She isn’t, but imagine for a moment that she is.
@inthehands I suspect that answering the question "why is it the _what_, _how_, and _why_ of development are at odds" heads straight into the way human systems get stuck in bad states.
Having the words for the intentions, expectations, and qualities of the work can help people at least not talk past each other.
The field in practice has shipped a million leaky prototypes that demo'ed well. Time to market? Time to capture funders' attention? Those bring their own pressures and problems.
I think of Kingsbury's Jepsen project as a model of asking whether the "what" was actually delivered by testing the fundamentals.
@inthehands We've had some fairly deep conversations at work recently about coding AI and the main talking point is: how do we hire graduates/juniors in the AI age? We understand that it helps productivity for very senior engineers who may want to automate parts of their jobs, but how do you balance that against hiring someone who is going to constantly generate tech debt, won't be able to think critically, and won't be able to review code? We're worried that there will be some cutoff point circa 2023 where devs stopped thinking.
@inthehands The one report of LLM-generated / LLM-using code that I found most striking was from Jon Udell, related to generating an app for converting a photograph of an event flier into a useful calendar entry.
Partly it was striking because Udell has been a long-term advocate and a developer of community-run local information resources including calendaring, which have mostly been absorbed by Big Social Media.
The app combines small deployment scale and low cost of error (hand-fix the calendar) with toil reduction around composing APIs -- and makes something possible by making it easy. Not an app I'd want to maintain for others, but a technique for bringing tedious things in reach, so they happen on a relevant time scale.
@inthehands this! So much.
Re this from @rotnroll666, Java is a great example of what I’m talking about upthread:
https://mastodon.social/@rotnroll666/114374427369696039
I was writing Java as early as 1997 and as recently as 2024. Observations:
(1) Modern Java can feel •very• different from the language of 1997, or even 2010. You can do really nice work with the language.
(2) The language is still weighed down by some essential design decisions that would be •very• hard to unroll at this point. It’s also weighed down somewhat by culture. (Ask me about either if you care.)
(3) Its long-term stability is unparalleled. Code that takes longer to write but still runs without modification 20 years later? That’s a •fantastic• tradeoff for a lot of projects out there!
Achieving (1) while navigating the constraints of (2) and holding on to (3)? That’s a real achievement — and the kind of work that just chokes up and dies in the sea of information pollution of a “vibe-coded” future.
@inthehands I love this entire thread. Thoughtful and precise. (Also happy that you’re having such a pleasant dev experience right now).
I hate that they stole “vibe coding”. Vibes are the seemingly arcane quick architectural decisions you can make after you’ve gone through the Aristotelian process of letting a code base come to rest in your soul. Vibes are when you can *feel* what needs to be done because you *know* the stuff. Give us back our real vibes.
@serpentroots
Not if this computer science instructor can help it!
@inthehands
"What will be left when OpenAI burns is infrastructure that players like Palantir will use because their problems fit the hardware and their business model can create the necessary money from governments all over the planet.
The AI crash won’t leave us with infrastructures that are useful to democratic and humane societies, with useful tools to do something productive with, but with infrastructures tailor-made to suppress it."
@tante
https://tante.cc/2025/04/15/these-are-not-the-same/
@lritter
I think it exists primarily because there’s a •massive• marketing effort behind it, targeting investors on the idea of preposterous future growth.
@inthehands indeed preposterously oversold. it's one of these things where i'm confident it's not my age that biases me but the lack of quality in the tech. i'm not against the idea of "vibe coding" but i don't see that we're there yet. we ain't there by a long shot.
it probably only exists because a lot of us are lonely, and a simulated companion is still a companion.
@andy_warb
As someone who teaches these people, who until recently were kids raised on phones and tablets: yes, a lot of them don’t really know what a file system is! There are things I have to teach like “you should pay attention to •where• you saved the file” that I didn’t used to have to teach.
But also: the doors to computing are open wider now than they were 30 years ago, and students can and do learn once they’re through the doors. This hypothetical vibe coding future feels different to me: not just a lack of knowledge, but a lack of acquiring the •ways• of thinking and seeing that are the foundation of engineering. LLMs or not, we have to work on fostering those.
@inthehands it’ll be similar to what we’re seeing with computers in general - kids being raised on tablets don’t know how to use a real computer, they don’t understand basics like file systems because everything is an app!
Really appreciate this from @aurorus. A broken liability regime is a huge part of the sales pitch for AI-generated coding. We could fix myriad problems by firmly establishing that every company is responsible for what it does, no matter to whom or to what it delegates its decisions. (See the classic “computer can never make management decisions” image)
@ShadSterling I think the past you describe is largely mythology. It’s easy to turn up people in the 1960s complaining about all the bad code running around in business when there were only a tiny handful of people in the world who even had access to a computer.
@inthehands didn’t we already have a problem with more and more programmers avoiding that kind of wrestling? Between the hype about high paying jobs and the economic context squeezing everyone harder to the point where an ordinary job can barely support a single person, my sense of it has been that what was once a small group of people inclined to focus on that part has been overtaken by a much larger group more interested in a better wage, whether by necessity or by disposition
@faassen @dpnash
I think that’s half true, but it ignores the extent to which getting from “very good idea” to “practical” is a massive, difficult, genuinely deep problem. Swift’s optionals are my go-to example (pun intended): the ML-style Option / Maybe was there fully materialized in…what, the early 80s? And Swift optionals are fully isomorphic to it in every meaningful way. But they are so, so, so, so much more practical to work with. Why? Better sugar, better docs, better work on bringing library APIs up to speed, better ecosystem, etc etc. Curmudgeons out there who say “ML has done this forever!” but fail to ask •why•, if that’s true, it took 30+ years to enter the mainstream are experiencing a failure of curiosity.
So yes, hardware. But lots of other things too.
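(A small sketch of what that “better sugar” buys, with types invented purely for illustration. The semantics are the same as ML’s Option; the difference is how little ceremony the common path needs.)

```swift
// A toy type with an optional field.
struct User {
    var nickname: String?
}

// Optional chaining plus nil-coalescing: the sugared, everyday Swift path.
func greeting(for user: User?) -> String {
    user?.nickname ?? "stranger"
}

// The desugared equivalent, closer to what a bare Option/Maybe type plus
// pattern matching looks like. Same meaning, much more ceremony.
func greetingDesugared(for user: User?) -> String {
    switch user {
    case .some(let u):
        switch u.nickname {
        case .some(let name): return name
        case .none: return "stranger"
        }
    case .none:
        return "stranger"
    }
}
```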
@inthehands
What we have seen in the modern era to enable the iPhone, I would argue, is far more capable hardware. But I think capable languages and, in part, libraries existed back then; they just were not common on home computers.
(I think networking and i18n made great strides forward since then)
@inthehands
Smalltalk and Lisp existed in 1985. Important ML style languages like CAML and Miranda appeared in 1985.
It wasn't so much that the languages didn't exist as that their ideas were not widely distributed. If the hardware was up to running them efficiently in this thought experiment (which it would have to be, to be fair), and we ignore historical platform differences too, as we can for an iPhone, I totally think the languages would have been up to the job. But few people used them.
@zenkat @venite They’ve been stealing the good words since forever. Vibes. Unicorn. Iterative. Hacker.
@venite @inthehands Their original sin was stealing "unicorn".
Real unicorns want nothing to do with your Blockchain AI startup, bro.
@inthehands @aurorus A couple of footnotes on the CRA and liability.
If you put a piece of software on the EU market commercially, you're responsible for due diligence re vulnerabilities *and* for managing any vulnerabilities that get reported to you (with a list of things you must do about reporting, mitigation, etc.). There are some exemptions, such as noncommercial open source software and software for specific sectors - for example, products for marine use are exempted from the CRA, because they're already covered by the (stricter) E27 regulations.
What if the vulnerability is in a component you brought in from elsewhere (that is, the component itself and not your integration of it)? Well, there are several possibilities:
1. If it's a noncommercial open-source project you've put in a commercial product, that's *your* responsibility. You have to provide a fix or mitigation. You can ask the maintainer to do so, but they're under no legal obligation. And hey, it's open source - you have the code.
2. If it's a software product that is itself commercially placed on the EU market, then the developers of that are bound by the CRA too, so the responsibility moves to them. You can breathe a sigh of relief for now, though you're obligated to distribute the fix when you get it.
3. If it's a software product that isn't commercially placed on the EU market (eg. something you bought from a non-EU entity which isn't offering the product generally on the EU market), then it's *your* responsibility. Sucks to be you if it isn't open source.
It doesn't really matter whether the code was written by a human, a bespoke code generator, an LLM or a space alien: If you signed off on it and put it on the EU market, *you* are responsible for its security.
@kittylyst @rotnroll666
Value types, generic type erasure, and null are three of the things on my list of probably-intractable problems. Sounds very interesting!
@inthehands @rotnroll666 If you aren't already familiar with it, Project Valhalla is an OpenJDK project to take a second bite at some of those fundamental design decisions (e.g. split between primitives and reference types, generics, null-safety) by tackling them as a unified problem space and trying to reimagine them all at once.
Worth a deep dive on, IMO. It's been a long time in the making, but it could be the most profound change in Java's history.
@inthehands @aurorus (Straight from a CRA compliance workshop organized by the EU commission and my country's standards agency two weeks ago. I attended right before going on easter break. :) )
@faassen @dpnash > the question is whether it's primarily a product of improvement to programming language and environments rather than hardware
I’m arguing that yes, absolutely, to a very large extent that’s exactly what it is. There’s nothing about Swift optionals that couldn’t have run on 1985 hardware.
> it became successful using Objective C, which goes back to the 80s
Unclear referent: what is "it" here??
> I do believe language innovation is empowering, for me personally more than llms
Same, 100%
Incremental improvement of language and ecosystem plays a part. I mean, of course the iphone is a product of cumulative improvement.
But the question is whether it's primarily a product of improvement to programming language and environments rather than hardware. And well, it became successful using Objective C, which goes back to the 80s. (but also refined from nextstep for a long time)
I do believe language innovation is empowering, for me personally more than llms
@inthehands
Thanks for putting your findings up front!
@dpnash
Like was garbage collection only refined and mature enough for mainstream adoption when Java came along? Were option types really unusably cumbersome in Haskell until Swift came along?
@faassen @dpnash
> Were option types really unusably cumbersome in Haskell until Swift came along?
YES.
I also think that an important factor in why certain ideas didn't spread faster is not that the implementations weren't good enough but that programmers don't want to throw out everything for something new that they don't yet understand or might not want to understand. Plus big corp backing a new language ecosystem can make a huge difference in adoption.
But for me personally certain programming languages have been more empowering than llms.
@faassen
> Your argument about Swift optionals assumes Swift's optionals, which I have no experience with
Then I am telling you there is something for you here to learn, which it is not my job to teach you. Go study up, use them a bit, and ask: “What problem did they solve that ML did not that finally got this excellent idea into the mainstream?” (Hint: it had nothing to do with hardware speed.) If you refuse to believe that there was a real problem, you’ll stay stuck in being angry at the world for not being as enlightened as you. If you believe that there •was• still a problem to be solved, you can be a part of improving the world.
@dpnash
iphone became successful on the back of Objective C.
Your argument about Swift optionals assumes Swift's optionals, which I have no experience with, is the implementation that finally made them usable. I think that hasn't been demonstrated. Other factors like massive backing by Apple may have played a part, say.
@inthehands @rotnroll666 I can't enjoy Java one bit. But I can absolutely get behind everything you just said.
It's an amazing achievement.
@inthehands
I will hope though that you will come to understand why your comment is unfair and makes me feel sad. Evidently I made you angry enough for you to reach for such a rhetorical hammer. I am sorry I pissed you off; that was not my intent, I enjoyed thinking about this topic even if we disagree.
I will bow out of this thread.
@faassen
I'm sorry, Martijn, I do not want to make you feel sad. Your replies had become a bit combative — “that hasn't been demonstrated” etc — and it made me a bit angry to learn that you were arguing with me about something you hadn’t even used.
@inthehands
This is not a kind comment and makes assumptions.
I have used the option type for a few years in Rust now and that's the first time I used them extensively. Don't mistake my awareness of their history as a pretense at hipster enlightenment.
If you are going to make a broad historical argument don't start lecturing people who quibble. You could thank them instead for helping you refine it.
I read the Swift docs during this discussion, which is the best I could at short notice.
@isaackuo
Imagine it on a 6502 then.
Could an LLM assemble increasingly large balls of assembly into increasingly complex applications? Maybe.
Would it have led to the layers of abstraction we now use to shape and reason about modern software? I don’t think so.
@inthehands I find this thought experiment interesting, but I'm not familiar enough with 1955 computer hardware architecture and programming languages to really say.
I am familiar with how early microprocessors did things, though, and I think an LLM could do the job if there was sufficient (human-generated) training data for it to copy/paste an existing human-written implementation of a smart phone.
Which is where I ditch AI and get sidetracked into pondering 8 bit smart phone design ...
@inthehands @kittylyst @rotnroll666 One can mostly avoid null, but type erasure for sure is a fundamental pain that puts heavy restrictions on what the type system can ever evolve to. The problem isn't even with Java the language, but with the JVM, or rather with Java bytecode, thus limiting _all_ JVM-based languages!
@OmegaPolice @kittylyst @rotnroll666
The trouble with null is APIs: a variant type approach instead of null is extremely unpleasant with APIs that weren’t designed for it from the start and make you deal with nulls that can’t actually happen (as Swift 1.0 found out).
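(A rough illustration of that Swift 1.0-era pain and how nullability annotations later dissolved it. The `LegacyStore` types below are invented stand-ins, not real APIs.)

```swift
// Stand-in for an un-annotated Objective-C API as early Swift imported it:
// every returned pointer arrived as an implicitly unwrapped optional,
// because the compiler couldn't know whether nil was actually possible.
struct LegacyStoreUnannotated {
    func currentUserName() -> String! { "pat" }  // nil? maybe? who knows
}

// The same made-up API after nullability annotations were added to its
// headers: the importer now produces an honest type.
struct LegacyStoreAnnotated {
    func currentUserName() -> String { "pat" }
}

func banner() -> String {
    let old = LegacyStoreUnannotated()
    let new = LegacyStoreAnnotated()

    // Pre-annotation style: force-unwrap and hope, or defensively handle
    // a nil the documentation swears cannot occur.
    let oldName = old.currentUserName() ?? "stranger"

    // Post-annotation style: just a String, no unwrapping, no ceremony.
    let newName = new.currentUserName()

    return "Welcome, \(oldName) and \(newName)"
}
```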
@inthehands @kittylyst @rotnroll666 True.
Does "switch to Kotlin for such parts of your code" count as workaround? 😬