@inthehands I can’t comment directly on 1955, but can on 30 years later, in 1985-ish.
Not a chance of an iPhone (or, at least, a rough 1985 equivalent). At best, I think you *might* get some fairly nice software on a single, sort-of widely used platform. But only on one, at a time when there were far more platforms than now, with numerous differences that mattered a lot.
Just thinking of my own early programming experiences: There were something like 6-8 *actively used* flavors of BASIC (still being used occasionally in the early-mid 80s for commercial software, amazing as it might seem now), each one having very different ways to do mundane things like clear the screen or do pixel-by-pixel graphics. Porting a graphics-heavy program from, say, Apple II or Atari BASIC to IBM PC was obnoxious at best.
Pascal was more consistent across systems, but I remember some fairly significant differences between the Apple (IIe/old Mac) versions I learned as a kid vs. the VAX/VMS version I saw in intro CS in college.
C was also more consistent, at least on Unix boxes, but there was still an awful lot of shoot-from-the-hip coding there. The version of K&R I had in the mid-1990s (so, 10 years later or so) still had notorious buffer overflow sources like gets() in its sample code, and this wouldn’t change much till the internet and widespread publicly accessible networking raised the danger level on those a lot. An actually good AI *now* would be aware of the scope of possible problems there, but in 1985, I’m much less confident it would have been.
I don’t know nearly as much about the big corporate/research systems that mostly ran FORTRAN or COBOL, and I *suppose* that those environments might have been more consistent and thus a bit better for an AI project like that, but I have my doubts.