@jae@fsebugoutzone.org I might be behind the game here, but… FFS, for real? 😲 Sad indeed! Well, the TUI gomuks still works for chats — in fact, it works better than iamb, a similar TUI Matrix thing in Rust.
@jae@fsebugoutzone.org I'm not going beyond 1.20 with Go, but 1.20 builds gomuks just fine — and it runs even on my Pentium-M ThinkPads. Not certain about the 486 though: gc could never target anything older than i686 to my knowledge, gcc-go would have trouble building it, and besides, libgo probably won't work anyway…
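If anyone feels like trying anyway: as far as I know, modern gc requires SSE2 for GOARCH=386 by default, but there's a soft-float mode that drops that requirement. A sketch, not something I've tested on real hardware, and the runtime may well assume something newer than a 486 regardless:

```
# GO386=softfloat (Go 1.16+) avoids the SSE2 requirement of the 386 port
GOOS=linux GOARCH=386 GO386=softfloat go build .
```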
@jae@fsebugoutzone.org I'm not giving up on it yet, Wikipedia claims the 486 is still supported… VESA framebuffer for the win! We can still load a colour emoji font and use gomuks, profanity, amfora, w3m, chawan, tut, cmus… on this wonderful machine! 🤩
BTW "To add to its multimedia capabilities, IBM commissioned Canon to manufacture an optional webcam that connects to the PC 110 via the PC Card slot" A webcam, can you imagine that? Processing any kind of video today on such a CPU is unconceivable. I remember using my ThinkPad T40 for Skype calls — it didn't have a webcam, but the one on the other end might, and it all worked beautifully. Nowadays it would probably shit its pants just attempting to load any WebRTC-cabable application 😩
@m0xEE@jae@TeaTootler I ran X on a 486SX just fine. I think you'd need to use swap if you wanted to get much done on it. I don't know of many distros that fit in 260MB, though; might need to get a larger CF card. If I ask DistroWatch for active distros with a 386/486 option, it suggests Damn Small Linux, Tiny Core, and SliTaz, but SliTaz says it's "designed to run speedily on hardware with 256 MB of RAM". For something that small, you would probably be better off rolling your own. I tried to boot hal91 (`qemu-system-i386 -m 4 -vga std -enable-kvm -fda images/hal91-0.4.5.img`) and the first thing it does is try to create a few 4MB ramdisks, which it fails to do. It gives you a shell if you toss it 20MB of RAM instead of 4, though. But that's Linux 2.0.39.
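That is, the same command boots fine once you give it the memory:

```
qemu-system-i386 -m 20 -vga std -enable-kvm -fda images/hal91-0.4.5.img
```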
You'd probably get more mileage out of it if you put Plan 9 or Inferno onto it. Inferno specifically can work okay with 2MB of RAM, even $current_year Inferno (last time I checked), and although it's Unix-*like* rather than Unix, you've got a shell and a compiler and sed and grep and network utilities and whatnot. (I do not know how well the JIT works without hard-float but I also think 4MB might not be enough memory to make use of the JIT. :dracula:)
There's also always "Just put FreeDOS on it" and a lot of weird OSes use DOS as a bootloader. colorFORTH environments, etc.
@TeaTootler@poa.st Void supports 32-bit Intel, albeit only i686; building the base system for i386 probably won't be a problem. I still wonder how today's Linux kernel alone would fare on just 20 megabytes: no one's targeting such systems anymore, even embedded devices and routers have more RAM than that 😅 Also, AFAIR the 486 SX could only address a 16-bit address space, so that is likely to be a problem too. Or was that the case with the 386 SX and DX? 🤔
@TeaTootler@poa.st You're right about the 486 and the FPU! In the case of the 386, though, SX meant a 16-bit data bus: https://en.wikipedia.org/wiki/I386#80386SX "The 16-bit bus simplified designs but hampered performance. Only 24 pins were connected to the address bus, therefore limiting addressing to 16 MB"
I do remember there being a caveat to it — their naming is fscked up, never without a surprise! 🤪
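The arithmetic checks out, at least: 24 address lines means 2^24 addressable bytes.

```
echo $((1 << 24))  # 16777216 bytes = 16 MiB, the 386SX ceiling
```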
@zero@TeaTootler@jae@m0xEE Oh, yeah, I mean, I use my RISC-V DevTerm a lot: single-core, 1GHz, 1GB RAM. I rolled a Slackware image for it. Other than the fact that the only browsers I have installed are w3m and netsurf, everything that I do works just fine. I figure even with Debian, I could get by all right on an old G4.
I think, give it maybe ten years, there'll be more RISC-V than ARM on the low end and it'll be creeping up to the middle, maybe some of the high end. It's got the same characteristics that made x86 eat the world.
> we dont need secret computers running inside the main computer.
@RedTechEngineer@jae@p@m0xEE@TeaTootler 4MB of RAM is probably doable with a kernel config, but 260MB of storage is much harder since the stage3 tarball is only a little bit smaller than that. And the limiting factor is probably python.
@phnt@jae@p@m0xEE@TeaTootler Maybe cheating, but you could sidestep the storage issue by temporarily mounting a network volume. If you need more memory, just mount a Google Drive volume or a floppy disk for swap :D
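Something like this ought to do it, assuming the volume is mounted at /mnt/remote (a made-up path); whether the kernel will actually accept swap on a network filesystem is another question entirely:

```
# sketch: create and enable a 64MB swap file on the mounted volume
dd if=/dev/zero of=/mnt/remote/swapfile bs=1M count=64
chmod 600 /mnt/remote/swapfile
mkswap /mnt/remote/swapfile
swapon /mnt/remote/swapfile
```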
I had a little iPAQ, and when Compaq ported Linux to it, I went for it. One of the things I wanted to do was be able to write programs, and there was just 32MB of storage, so gcc didn't fit and Perl didn't fit. I had an assembler, and there were packages for two other languages that fit in a couple of megs: Python and Ruby. I read a little of the Pickaxe Book and was basically horrified by the typical OO "dog is a kind of animal" tutorial shit in it, so I looked at a Python tutorial, saw the syntactically significant whitespace, and that is how I learned Ruby.
I think, though, give it ten years. I love ARM, but since ARM chips are all made under license, ARM's main strength in the marketplace is holding its IP. But now here's an ISA that is also low-power RISC, without any IP encumbrance and with plenty of manufacturers; it runs Linux, it does embedded systems, etc.
@waff@phnt@RedTechEngineer@TeaTootler@jae@m0xEE Or just...not Linux. I know this idea shocks a lot of people but if you are on specialized hardware where regular Linux isn't going to work anyway and you've got to use a heavily modified system, you might have an easier time using something that isn't Linux. $current_year glibc doesn't work on old-timey 2.x Linux.
That having been said, there are semi-maintained forks already. I don't know where to find them because I wouldn't buy an ancient machine just to run $current_year Linux slop on it.
@phnt@RedTechEngineer@jae@p@m0xEE@TeaTootler this is why we need more forks of legacy Linux kernels: ones that have no Rust, ones that are 2.x with patches, etc. It'd immensely help compatibility and size restrictions
@p@RedTechEngineer@phnt@jae@m0xEE Heh meanwhile these days it's perl eating ~36MB, python eating ~22MB, ruby eating ~16MB. And lua eating just ~420KB. (Using Alpine x86_64 packages for the sizes, without forgetting libruby and liblua)
@p@RedTechEngineer@jae@m0xEE@phnt Ooh tcl is ~4.1MB, still seems pretty big (busybox is ~800KB, and even bash is 1.3MB) but compared to perl/python/ruby it's pretty good.
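(For anyone who wants to reproduce the numbers: if I remember the apk syntax right, -s prints installed sizes. Package names are today's Alpine ones:

```
apk info -s perl python3 ruby lua5.4 tcl bash busybox
```
)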
@lanodan@RedTechEngineer@jae@m0xEE@phnt If I do `pkginfo -l tcl`, it includes a lot of headers, sqlite3 library, some internationalization stuff, a massive number of man pages, etc. I imagine that the ipk file for it wouldn't include most of that stuff.
@lanodan@RedTechEngineer@jae@m0xEE@phnt That's weird, yeah. Tcl is one of those languages where you can do a lot of application programming or glue code and get, like, 90% of what anyone wants at 90% of the speed in less than 1% of the space (and probably less than 10% of the effort), same as sqlite3. I think TinyCore uses Tcl/Tk for a lot of its GUI (or used to).
@p@fsebugoutzone.org I've just given up on Go at this point — not only the implementation, but the language itself keeps changing too fast. With Rust, no one promised stability, and even Rust isn't that bad. God knows, I tried keeping up with it — 32-bit PowerPC is no longer supported, but I've built the latest version of gcc-go that works on the machine this instance is hosted on, which is Go 1.10. I've backported Bloat to it and it works fine on this machine, but beyond that, backporting is too much effort.

And it's not about essential things: it's no longer the language that "The Go Programming Language" book is about, it's something else entirely. Like "Ya-ay, let's re-do iterators" — and no, some internal unification in the standard library does not sound like a good enough justification to me. But the Go community seems to like it that way: go to HNews and you'll see articles like "I redid everything with new iterators, so now you can't build my module with an older toolchain". WTF did you do that for, just because you could?

And it seems a lot of people weren't keeping up, so they added an option to put a line in go.mod that forces a particular toolchain version, which… gets downloaded from the Internet — just like that. No, Google, it's not supposed to work like that, so fuck you! Just fuck you! In a lot of environments you can't do things like that, and I can't do it like that, because you have dropped support for the machines I want to run my tiny piece of software on. I'm not a corporate samurai who builds only for the latest Intel and 64-bit ARM, hoping that it might also work on something else. So I'm out!
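To be fair, the auto-download can at least be refused; as I understand it, setting one environment variable makes the build fail loudly instead of fetching a toolchain behind your back:

```
# GOTOOLCHAIN=local (Go 1.21+) forbids automatic toolchain downloads:
# if go.mod demands a newer toolchain, the build errors out instead of fetching it
export GOTOOLCHAIN=local
go build .
```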
Go 1.20 might seem chosen almost arbitrarily, but it can build all the software in Go that I still use. I'm not relying on any other software in Go and not using it for my personal needs because it's cancer — Google has killed it for me.
@m0xEE@p@jae@TeaTootler >But Go community seems to like it that way, go to HNews and you'll see articles like "I redid everything with new iterators so now you can't build my module with older toolchain". WTF did you do that for, just because you could? Gitea in a nutshell. Unbuildable piece of software unless you are using latest-2 and sometimes not even that.
@m0xEE@TeaTootler@jae@p I thought that Forgejo would at least stop doing that. Which they did with their LTS versions, but they also didn't fix annoying bugs in them, so it became nearly unusable.
>A few packages in the standard library provide iterator-based APIs
The standard library is about APIs now. Even the presentation regarding iterators is horrible, with the new `yield` keyword/function type (which is a Python name, btw) being buried mid-article with minimal explanation.
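For anyone who hasn't looked at it yet, this is roughly the shape of the new style. A minimal sketch (Evens is a made-up example), and note it only compiles on Go 1.23+, which is exactly the compatibility problem:

```go
package main

import (
	"fmt"
	"iter"
)

// Evens returns an iter.Seq: under the hood just a function that
// calls yield once per element and stops early if yield returns false.
func Evens(limit int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for i := 0; i <= limit; i += 2 {
			if !yield(i) {
				return
			}
		}
	}
}

func main() {
	// range-over-func syntax, new in Go 1.23
	for v := range Evens(10) {
		fmt.Println(v)
	}
}
```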
@phnt@TeaTootler@m0xEE@p they're also shoving mcp into the system now for all the .ai worshippers. gitea has really gone commercial. forgejo isn't much better but im not sure what you mean by nearly unusable. ive got forgejo running fine for about a year. ci/cd is nearly always building and deploying for our group.
we tend to work on latest go so version bumps and new patterns aren't big problems.
@jae@laurel@TeaTootler@m0xEE Pretty sure Ruby's yield predates Python's and came from Smalltalk (or I remember it being in Perl, I think), but I also think it's a little out of place in Go.
@p@TeaTootler@laurel@m0xEE i couldn't remember when it was introduced. it was a really handwavy thingy in python. then again ive only written .rb since ~2011 so maybe i glazed over it for years.
@RedTechEngineer@jae@p@m0xEE@TeaTootler I tried :arch:32 on a 1.6GHz Celeron netbook about a year ago and there were some packages that weren't rebuilt in months - the python toolchain? my memory is a bit hazy but I think it caused a few issues with AUR packages
@waff@RedTechEngineer@phnt@jae@m0xEE@TeaTootler Seriously: serial terminal (19200 8n1), Linux machine, drawterm -G into the Plan 9 box, fse/bbs to get to the ssh interface, and I am posting from a 1983 computer.
>they're also shoving mcp into the system now for all the .ai worshippers
Imagine not knowing how to use git.
>forgejo isn't much better but im not sure what you mean by nearly unusable. ive got forgejo running fine for about a year. ci/cd is nearly always building and deploying for our group.
It's a collection of small bug fixes they didn't backport from upstream that drove me crazy.
LDAP auth was broken for more than a month with a fix already in Gitea (maybe still isn't fixed). At work we depended on this, so I had to build custom images after every update.
Mirrors with LFS objects would seemingly randomly balloon in size. This one annoyed me for months until it got fixed in some release.
And lastly, after the addition of Forgejo actions, mirrors would start to have failing pipelines that got enabled automatically even though zero runners were configured. The solution was either to go through all 30 mirrors and disable actions manually for every repo, or to disable actions completely in settings. The failing actions wouldn't be that big of a deal if Forgejo cleaned them up properly from the DB. A fix for this with `gitea doctor` has been in upstream for 5 months and it still has not reached Forgejo v7.X, which is the first affected version I think.
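(For reference, the doctor interface looks roughly like this; I don't remember the exact name of the check for orphaned actions data, so treat the flags as approximate:

```
gitea doctor check --list                    # list available checks
gitea doctor check --run <check-name> --fix  # run one with automatic fixing
```
)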
One day, I got fed up with Forgejo thanks to Chinese scrapers downloading bundles and tarballs, which created 30GB of orphaned archives; I compared the Forgejo v7 and Gitea 1.21 (the base for Forgejo v7) schemas and manually reverted the migrations. Did the same a few days later at work, and that solved all the problems I had with Forgejo. Plus the whole relicensing thing with Forgejo rubbed me the wrong way. They tried the LibreOffice EEE and failed. Companies still prefer Gitea or even GitLab (we use both, for reasons).
> It's a collection of small bug fixes they didn't backport from upstream that drove me crazy.
do you have a list besides what you listed here? i maintain my own fork and can bring them forward.
> LDAP auth was broken for more than a month with a fix already in Gitea (maybe still isn't fixed). At work we depended on this, so I had to build custom images after every update.
interesting that people use this for work. i think forgejo is a bit behind since they are volunteers and not paid coders
> Mirrors with LFS objects would seemingly randomly balloon in size. This one annoyed me for months until it got fixed in some release.
i saw this too, but i only mirror one repo so it wasn't a big problem.
> And lastly, after the addition of Forgejo actions, mirrors would start to have failing pipelines that got enabled automatically even though zero runners were configured.
it sounds like the default behavior is to pull in repo attrs like actions_enabled: true. is that what you mean? i disabled actions on my system since it's not very mature. i used woodpecker which is fine atm.
> One day, I got fed up with Forgejo thanks to Chinese scrapers downloading bundles and tarballs, which created 30GB of orphaned archives; I compared the Forgejo v7 and Gitea 1.21 (the base for Forgejo v7) schemas and manually reverted the migrations. Did the same a few days later at work, and that solved all the problems I had with Forgejo. Plus the whole relicensing thing with Forgejo rubbed me the wrong way. They tried the LibreOffice EEE and failed. Companies still prefer Gitea or even GitLab (we use both, for reasons).
i can understand how this might be an issue for work/business. my business is just get shit done and hack everything. i have the luxury of laughing at licensing.
> And completely changes the service from "Store these blobs" to "execute arbitrary code".
the system doesn't exec arbitrary code. it handles git operations, drops builds, runs linters, sast/sca, full test-harness with regression suite, then deployments where applicable. it's all very intentional.
ace theoretically could occur in the ci system (not related to forgejo), however if the operator is running a ci system that allows elevated privileges to the host operating system that the runners kick on, that is problematic and a poor choice to make. i do not have that problem.
and the scraper issue people were hemming and hawing about is a non-issue when you have a waf at the edge, which we do. (servo has ascended)
i've got a three-node kubernetes cluster running on e-waste. it's nice e-waste, but e-waste nonetheless. i also gifted some to another kubernetes operator for the cost of shipping
@jae@RedTechEngineer@phnt@p@m0xEE@TeaTootler people who pay top dollar are suckers when most dying machines can easily be taken from e-waste facilities, old offices being renovated, libraries, etc.
@waff@RedTechEngineer@TeaTootler@m0xEE@p@phnt good find! had a stack of sparc10s before giving them away when i moved. i refuse to buy new unless it's something that is a requirement for an important project or it's a legitimate need (only .001% of things fit that category)
@TeaTootler@RedTechEngineer@phnt@p@m0xEE i was on a juniper poe switch but it started overheating, went with unifi so i can be cool. router is an industrial-grade quad-core machine, passively cooled, running opnsense. i've had it for 5 years now and it's like a timex.
@jae@RedTechEngineer@phnt@p@m0xEE i ran pfSense on a dell dimension pentium 4 desktop with 2 3com NICs for over 4-5 years before someone took pity on me and bought a unifi router
they're great if you have a single line. where i'm at we have 1x fiber, 1x copper, 1x lte, 1x sat connections, so having 6 ports is a requirement (two for trunk to core switch)