@p I noticed you are still using that DevTerm (very nice piece of kit) and you mentioned that you are looking into RISC-V.
I'm waiting for my VisionFive2 to come (it's taking its sweet time) and I really do think that in the next two years that shit is going to pop off, at least in foreign markets.
Right now I'm just trying to figure out how the support for RISC-V is looking, and see if I can learn more about the architecture.
Cheapest one they were selling when I ordered the uConsole back in October 2022, and it finally arrived a couple of months ago. I might not have picked it up had it not been both weird and cheap, because I have not been excited about the hardware in the past. One of my big objections, though, was that no one was making them and it remained to be seen if it'd work out, my suspicion was that it was a wank, overspecified pre-silicon that was driven by wishful thinking instead of the practical realities. Even if it worked, I expected Pentium f00f bugs and whatnot and I expected flakiness, weird shit, I expected X to crash randomly and the kernel to throw "illegal instruction" errors and the thing to hard-lock. I've been using the thing for a while and it's delightful, it's solid, the compiler doesn't get weird on me, no heatsink or fan and it never gets too hot to touch.
> since it's open, and there's no licensing involved.
> The possibilities with specialized instruction sets for different things (kind of like MMX or SSE) could get really whacky and fun.
Well, the RISC-V Foundation seems to be keeping a handle on extensions. I read an interview with one of the designers, interesting stuff, he wants to revive some of Cray's ideas and do vector instructions instead of multiple-dispatch and I'm kinda excited about that.
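The shape of it, if I have the v1.0 RVV intrinsics right (riscv_vector.h; the loop is my own toy, not anything from the interview): you ask the hardware how many elements it'll take this pass instead of hard-coding a 128-bit width like SSE does, so the same binary runs on any vector length.
```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* Vector-length-agnostic add, Cray-style strip-mining: vsetvl tells
   you how many elements the hardware will process this iteration. */
void vadd(int32_t *dst, const int32_t *a, const int32_t *b, size_t n) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m8(n);           /* up to n elements */
        vint32m8_t va = __riscv_vle32_v_i32m8(a, vl);  /* vector load */
        vint32m8_t vb = __riscv_vle32_v_i32m8(b, vl);
        vint32m8_t vc = __riscv_vadd_vv_i32m8(va, vb, vl);
        __riscv_vse32_v_i32m8(dst, vc, vl);            /* vector store */
        a += vl; b += vl; dst += vl; n -= vl;
    }
}
```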
I mean, see attached for some grounding (hardware should be boring), but something cool might be happening. Cautiously optimistic.
> I'm thankful to hear that;
Oh, yeah, everything's easy to work with, etc. It's beautiful.
> I'm a little wary
:reeEEE:
> I remember getting into ARM when it was fairly new and getting no support for most shit for over like 2 years. (this was VERY early on)
Oh, yeah, it was a little rough. (I don't know how early "early" is. Acorn desktops in the 90s? iPAQ? GBA?) ARM is still a little rough. It's actually easier to work with RISC-V already. And part of that is, like, they pushed some stuff into the ISA that might be in the ABI otherwise, like which registers are caller-save versus callee-save, and that alone simplifies a lot. something_new_and_stupid.png
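Like, the register roles are spelled out right in the standard calling convention that travels with the ISA docs. My own toy example (roles per the psABI, nothing exotic):
```c
/* RISC-V integer registers, as the standard calling convention names them:
     a0-a7   arguments / return values (caller-saved)
     t0-t6   temporaries (caller-saved)
     s0-s11  saved registers (callee-saved)
     ra      return address, sp stack pointer */
long scale(long x) {            /* leaf function: the compiler can keep  */
    return x * 3 + 1;           /* everything in a0/t*, no stack frame   */
}

long wrap(long x) {             /* non-leaf: anything that must survive  */
    long saved = x;             /* the call to scale() goes in an        */
    return scale(x) + saved;    /* s-register (or gets spilled)          */
}
```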
@p > Have RISC-V core for DevTerm, very pleased. It is slow but it feels nice. Absurdly high battery life, too.
Oh sick (maybe I misread the other post), it's going to be fairly sluggish in the early days, just since it's only recently become even affordable to buy any chipset with it. What I'm really curious about is where it goes, since it's open, and there's no licensing involved. The possibilities with specialized instruction sets for different things (kind of like MMX or SSE) could get really whacky and fun.
> It's absurd. It's easier to get shit working on RISC-V than on ARM.
I'm thankful to hear that; I was hesitant but that's kind of what I've been reading on the forums and shit. It seems like the software support for the arch has experienced unprecedented growth. I'm a little wary since I remember getting into ARM when it was fairly new and getting no support for most shit for over like 2 years. (this was VERY early on)
> One of my big objections, though, was that no one was making them and it remained to be seen if it'd work out, my suspicion was that it was a wank, overspecified pre-silicon that was driven by wishful thinking instead of the practical realities.
A fair objection; up until a few months ago those were my thoughts almost exactly.
> I've been using the thing for a while and it's delightful, it's solid, the compiler doesn't get weird on me, no heatsink or fan and it never gets too hot to touch.
That's comforting; I've read a lot about some peeps' experiences and it's hard to really figure if it's just a "them problem" or something deeper. Though I know you are quite good about this sort of thing, which gives me hope :blobcateyes:
> he wants to revive some of Cray's ideas and do vector instructions instead of multiple-dispatch and I'm kinda excited about that.
As the current meta seems to be "spend as many cycles as humanly possible," I think that's a solid idea. Mainline software is only going to get more bloated, but I think that heavy optimizations/extensions should make it quite nice for the rest of us who like running light. The vector processing is such a cool idea, but I understand why it didn't catch on back in its time; it was a bit too forward-thinking. From my understanding, most developers, and their associated companies, were more interested in procedural code, and graphics and other things weren't advanced to the point where something like that was high-priority way back in the day. Although I think we are slowly seeing a shift.

Even in systems where GPUs are the crowned king of performance (when it comes to hashing, shit like that), they are often bottlenecked by pipelined instructions from the CPU, increasing latency across the board: the CPU is still the "brain", whereas GPUs, memory controllers, etc. are all just limbs. Even packaging the information to send to a GPU is paramount to its success. One of the big reasons ASICs are so big in mining, from my understanding, is a closer relationship between what's being requested, and an optimized path for those instructions (which are limited to one use case).
> I mean, see attached for some grounding (hardware should be boring), but something cool might be happening. Cautiously optimistic.
He's right; however I can't help but get a little hopeful :02_laugh:
> Oh, yeah, everything's easy to work with, etc. It's beautiful.
:blobcatheart:
> Oh, yeah, it was a little rough. (I don't know how early "early" is. Acorn desktops in the 90s? iPAQ? GBA?) ARM is still a little rough. It's actually easier to work with RISC-V already. And part of that is, like, they pushed some stuff into the ISA that might be in the ABI otherwise, like which registers are caller-save versus callee-save, and that alone simplifies a lot.
Oh lol, I done screwed up there. I meant early for modern applications (outside of phones), this latest "wave" of RISC adoption with a decent amount of interest behind it; I just didn't communicate that bit. Yeah it's had a long and hard road. I think ARM took a major hit just because the goals of most companies weren't optimized battery life and super light little clients. To do "real work" you usually needed a fat client running on your system, and to do things "optimally" you were restricted to the partial truth of CISC >>> RISC. The goals of [current year] have departed in many ways from those in the 90s. It also doesn't help that processor speeds were meager, so doing more in one instruction, 20 cycles or so, was a better proposition than 3-4 separate instructions of varying cycle counts each. But that's just how one dude sees it.
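Something like this is what I mean (the assembly is hand-sketched from memory, illustrative rather than real compiler output):
```c
/* One C statement, two encodings:

   x86, CISC: a single read-modify-write instruction
       add dword ptr [rdi], esi

   RISC-V, RISC: separate load / operate / store
       lw   t0, 0(a0)
       add  t0, t0, a1
       sw   t0, 0(a0)

   When instruction fetch was the bottleneck, the denser CISC encoding
   meant fewer bytes pulled across a slow memory bus per unit of work. */
void bump(int *counter, int delta) {
    *counter += delta;
}
```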
Wrong place, wrong time. Also to normies it was considered an emergent technology that didn't have practical use. Though thanks to ARM being more widely adopted in the past few years, and support from major vendors, I hope that people will be more open to RISC-V once it matures a bit more.
> A fair objection; up until a few months ago those were my thoughts almost exactly.
It is rare that I am this pleased to be proven wrong.
> I've read a lot about some peeps' experiences and it's hard to really figure if it's just a "them problem" or something deeper.
The only bad things I have heard about it are people being disappointed that this specific chip is underpowered. If anyone is having a bad time with it, that's news to me.
> Mainline software is only going to get more bloated,
That can't hold up. The plan is to make it computationally infeasible to run your own infra.
> The vector processing is such a cool idea, but I understand why it didn't catch on back in its time; it was a bit too forward-thinking.
:cray:
> One of the big reasons ASICs are so big in mining, from my understanding, is a closer relationship between what's being requested, and an optimized path for those instructions
Well, if you don't have to have something that's generally programmable and it's got one task, then you don't have to fetch instructions from memory, decode them, pipeline them, catch an interrupt from the SATA controller, wait for the memory bus.
> He's right; however I can't help but get a little hopeful :02_laugh:
He's almost always right. :linus:
In general, the cautious perspective is almost always right and all advances happen during rare moments when it's completely wrong.
> I hope that people will be more open to RISC-V once it matures a bit more.
Right now, while it's new but things are already working, this is where the opportunity is.
@p Note: I'm aware I'm speaking very generally about the 90s. There was also a lot of interest in hobbyist spaces for parallel computing as well as other neat things like RISC. However I'm only talking about mainstream goals and the bigger manufacturers/publishers like MS, IBM, shit like that.
Also fuck Apple for leaving PowerPC to die, when they moved to Intel in later years that was pretty fucked up; they went backwards.
> There was also a lot of interest in hobbyist spaces for parallel computing as well as other neat things like RISC.
Industry kind of expected that MIPS and SPARC were going to stay where they were; no one expected x86 to eat the world. Coming into the 90s, it was m68k and then all the Serious Business computing was RISC.
> Also fuck Apple for leaving PowerPC to die,
IBM continues to get a lot of mileage out of it with Blue Gene et al, it's just not something people put on the desktop. Looks like, per https://www.top500.org/lists/top500/2023/11/ , it's EPYC, Xeon, Xeon, A64FX, EPYC, Xeon, and then #7 is a POWER9.
Unfortunately a SPARC box back in the day would cost a pretty penny. It was a rad move on Sun's part to release OpenSolaris... Oracle can suck a dick for killing it.
> Unfortunately a SPARC box back in the day would cost a pretty penny.
Yeah, I mean, the equivalent high-end workstation would be pricey now; it's just that we entered a strange "CPU monoculture" for about 20 years and we're sort of slowly coming out of it. It does turn out, though, that in the tail end of the 90s or the early 00s, if you hit up surplus stores and dumpsters near the aerospace engineering companies (the right parts of, roughly, Manhattan Beach through Torrance) you could get an entire Beowulf cluster for cheap/free...then blow out the savings trying to get weird cables to hook them all up. "Badass UltraSPARC, 27" CRT, amazing...I have a Linux floppy that boots to minicom to actually use the thing, though, because I can't find the weird monitor cable anywhere, but I found an adapter for the serial port."
> It was a rad move on Sun's part to release OpenSolaris...
Part rent-seeking and part attempting to preserve some means of control, and the proportions (as well as what they're attempting to control) vary by organization.
> will be "super effective", unless the only target is the normal population. Which they themselves don't seem to be all that interested in running their own infra anyway.
I don't know who the fuck Eben Moglen has working for him, but the Freedom Box has taken maybe 15 or 20 years and there's still no turnkey infra for normies, and you'd think that, at worst, it should take a year of full-time effort by about two guys. yunohost grew out of the effort (I *think*), which would make yunohost the Linux to the Freedom Box's HURD. I think people would be glad to have something like that if they could just pick it up, but I also think that they are somewhat unaware and the vendors deliberately obscure the implications of "someone else's computer". I mean, people were delighted to have home computers that they could use to compute rather than just dialing into some other machine.
> though I question its effectiveness.
Well, look at mail servers: you run your own mail server and GMail will fuck with you and if GMail won't deliver your mail, that can hose you.
> If we found a sustainable way to increase cache sizes, it would be a band-aid, but a damn good one.
I don't know; I think cache infrastructure in a CPU is too complicated nowadays. It's reliable but it's bulky. I don't see a way around it without slowing down memory (e.g., widen the bus more: better throughput but worse random access in the RAM).
Intel was working on something interesting: embedding small SIMD CPUs in the RAM, so you send a little code across the memory bus and the transformations are done in-place rather than making the data cross the bus, get changed, and cross the bus the other way again. (Joe Armstrong famously noted something that should have been obvious: the program is smaller than the data it operates on, so it's cheaper to send the program to where the data is than vice versa.)
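In plain C the idea caricatures like this (both loops obviously run on the same CPU here; the point is what would have to cross the memory bus if the second loop lived in the RAM itself):
```c
#include <stddef.h>
#include <stdint.h>

/* The "program": a few bytes of code. The data: megabytes. */
static uint32_t brighten(uint32_t px) { return px | 0x00101010u; }

/* Conventional: data crosses the bus to the CPU, gets transformed,
   and crosses back. Twice the traffic of the data's size. */
void transform_on_cpu(uint32_t *dst, const uint32_t *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = brighten(src[i]);
}

/* Armstrong-style: ship the (tiny) function to where the data sits
   and mutate in place; only the code has to make the trip. */
void transform_in_place(uint32_t *data, size_t n,
                        uint32_t (*program)(uint32_t)) {
    for (size_t i = 0; i < n; i++)
        data[i] = program(data[i]);
}
```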
> Rust in the kernel.
It was over as soon as they shoved their CoC down his throat.
> "redundant checks" slowing perf.
You'd think their much-touted compile-time checks would obviate the need for runtime checks.
> I think it's gonna be sick (still not trying to over-hype myself tho)
In times like this, opportunities abound for people to become the person creating the sickness.
> Though that's interesting, what do you think was the primary drive for "x86 to eat the world" if industry was kinda expecting MIPS/SPARC to keep on going?
The same reason Unix ate the world, the same reason ARM ate the world, etc. Wide availability and it was good enough. Early 90s, getting Unix to run on one of those chips was kind of an achievement, but once it did, you had an environment like on the big machines. Not just Unix, but anything: there were not very many capabilities that were available on workstations or big iron but not available on commodity desktops, and those all happened to run x86. You take something people want, you make the 80% solution easy to get, and then once in a while someone will drop off another 1-2%, and the niche occupied by high-end workstations shrinks. Then along comes a company like Google that says "We don't even need cases, we just tape disks to the motherboard and shove it into a rack and it's cheap enough that if something fails, we replace it instead of fixing it."
> it's not like PowerPC is really difficult to support.
It's a bigger effort than you might think. If Apple doesn't own the chipfab plant and doesn't employ the chip designers, and they're shelling out to IBM, then they're kind of beholden. Not just that, but the switch was announced around the time multi-core x86-64 chips were announced. They have these two big G5s and the performance was not quite as nice as two big Xeons. Intel's got AMD to cope with, they have to support a broad range of applications, so there was external pressure for Intel and Apple didn't have to negotiate as hard. IBM keeps making PowerPC chips whether or not Apple keeps buying them, so Apple couldn't apply pressure to get what they want; with Intel, they didn't have to apply as much pressure because Intel was already making something closer to what they wanted, and it's easier for a big manufacturer to apply pressure. It's also easier for a big manufacturer to get other big manufacturers on board when there *are* other big manufacturers: the only PowerPC users were IBM, Apple, and Sony. If Apple wants something from Xeons that they're not getting, they can threaten to switch to AMD, they can get other OEMs to help apply pressure, they can plausibly affect Intel's bottom line. Now they've licensed ARM's ISA, they're back to controlling the manufacture of their own chips. Maybe Samsung or TSMC is actually fabricating it, but it's their own design, they hire the manufacturer rather than depending on a chip that is designed by someone else.
But is there a reason to support it? Like, what does it get you besides good feels from PowerPC fans? (Not to discount that: I have bought chips because of the feels, otherwise I wouldn't be constantly jazzed about ARM devices or this RISC-V CPU. But we're a niche, and a business niche can be large but a consumer niche is never lucrative enough.)
> That can't hold up. The plan is to make it computationally infeasible to run your own infra.
Why do you think that is? I see what you are saying, however with OSS and autists everywhere I can't imagine that will be "super effective", unless the only target is the normal population. Which they themselves don't seem to be all that interested in running their own infra anyway. Given economic constraints, the shameful coverage from ISPs in burgerland, and the unhinged need to jack up rent/gas/food to the point of financial suffocation, plus the common limiter that most don't seem too interested in learning about the technology they rely on or valuing their personal information, I see the appeal of eliminating self-managed infra, though I question its effectiveness. Then again, I'm not completely incompetent and I haven't suffered severe brain-rot, which is why I can't seem to figure why most businesses/govs/NGOs do what they do.
> Well, if you don't have to have something that's generally programmable and it's got one task, then you don't have to fetch instructions from memory, decode them, pipeline them, catch an interrupt from the SATA controller, wait for the memory bus.
I'll give you that. Even with improvements made to the CPU you'd still have to deal with memory (including the often shitty controllers) and drive considerations. If we found a sustainable way to increase cache sizes, it would be a band-aid, but a damn good one.
> He's almost always right. [Linus looking "smug"]
Almost always, though I'm still butthurt over his acceptance of Rust in the kernel. I don't know what the fuck he was thinking. I don't care for Rust, however I understand its value; completely ignoring the politics, or the devs... the assembly Rust generates is what I consider to be "questionable", and dealing with their "safe"-first code style reads to me as "redundant checks" slowing perf. Why not improve on what has been tried and true, vs. integrating a "newer" thing that has so many drawbacks it's... questionable from what I see. Maybe I'm just ignorant (not the first time, nor the last), though I can't get around it.
> Right now, while it's new but things are already working, this is where the opportunity is.
Which is why I'm all here for it :02_laugh: I think it's gonna be sick (still not trying to over-hype myself tho)
> Industry kind of expected that MIPS and SPARC were going to stay where they were; no one expected x86 to eat the world. Coming into the 90s, it was m68k and then all the Serious Business computing was RISC.
Hmmm, I'll have to look more into it. I was under the impression that it was a "crime" of opportunity that got CISC to where it is today. It's pretty hard to parse the info since it happened so long ago, and speculative recollections suffer with hindsight. Though that's interesting, what do you think was the primary drive for "x86 to eat the world" if industry was kinda expecting MIPS/SPARC to keep on going? What caused it to stagnate (not sure if that's the best word to describe it)?
> IBM continues to get a lot of mileage out of [PowerPC] with Blue Gene et al, it's just not something people put on the desktop.
That's something I don't really get, though; it's not like PowerPC is really difficult to support. Apple did it faithfully for years, every time I pull out a PowerPC Mac I find it very enjoyable, and you can still find compilers for it. From my memory it just kind of worked for whatever supported it. I just don't get abandoning PowerPC only to then go back to RISC years later. I remember hearing something about contracts... I thought it had something to do with manufacturing issues, though I wasn't paying all that much attention.
IBM seems to be an anomaly in the overall zeitgeist of whatever era they operate in, and in a way it's nice to see they still utilize PowerPC. I don't get it, but it's funny nonetheless.