> One of my big objections, though, was that no one was making them and it remained to be seen if it'd work out, my suspicion was that it was a wank, overspecified pre-silicon that was driven by wishful thinking instead of the practical realities.
A fair objection; up until a few months ago those were my thoughts almost exactly.
> I've been using the thing for a while and it's delightful, it's solid, the compiler doesn't get weird on me, no heatsink or fan and it never gets too hot to touch.
That's comforting. I've read a lot about some peeps' experiences, and it's hard to really figure out if it's just a "them problem" or something deeper.
Though I know you are quite good about this sort of thing, which gives me hope :blobcateyes:
> he wants to revive some of Cray's ideas and do vector instructions instead of multiple-dispatch and I'm kinda excited about that.
As the current meta seems to be "spend as many cycles as humanly possible," I think that's a solid idea. Mainline software is only going to get more bloated, but I think heavy optimizations/extensions should make it quite nice for the rest of us who like running light. Vector processing is such a cool idea, but I understand why it didn't catch on, especially back in its time; it was a bit too forward-thinking.
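To make the vector bit concrete, here's a toy C sketch (my own illustration, nothing to do with his actual design): a plain element-wise loop that an RVV-capable compiler can turn into a tiny strip-mined vector loop instead of dispatching a pile of scalar ops per element.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Toy example: add two arrays element-wise.
 * A scalar core issues a load/load/add/store sequence per element;
 * a vector unit sets an active vector length (vsetvli) and then chews
 * through a whole chunk of elements per vle32.v / vadd.vv / vse32.v,
 * so the instruction stream stays tiny no matter how long the array is.
 * Assumption: an RVV-capable toolchain, e.g. gcc -O3 -march=rv64gcv,
 * which can auto-vectorize this loop.
 */
void add_arrays(int32_t *dst, const int32_t *a, const int32_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}
```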
From my understanding, most developers, and their associated companies, were more interested in procedural code, and graphics and other workloads weren't advanced enough for something like that to be high-priority way back in the day. Although I think we are slowly seeing a shift. Even in systems where GPUs are the crowned king of performance (when it comes to hashing, shit like that), they are often bottlenecked by pipelined instructions from the CPU, increasing latency across the board, as the CPU is still the "brain" whereas GPUs, memory controllers, etc. are all just limbs.
Even packaging the information to send to a GPU is paramount to its success. One of the big reasons ASICs are so big in mining, from my understanding, is the closer relationship between what's being requested and an optimized path for those instructions (which are limited to one use case).
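Just to illustrate that packaging point (made-up API name here, not any real GPU runtime): batching the work up front is the difference between paying submission overhead once versus once per item.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * device_submit() is a made-up stand-in for whatever a real GPU or
 * accelerator runtime exposes (a queue, a command buffer, a DMA kick).
 * The only property assumed is that each call costs a fixed chunk of
 * CPU-side overhead and latency.
 */
static void device_submit(const void *buf, size_t bytes)
{
    (void)buf;
    (void)bytes;
    /* stub: a real runtime would enqueue work for the device here */
}

/* Anti-pattern: the CPU "brain" dribbles work out one item at a time,
 * so per-submission overhead dominates and the device mostly idles. */
void send_items_one_by_one(const uint32_t *items, size_t n)
{
    for (size_t i = 0; i < n; i++)
        device_submit(&items[i], sizeof items[i]);
}

/* Better: package the whole batch up front and hand it off once, so a
 * single submission's worth of overhead is amortized over n items. */
void send_items_batched(const uint32_t *items, size_t n)
{
    device_submit(items, n * sizeof items[0]);
}
```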
> I mean, see attached for some grounding (hardware should be boring), but something cool might be happening. Cautiously optimistic.
He's right; however, I can't help but get a little hopeful :02_laugh:
> Oh, yeah, everything's easy to work with, etc. It's beautiful.
:blobcatheart:
> Oh, yeah, it was a little rough. (I don't know how early "early" is. Acorn desktops in the 90s? iPAQ? GBA?) ARM is still a little rough. It's actually easier to work with RISC-V already. And part of that is, like, they pushed some stuff into the ISA that might be in the ABI otherwise, like which registers are caller-save versus callee-save, and that alone simplifies a lot.
Oh lol, I done screwed up there. I meant it was a bit early for modern applications (outside of phones), what with the decent amount of interest in this latest "wave" of RISC adoption; I just didn't communicate that bit. Yeah, it's had a long and hard road. I think ARM took a major hit just because the goals of most companies weren't optimized battery life and super light little clients. To do "real work" you usually needed a fat client running on your system, and to do things "optimally" you were restricted to the partial truth of CISC >>> RISC. The goals of [current year] have departed in many ways from those in the 90s. It also doesn't help that processor speeds were modest, so doing more in one instruction, 20 cycles or so, was a better proposition than 3-4 instructions of varying cycle counts each.
But that's just how one dude sees it.
Wrong place, wrong time. Also, to normies it was considered an emergent technology with no practical use. Though thanks to ARM being more widely adopted in the past few years, and support from major vendors, I hope people will be more open to RISC-V once it matures a bit more.
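And on the caller-save vs callee-save point from your quote above, here's a toy C sketch of what the standard RISC-V calling convention buys you (my own illustration, not anything from his hardware; the a/t/s split is the standard convention, but which register a given value actually lands in is the compiler's call):

```c
/*
 * Rough illustration of caller-save vs callee-save, using the standard
 * RISC-V calling convention (a0-a7 for arguments, t0-t6 caller-saved
 * temporaries, s0-s11 callee-saved). helper() is just a placeholder for
 * any external call the compiler can't see into; exact register choices
 * are up to the compiler, the convention only says who must preserve what.
 */
long helper(long x);

long demo(long a, long b)
{
    /* a and b arrive in a0/a1. A value that only lives until the call
     * can sit in a caller-saved t-register and simply be forgotten
     * afterwards; a value that must survive the call either goes in a
     * callee-saved s-register (which helper() is obliged to restore)
     * or gets spilled to the stack. Because that split is fixed by the
     * convention, the compiler never has to guess what helper() clobbers. */
    long short_lived = a * 3 + b;   /* dead after the call: t-register is fine */
    long kept        = a - b;       /* live across the call: s-register or stack */

    return helper(short_lived) + kept;
}
```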