@snacks i get it's a really weird mindset to have but I grew up with "don't download and run unrecognized code from the internet" and ever since like 2021 I've taken that to the extreme
@p@nyanide And a guy who I think is on the Go cryptography team tried to convince me here on Fedi that reading about this stuff ("direct" being a special value) on some web page IS the proper way to document it 🤦
@m0xee@nyanide I forget what I was trying to look up but there was one instance of `go doc $something` that had some sparse information and a link to the web page in case you wanted more information. So I go to the web page, the information was still sparse... and the page ended with a suggestion to use `go doc` to get more information.
@p@nyanide The standard library is very well documented… Well, it used to be; I'm not sure it's still the case that they don't make you go online to figure things out.
@m0xee@nyanide Oh, yeah, I mostly look at that. The "Check webpage"/"Check go doc" loop was something in the standard library. (I wish I could remember what it was.)
On the other hand, it's generally nice (and also fast) enough that I feel like I'm complaining over something small. man pages have spoiled me.
@m0xee@nyanide Actually, this is fun, I keep this in the scratch space in acme:
:mycomputer: go doc `{cat /dev/snarf}
You could do the same on Linux (though you'd have to replace the cat with `xclip -o` or something), but if I were still using vim, I'd probably overload the man page key (`K`, which is really useful when writing shell scripts or C but not so much in other languages) to have it make a scratch buffer and fill it by calling `go doc` on the identifier under the cursor.
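Something in that vein would probably look like this — a rough, untested sketch (GoDocScratch is just a name I made up; `:setlocal keywordprg=go\ doc` alone gets you most of the way, it just pages the output instead of making a buffer):
" show `go doc` for the identifier under the cursor in a throwaway split
function! GoDocScratch()
  " grab the word before the cursor leaves this buffer
  let l:word = expand('<cword>')
  " scratch split: no file, no swap, wiped when hidden
  new
  setlocal buftype=nofile bufhidden=wipe noswapfile
  execute 'read !go doc ' . shellescape(l:word)
endfunction
autocmd FileType go nnoremap <buffer> K :call GoDocScratch()<CR>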
@p@nyanide AFAIR the online documentation also gives you a link to the source so you can figure things out for yourself if something doesn't make sense, I'll give them that. It's also good for figuring out the differences between Go versions.
> it displays the output of "go doc" for the function inline as you type its name
The busier a UI is, the harder it is to actually pay attention to what you're doing. I don't know how people actually use computers that are full of flashing shit and scrolling text, where 50 things happen in 50 different parts of the screen every time you push a single button. I spent hours figuring out how to disable the stupid evil twin cursor (`let loaded_matchparen = 1`...you can't stop it from happening, you can only tell it that it already happened, though it's changed since then and it's doing something else now) when vim first added it, because I'd be moving my cursor around and then suddenly there's another cursor moving in the opposite direction; it was maddening. I started thinking of vim as the Firefox of text editors.
@p@nyanide There's a vim-go plugin or something like that — it brings in tons of dependencies, including things that you might never need, and it's pretty slow as it's a full LSP implementation, but it displays the output of "go doc" for the function inline as you type its name — quite handy unless you're doing it on a Raspberry Pi over ssh 😅
> Yeah, I'm not a fan of these things either — I sometimes run vim with an empty config to prevent it from loading plugins.
Yeah, if I am using vi, usually it's nvi or busybox vi; half the time I just use ed, though. It's nice to not have the editor demand the whole screen if you are doing sysadmin stuff, and that is usually what I'm doing if I am not using acme.
> Turns out it builds the project on every iteration to tell you what's wrong
@p@nyanide Yeah, I'm not a fan of these things either — I sometimes run vim with an empty config to prevent it from loading plugins. vim-go is still bearable — the most insane thing I've seen in this vein is the Rust plugin. I tried running it on an old ThinkPad T43 once: suddenly everything slows down to a crawl and the fans are spinning up, and I'm like "WTF is happening?!" Turns out it builds the project on every iteration to tell you what's wrong 🤦
@p@nyanide if only rsc were still involved. things will likely go full corporate bullshit now that he and the other old-time go people aren't involved anymore. :blobcatgrimacing:
@bonifartius@nyanide@p To be fair, removing ppc64 support isn't a good idea. Despite how niche it is, it's still used a lot in the enterprise world. I personally know people who need that support.
Seems kinda pointless to get rid of it. Google can't keep a PowerPC lab running or lease some time on one of IBM's? I thought that kind of shit was the purported benefit of letting them touch a programming language.
> Despite how niche it is, it's still used a lot in the enterprise world.
IBM POWER9 machines are numbers 14 (125.71 petaflops), 72 (23.05), 75 (25.03), and 139 (11.03) on the TOP500 list, so it's still responsible for a lot of computes.
@phnt@p@nyanide almost nothing is tier 1 supported. bsds aren't, plan9 isn't, several architectures aren't. still, they work fine (cf. the dashboard); they just don't block releases.
@phnt@p@nyanide@bonifartius it's also the most powerful kind of computer you can buy that the FSF likes. (Buy yourself an 8 core 32 thread beast from raptor computing today!)
@RedTechEngineer@p@nyanide@bonifartius I would like to own a Talos workstation one day as the whole stack is interesting, but throwing $3K at them isn't something I can do.
@raphiel_shiraha_ainsworth@bonifartius@RedTechEngineer@phnt@nyanide There's always something around; I think a Milk-V would be cool, they seem to be the main ones doing interesting stuff at the moment. I have a RISC-V DevTerm, which I mainly got as a curiosity but it's a really fun system. I don't know of a really impressive $current_year one, I've been thinking of getting one of the Lychee cluster boards, but those are a little more than $20.
Reasonable. The TuringPi board has a couple of SATA ports and a couple of mini-PCIe connectors; mini-PCIe SATA controllers can be gotten cheap, but to fit it into a DevTerm, you'd have to solder it in and remove the printer.
> i don't really like to burn through sd cards all the time
Ah, yeah. No errors on the uSD currently in my DevTerm, which I have basically never turned off for two years. I think the durability has gotten better. On the other hand, I used older uSD cards for doing the builds of CRUX (for the A-06) and Slackware (for the RISC-V one) and two of them burned out pretty quickly.
> that would really help with hosting stuff at home.
Yeah; for hosting stuff at home, like, I used to just grab refurb servers, and my main server (mail, web, a bunch of Plan 9 VMs, etc.) still is a refurbished DL380 G7. You can get these things from Newegg or wherever in the ~$100-200 range. Like, they have a DL380 for $164 right now: https://www.newegg.com/hp-proliant-dl380-g9-rack/p/2NS-0006-31E21?Item=9SIAG1MKA76526 . The only problem is a refurb is a refurb; I never had any trouble until I got that giant one to run FSE on, and FSE was up and down all that time because the motherboard had some problem that I never ended up solving. (Had to be the motherboard because the hardware watchdog would lock up.)
The TuringPi2 is nice. Much lower power consumption, reasonably priced, aforementioned SATA ports. That's what FSE lives on right now; it's running on a single RK1 with an NVMe. No moving parts besides the fans.
@p@RedTechEngineer@phnt@nyanide@raphiel_shiraha_ainsworth what i'd really like is something inexpensive with good storage options, like two sata ports for a raid or something. i don't really like to burn through sd cards all the time :ultra_fast_parrot: that would really help with hosting stuff at home.
still would leave the problem that my connection has shit upload bandwidth. maybe i could get a business account from the cable provider or starlink or whatever to fix that, but that's another topic.
>i don't really like to burn through sd cards all the time
Linux has some answers to that problem with filesystems like F2FS and JFFS2. They aren't as user-friendly as the normal ones, but it's still better than nothing, and with some config changes that reduce write cycles, you can get a system that does barely any writes when idle (systemd can log to a ring buffer; the same can be achieved with a more normal syslog setup and some ingenuity with logrotate and tmpfs). Some manufacturers even make uSD cards specifically for these SBCs that have higher write endurance and, more importantly, aren't as slow.
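A rough sketch of the kind of config changes I mean (paths, devices, and sizes are just examples, adjust for your distro; logs kept in RAM are gone after a reboot):
# /etc/systemd/journald.conf.d/volatile.conf — journald keeps logs in RAM only
[Journal]
Storage=volatile

# /etc/fstab — scratch/log dirs on tmpfs, ext4 root mounted with noatime and a longer commit interval
tmpfs           /var/log  tmpfs  defaults,noatime,size=64m    0 0
/dev/mmcblk0p2  /         ext4   defaults,noatime,commit=600  0 1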
@phnt@bonifartius@RedTechEngineer@nyanide@raphiel_shiraha_ainsworth Has F2FS improved much in the last few years? My main point of reference is some Phoronix benchmark that demonstrated that Postgres is faster on ext4, but that was from before I set up the previous FSE box, so it's dated. (btrfs, unsurprisingly, performed the worst by an order of magnitude and actually exploded, so there are no benchmark numbers for it on some of the SSDs.) In the meantime, ext4 got more SSD-friendly and presumably F2FS has been chugging along.
i'm not a hardware guy, i just wonder why so few boards include sata or m.2 ports.
i'd really love an inexpensive arm board with many sata ports to build a small nas with. you don't need much cpu power or much ram to do this, only a decent network interface.
> i'd really love an inexpensive arm board with many sata ports to build a small nas with.
They've had kits for this ( https://www.hardkernel.com/shop/cloudshell-2-for-xu4/ ) but it's mostly DIY nowadays unless you spring for one of the boards that does have the m.2 already. Most of the RPi gear has a way to get at the PCIe bus now, so you don't really need to worry about uSD cards much any more, except for portable systems. (Even then, though, like, the DevTerm/uConsole, people have tapped into the pins and shoved a "real" SSD inside. I use them as portable machines to talk to the bigger machines, though, so I don't mind treating the storage as disposable and I don't want to trade the battery life.)
You can sorta see the SATA ports on the TPi2 board; they're next to the power connector, by the PSU. (They're empty on FSE because the NVMe is slotted under the board.) IMG_9860.jpg
@p@RedTechEngineer@nyanide@raphiel_shiraha_ainsworth@bonifartius F2FS is still probably much slower than ext4, especially when running something that likes to do a lot of random I/O, like a DBMS. It's probably not a good idea to use it on SSDs anyway, as those fix a lot of the underlying issues with their complex controllers in front of the NAND flash. Google has been using it as the default for both the ro and rw partitions on Android for 4 years. Mainline Linux is probably less stable than that due to a lower degree of testing.
>btrfs, unsurprisingly, performed the worst by an order of magnitude
Probably needs some FS tuning. ZFS has the same issues with DBMS where it does smart things that the DBMS also does and it destroys performance.
> and actually exploded so there are no benchmark numbers for it on some of the SSDs.
Typical BTRFS experience. Thankfully it hasn't catastrophically blown up on me yet in the 4 years I've been using it.
>ext4 got more SSD-friendly
There are two sides to this. One is pushing more performance out of the SSDs with more optimized I/O and scheduling (NAND is actually slow at small I/O queue depths, and without a DRAM cache it can perform much worse than spinning rust). The second side is wear-leveling and better management of the raw flash. ext4 probably doesn't bother much with the latter, as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD card flash chips.
> Thankfully it didn't catastrophically blow up on me yet in the 4 years I've been using it.
I think sjw was running it for something at some point. I already didn't like it when I was in `make menuconfig`, saw "Ooh, new filesystem!", hit the question mark, and it started by saying "It's supposed to be pronounced 'better FS'!" Anyway, the benchmark was more than four years ago (I think 2019), so maybe it doesn't blow up as much any more, or maybe it is still the expected btrfs experience. (Even ext4 blew up on me the first time I tried it, at which point I decided to not even bother looking at filesystems unless they've been in production for several years.)
> ZFS has the same issues with DBMS where it does smart things that the DBMS also does and it destroys performance.
ZFS does the same with RAID and LVM and the entire I/O subsystem. They should probably rename it NIHFS.
> ext4 probably doesn't bother much with the latter as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD Card flash chips.
I think it *mostly* focused on stability. But it's more or less a 30-year-old codebase; you kind of expect stability. New benchmarks would be interesting, but I don't know if anyone has bothered.
>i'm not a hardware guy, i just wonder why so few boards include sata or m.2 ports.
Using ARM boards as desktops or servers is a relatively new concept, and before that you didn't really need either of those. That's why. SATA needs a separate controller (usually on a PCIe bus), M.2 requires both of those, and cheap-enough ARM chips with PCIe support came out only in the last few years.
With RISC-V it's the same story, but with even less traction and demand in the market.
>i'd really love an inexpensive arm board with many sata ports to build a small nas with
There are Raspberry Pi HATs with ~4 SATA ports on them, if you want. But to me it feels like a hack instead of a proper solution. As p wrote before me, an ODROID or a TuringPi board is the more proper solution to that.
> Using ARM boards as desktops or servers is a relatively new concept
That was the original use of the Acorn RISC Machine ("ARM") CPU: https://en.wikipedia.org/wiki/Acorn_Computers . I had a couple of Genesi Efika MXs. (I have been a fan of ARM since the GBA.)
> With RISC-V it's the same story, but with even less traction and demand in the market.
>I think sjw was running it for something at some point.
He was also running NB on Arch for some time, if I remember correctly, so it doesn't really surprise me :D
>so maybe it doesn't blow up as much any more or maybe it is still the expected btrfs experience.
It can still blow up when it runs out of free space, and since they broke the free space reporting _intentionally_, a lot of userspace utilities that calculate free space before committing transactions will blow up with it. Unless they use custom code linked from libbtrfs, that is. Probably one of the most braindead decisions one could make in filesystem design.
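To see what I mean, compare the two (just an example, assuming btrfs-progs is installed and /mnt is a btrfs mount):
# what generic tools see — the statfs numbers btrfs fudges
df -h /mnt
# what's actually allocated to data vs. metadata, which is what decides when you hit ENOSPC
btrfs filesystem usage /mnt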
>Even ext4 blew up on me the first time I tried it, at which point I decided to not even bother looking at filesystems unless they've been in production for several years.
I had ext4 survive a bad USB cable that created garbage data and deadlocked the HDD's controller multiple times. It only took 40GB of swap and a day of fsck.ext4 constantly complaining about something to fix it. In the end no data was lost.
>ZFS does the same with RAID and LVM and the entire I/O subsystem. They should probably rename it NIHFS.
It acts like malware in the entire disk and I/O subsystem, sticking its fingers everywhere it can, but usually for a good reason. Where it falls apart is applications trying to be too smart with I/O (1). You can only appreciate the whole DB-like design and the extreme paranoia about everything I/O-related when you use it on a large disk array. Other than that, it's a bad filesystem to use on your daily-driver system: none of the benefits with all of the issues. Running it under Linux is also probably a bad idea; just use vanilla FreeBSD or TrueNAS.
(1) You can disable a lot of the "smart" features per pool, so this problem usually only crops up in misconfigured environments.
Neither md nor LVM does parity checking on reads by default, so you'll encounter silent bitrot more frequently, compared to almost zero on ZFS. As a result, you either need to run scrubs more frequently, which can be annoying depending on the array size, or configure dm-integrity with its not-so-great documentation. But if you need to run Linux on your storage server, it's still the best bet and even has good performance.
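For md, a scrub is just this, run as root (assuming the array is md0; Debian-ish distros already ship a monthly checkarray cron job that does the same thing):
# kick off a consistency check ("scrub") of the array
echo check > /sys/block/md0/md/sync_action
# watch progress
cat /proc/mdstat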
@p@phnt@nyanide@raphiel_shiraha_ainsworth@bonifartius I like btrfs for SBCs or other devices with little and slow storage. Transparent compression, COW, and data dedup on the filesystem make things nice. Though it feels like btrfs has semi-stalled. I feel like encryption has been a planned feature for the better part of a decade, and RAID is still broken, which seems strange for a filesystem that likes virtual volumes.
> He was also running NB on Arch for some time, if I remember correctly
I think so; it was ubertuber by the time it was baest, though.
> Unless they use custom code linked from libbtrfs that is.
:alexjonesshiggy2:
> Probably one of the most braindead decisions one could make in filesystem design.
Well, there's a thing that makes no sense when designing a regular POSIX filesystem, and then there's a thing that makes no sense if your goal is a good filesystem but that makes perfect sense if you are trying to do lock-in so you can turn open-source into a closed ecosystem: this was the specific goal for RedHat at some point (and part of Lennart's pitch to his bosses about why they should push systemd), so it's not a huge surprise that they would try to force a new library down everyone's throats (given systemd and D-Bus and PulseAudio and Avahi and and and and and ad infinitum).
> but usually for a good reason.
Well, like, every thing in a shantytown has a good reason to be there, but the shantytown considered as a whole doesn't represent good engineering. "Oh, we don't trust the OS's I/O scheduler to do this optimally" is a good reason, but it's bad engineering.
> i didn't know about the sata stuff for rpis, for a while i was eyeing rockpro64 because it has two sata ports so it could do a raid.
Oh, yeah, there are a lot of options for that kind of thing nowadays.
> the turing board looks _really_ nice, thanks for the picture! i don't think i have the funds for the board and more than one compute module right now,
Yeah, it's cheap for what it is, but not cheap-cheap. But basically, all the stuff I crammed into that case, it was about $900, and the previous refurbished box with all the trouble was $1400. (And now it's all choked by the shitty net connection because of the circumstances surrounding :brucecampbell::callmesnake:, but it's beefy enough at least.)
the turing board looks _really_ nice, thanks for the picture! i don't think i have the funds for the board and more than one compute module right now, but it would likely solve all the server needs i have here :)
i will follow up on the rpi-cm-sata lead, a first search seems promising
> Running it under Linux is also probably a bad idea, just use vanilla FreeBSD or TrueNAS.
iirc openzfs is now the same code base everywhere. i never had problems with the linux port. what i like with zfs is that the tools have a pretty good user interface, like how "zpool status" provides sane descriptions of what is broken and how to fix it.
@dcc@RedTechEngineer@bonifartius@nyanide@phnt@raphiel_shiraha_ainsworth Hm. My display has glitched a little since I had to take that trip in January. (Basically slept in the car; the hoodie kept me warm but I think some of my devices went below freezing.) It goes away after it warms up (past ~28 degrees communist); it's minor, so I haven't tried to figure out which component it is. Tried swapping the core out?
> i'm not a hardware guy, i just wonder why so few boards include sata or m.2 ports.
I'm not an electrical engineer either, @bonifartius, but I'd guess it's summat to do with power delivery. Not that it's impossible, but with a lower ceiling on total power usage, fewer things can go wrong. An NVMe drive could easily have a higher peak wattage than the rest of the SBC, and guess how I learned that!
@p@dsm@RedTechEngineer@phnt@nyanide a flat mate once removed the bottom cover of a shitty lenovo e series and placed the heat sink onto ice packs to reinstall win7. during installation some drivers were missing, so the fans didn't spin up and the thing would overheat from copying files.
@p@RedTechEngineer@phnt@nyanide@raphiel_shiraha_ainsworth the standard rpi cm 4 baseboard has a pcie port (haven't found one for cm5 with pcie port yet) and there are four-port sata boards made for it, guess that should work fine for my purposes.