My (vintage early 2018) home desktop has now locked up mysteriously while idle twice in less than a week (early morning of September 16th and now around 11am today). This could be (Linux) software, or motherboard/CPU hardware, or case/PSU/whatever; it's hard to tell. Dear desktop: I would like to not buy a replacement yet. Please stop doing that.
@lanodan I draw a distinction between 'open source' as a general thing, which (as you note) is a cockroach driven by people's urge to share and show cool things, and open source as a useful thing, more or less something you can confidently build on or use generally, and which feels a bit precarious. 'I threw this thing out into the world' is eternal; 'I develop and keep maintaining this useful thing' maybe not so much, especially when it's big and important.
@jannem As someone who works in academia (a sysadmin in a computer science department), I have to cough sadly. Most research code is not shared and also not usable, partly because it's not what graduate students and researchers are measured on and care about.
(Grad students are measured on graduating with their degree and publications, researchers on publications and grants. Polishing software for release takes away from both sides of this. It happens, but.)
Some days, I wonder how many more years open source will last as a useful thing. My perception is that maintainers are aging, burdens are increasing, and liability pressures are going to show up sooner or later to put the final knife in.
In ten years, will the Linux kernel, gcc, and so on be maintained almost entirely by people paid by their employer to do so? (Maybe this is already the case.)
@stevecheckoway @regehr As far as the central shared machines in the department go, they're Unix machines. You can install whatever you want yourself if it works without privileges, or ask for stuff that's available in and supported by the Linux distribution to be installed system-wide. (Generally the answer is 'yes' but there can be things that aren't safe and low-impact even though they're packaged.)
@stevecheckoway @regehr I'm a sysadmin at UofToronto CompSci, and locally our answer is mostly 'yes you have admin access', with the qualification that if you want to NFS mount central filesystems on a machine, it has to be run by your local technical staff (who you fund and get to tell what to do) with you having no direct admin/root access.
Research servers often wind up run by your funded tech staff with NFS mounts. Laptops and desktops are usually self-administered.
@offby1 @waider I've certainly noticed this too, so I got curious and it turns out bicycles.stackexchange has some plausible answers[1]. In general, my intuition is that noise and inefficiency when freewheeling mostly don't matter to the race crowd because they expect to be pedaling (hard) almost all of the time. "Those" cyclists may well not be racing (or fast), but they sure do like buying the race gearsets with all the bling.
@lanodan According to what I looked up, pin 8 (grey) is only valid when the ATX PSU is officially on; it's a signal that what the PSU is supplying to the system is good and you can start using it. Pin 9 (purple) is +5V standby power and is probably the primary 'do we have AC' signal that things use (well, or power themselves from it). On desktops, all the power management (including 'power up or not after AC on') seems to be in the ICH/PCH chipset stuff, or the equivalent for AMD.
@lanodan Are PC power switches wired directly to the PSU? I thought they all went through the motherboard these days, which is how things like 'hold for four seconds to really power off' and 'just pressing briefly sends a signal to the OS' worked. That means there's something on the motherboard that interprets them, even when the system is nominally powered off. (And then pulls PSU pins to actually deliver (more) power.)
Today I was reminded that not writing output at all is appreciably faster than writing output to /dev/null. Well, at least it is in Go.
(This doesn't normally come up because my little netcat-like program is normally used in situations where I do care about the output. Quick and dirty network bandwidth testing is not one of them.)
@0x0ddc0ffee @drscriptt Per https://tacobelllabs.net/@arrjay/113105614521811597 desktops handle this in the PCH. I suspect the core handling is there even on servers with a BMC and the BMC talks to the PCH over the 1x PCI lane shown in the diagram. I think BMC management via shared NICs is lower level (and weirder) than the PCH, but really you want dedicated and thus separately powered management NICs.
(Powering the BMC portion of the board must also be fun times. Hopefully it's very low power.)
@0x0ddc0ffee @drscriptt In theory I guess you could have the (soft) power switch wired to the BMC as basically a GPIO pin and then the BMC controlling the 'power switch' wired to the PCH or wherever it would go, but that seems more indirect and failure prone than just giving the BMC direct PCI access to the PCH to control chipset/platform level power management.
@lanodan I was thinking more things like how the BIOS setting for 'when power is restored, stay off/turn on/return to last state' is actually implemented. But apparently that is probably entwined with the S3/etc sleep logic, which I guess I shouldn't be surprised by.
Today's interest: just how does a modern PC motherboard implement soft ATX power control? Presumably the main CPU isn't running all the time, although parts of the motherboard are powered. Is there a separate little always-on SoC that implements the logic? Something more clever?
The information is probably somewhere on the Internet but my search luck on this is 'lol no', probably since I don't know the right technical terms to look for.
It struck me recently that my phone is older than my desktop, and my desktop isn't exactly young (fall 2016 vs early 2018). On the one hand, this feels unusually old for both. On the other hand, I feel that it shouldn't be. There's no particular reason why a lot of people couldn't keep using the same phones and computers for years, if only they would keep getting supported and etc.
@lanodan Generally I don't want past history in new shell sessions. There are limited environments where I use it, mostly to save turning regularly used command lines into scripts, but otherwise I want new shells to start from scratch.
(One of the problems with merging past history from now-exited shells is that it necessarily creates a linear history of commands that does not actually exist in reality. I find this confusing whenever I wind up in an environment that does it.)
Today I've once again been reminded that different people have all sorts of different views of how they want their shell history to work. I am firmly in the camp of isolated, per-shell history; if I cursor up, I want to reliably get the previous commands I entered in that specific window, the ones I may be able to see right there.
(Am I a sysadmin who works across multiple contexts with interruptions and all sorts of terminal windows and shells on the go at once? Yes, yes I am.)
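As a concrete illustration, here's a minimal bash sketch of the isolated, per-shell setup described above (my own assumption of one way to configure it; zsh and other shells need different knobs):

```shell
# In ~/.bashrc: keep history purely in this one shell's memory.
unset HISTFILE        # never read or write a history file on disk
HISTSIZE=5000         # plenty of in-memory history for this shell
shopt -u histappend   # bash: definitely don't merge into a shared file
```

With this, cursor-up only ever walks the commands typed in that specific window, and history dies with the shell.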
In re JSON causing problems, I would rather deal with JSON than yet another bespoke 'simpler' format. I have plenty of tools that can deal with JSON in generally straightforward ways and approximately none that work on your specific new simpler format. Awk may let me build a tool, depending on what your format is, and Python definitely will, but I don't want to.
@lanodan The latest case for me was SLURM running a job; SLURM propagates the initial $PWD, but the job was a script that changed directories before running the Go build process.
@irenes@glyph@mcc@b0rk The kernel tracks the current working directory as an internal object reference, but Linux usually has a name for it. Shells often separately track a name for it (often visible as $PWD), and sometimes perform operations like 'cd ..' by manipulating the name. So if /u/cks is a symbolic link to /h/281/cks and you do 'cd /u/cks', such a shell may implement 'cd ..' so you wind up in /u, instead of /h/281. For many people this better matches their expectations.
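A quick shell illustration of the logical versus physical views (recreating the /u/cks symlink layout from above under a scratch directory; paths are hypothetical):

```shell
#!/bin/sh
# Recreate the layout: u/cks is a symlink to h/281/cks under a scratch dir.
base=$(mktemp -d)
mkdir -p "$base/h/281/cks" "$base/u"
ln -s "$base/h/281/cks" "$base/u/cks"

cd "$base/u/cks"
pwd       # logical name the shell tracks:  $base/u/cks
pwd -P    # physical path the kernel sees:  $base/h/281/cks

cd ..     # logical 'cd ..' trims the tracked name...
pwd       # ...so we end up in $base/u, not $base/h/281
```

POSIX shells do the logical thing by default; 'cd -P ..' (or 'set -P' in some shells) follows the kernel's physical view instead.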