Me, still an innocent: "Surely paying people must be a known thing, how many professor FTEs do we have?" Admin person: snaps open a large multi-page PDF with the circles and arrows (tm Arlo Guthrie).
Me, an innocent: "So, how many professors are there in our university department?" Admin person with a thousand-yard stare: "Well, it depends on what you mean by 'professor', 'in', and 'department'." <unfolds large and complicated chart>
One thing I heard a long time ago and that has stuck with me ever since is that when you make a process difficult to get through, the people who make it to the end are the ones who are most passionate and motivated, and that passion and motivation is not necessarily for good. I originally heard this applied to bug reporting systems, but colour me unsurprised that it applies to email anti-spam measures too, where the spammers have the motivation and energy.
I am apparently going to need a Linux 'secret(s) provider' (something that handles org.freedesktop.secrets). Gnome-keyring is not what I want; in fact I would like a 'secret provider' that stores secrets in memory only and throws them away almost immediately. Most options are heavyweight, but there's pass-secrets¹, which uses pass², except that pass wants to use gpg, and this is my face. Life is too short to wrestle with the greased gpg pig.
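(As a point of reference, here's a minimal sketch of what programs do when they talk to whatever winds up providing org.freedesktop.secrets; this uses the Python 'keyring' library, which speaks the Secret Service D-Bus API on Linux. The service and account names here are made-up placeholders.)

    # Sketch: how a program typically consumes the Secret Service API, via
    # the Python 'keyring' library (which uses org.freedesktop.secrets on
    # Linux). Whatever daemon provides the service answers these calls.
    import keyring

    # Store a secret; it lands in the current secrets provider
    # (gnome-keyring, pass-secrets, or whatever else is running).
    keyring.set_password("imap.example.org", "cks", "hunter2")

    # Fetch it back later; returns None if the provider doesn't have it.
    print(keyring.get_password("imap.example.org", "cks"))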
This is my face as I'm trying to find an X program that will show me some text in a given XFT font family and size, so I can see just how big it is. xterm will do fine for monospaced fonts, but if I want to see what 'Sans-11' actually looks like, apparently this is a bit non-obvious.
(Am I going to have to write this myself in Tcl/TK or Python/TK?)
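(For what it's worth, the Python/Tk version looks like it would only be a few lines. A sketch, with the caveat that Tk's font name matching may not agree exactly with fontconfig's; the sample text and defaults are my own choices.)

    # fontpreview.py: show sample text in a given font family and size so
    # you can see how big it actually is. Usage: fontpreview.py Sans 11
    import sys
    import tkinter as tk

    family = sys.argv[1] if len(sys.argv) > 1 else "Sans"
    size = int(sys.argv[2]) if len(sys.argv) > 2 else 11

    root = tk.Tk()
    root.title("%s %d" % (family, size))
    tk.Label(root, text="The quick brown fox jumps over the lazy dog",
             font=(family, size)).pack(padx=20, pady=20)
    root.mainloop()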
@mhoye I think all security warnings are an extremely hard problem, because they're almost always false positives (most people aren't getting attacked, thank goodness). It's really hard to be sure something's not a false positive, so you throw up the alert anyway, but then people get alert fatigue, etc etc.
(We sort of went through the same thing with browser HTTPS warnings until browsers made it really, really hard to get past them and everyone accepted that sites shouldn't screw up certs.)
@b0rk I ticked off 'the shell' in my set of answers because of technical knowledge: a shell with readline line editing handles Ctrl+C itself while you're editing a command line. Many shells/readline environments react to Ctrl+C by 'interrupting' the command line you're editing and giving you a new top-level prompt (which is handy if, eg, you're partway through a multi-line 'for' or 'while' and change your mind; you can Ctrl+C to throw the whole thing away).
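(A toy illustration of the pattern, in Python rather than any real shell's code: the prompt loop catches the interrupt itself, throws away the partially-entered line, and offers a fresh prompt.)

    # Toy prompt loop that handles Ctrl+C the way shells with line editing
    # do: abandon the line being edited and give a fresh prompt, instead of
    # letting the interrupt kill the program.
    while True:
        try:
            line = input("demo> ")
        except KeyboardInterrupt:
            print("^C")   # throw away the partial line, prompt again
            continue
        except EOFError:
            break         # Ctrl+D on an empty line exits
        print("you entered:", line)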
@atoponce I look forward with dread to discovering all of the little quirks of GNU coreutils behavior that are not actually duplicated in uutils. ('100% compatibility' is nice in theory but I don't believe it's going to be achieved in the first major release, not even by 26.04 LTS, and I don't think Canonical will care.)
@mos_8502 @NanoRaptor My university did select SGI over Sun in the mid 90s for a servers + basic colour workstations RFP (that's how I wound up using an Indy for a while), but I don't know how important the workstation price was, and there turned out to be extra factors (... which is a story in itself, about how salespeople will sometimes lie a *lot* and how useless contracts are in practice).
@gray17 @wollman @b0rk I think there may be a collection of reasons on Unix's original very small machines (see the sketch after this list):
* general context switch overhead between the kernel and user processes.
* user programs could get line-based input, so they only had to wake up infrequently rather than on every character, for even lower overhead.
* there were no shared libraries, so basic line editing took less overall code space as one copy in the kernel than as a copy in every application.
* the kernel's a central point, so everyone did it the same way.
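(A small demonstration of the kernel's side of this, assuming a Unix terminal: in canonical 'cooked' mode the tty line discipline does the editing and read() only returns whole lines, while in cbreak mode the process wakes up on every keystroke and would have to do its own line editing.)

    # Demo: canonical (cooked) tty mode versus cbreak mode.
    # Run on a terminal; error handling omitted for brevity.
    import sys, termios, tty

    fd = sys.stdin.fileno()

    # Cooked mode (the default): the kernel handles backspace and we sleep
    # until a whole line is ready.
    print("cooked mode: type a line (backspace works, Enter wakes us):")
    print("got:", repr(sys.stdin.readline()))

    # cbreak mode: each keystroke is delivered immediately, so the process
    # wakes for every character (and gets no line editing from the kernel).
    print("cbreak mode: type 5 characters (delivered one at a time):")
    old = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)
        for _ in range(5):
            print("woke up for:", repr(sys.stdin.read(1)))
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)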
@mos_8502 I wonder how much of this behavior in GTK and Qt came from them being developed at a time when the general attitude was 'if you need things on screen to be bigger, get a bigger display' (at least in the Windows world, I think). My somewhat vague memory is that we spent a long time with (maximum) achievable CRT resolution basically fixed, little to no GUI scaling for various reasons (eg bitmaps everywhere), and varying-sized CRTs.
Thesis: most desktop GUIs are not opinionated about how you interact with things, and this is why there are so many GUI toolkits and they make so little difference to programs, and also why the browser is a perfectly good cross-platform GUI (and why cross-platform GUIs work in general).
Some GUIs are quite opinionated (eg Plan 9's Acme), but most are basically the same. Which isn't necessarily a bad thing, but it does create a sameness.
(Custom GUIs are good for frequent users, bad for occasional ones.)
@drscriptt I want my uncommitted change to be added to an existing local commit (which in git rebase terms is a fixup of a prior commit). In theory git-absorb is good for this; in practice it didn't work for me.
@jmc Yeah, that's the harder manual magic for doing it all by hand. git-absorb (in theory) automates basically the 'git commit --fixup=<find the right commit>' bit and will run the git rebase for you.
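(Concretely, the manual sequence that git-absorb is automating looks something like this; 'abc1234' is a placeholder for whatever prior local commit your change belongs to.)

    # The manual version of what git-absorb automates:
    git add -u                           # stage the uncommitted change
    git commit --fixup=abc1234           # mark it as a fixup of that commit
    git rebase -i --autosquash abc1234^  # the rebase folds the fixup in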
@mos_8502 My view is that the modern web is a marvel that is ruined by most of the uses of it. It's a relatively universal display system and application environment that provides massive power and ease of use (along with relative privacy compared to the alternatives).
Eg, I may snark on Grafana the company but Grafana the dashboard system is a cross-platform marvel that wouldn't have existed before the modern web. And at work we deliver forms via the web that would be (slow) email otherwise.
@mos_8502 As a minority platform person (Unix/X), cross-platform software means that I get it at all. If Grafana was a native program, it would probably exist only on Windows, with Mac a distant second. Or it would cost a lot as Unix 'enterprise' stuff.
(I've been around my work long enough that I saw our paper account request forms, although I never had to process any. That too is 'cross platform' in a very basic sense of 'forms that can be filled out online on any computer of your choice'.)
If you're a government considering whether it's worth ripping up IP rules, I think two of your questions are how long it will be before the US comes back with another trade deal you want, and what the US will do to retaliate for you ripping up IP rules. (There will be retaliation. Look at the US today and tell me there won't be retaliation for everything.)
That's why I think the US has to be basically gone for good before this is realistic.
If you're a company in a country that (hypothetically) has ripped up IP rules with the US and you're thinking of taking advantage of that, the question is: how long before the IP rules come back? If it's only a few years, this probably leaves you up the creek. You need them gone long enough for you to be solidly established with real power to stop them coming back, or to no longer need the lack of IP rules.
Given that the US is ripping up trade deals, it's nice from a certain perspective to imagine other countries also ripping up the onerous IP rules that are part of them. But this is probably not likely right away. Other countries (and potential industries) have to play a long game, where it's only worth ripping up those laws if a normal US administration isn't going to come back in N years and the US is basically gone from trade/etc for a generation+.