It's been more than 15 years and the language still doesn't have a stable standard library with a spec set in stone, still has no ABI for other languages, still requires experimental builds to be functional in a kernel, and yet Rust-for-Linux devs will still complain that nobody takes them seriously.
Of course not. C++ was more standardized and functional when it was introduced to Linux for a short period of time. Rust will eventually end up the same way. It's too hard for hardware-level programming, because the language purposefully makes it harder in the name of memory safety, which is only half-true anyway.
@mia Language-specific package managers are another can of worms. There are only 3 languages in existence that didn't make me hate them after dealing with dependencies: Erlang, Elixir and Go.
With that said, Go has almost zero backwards compatibility with older versions, because devs almost always insist on using the "new and hottest" features from the latest release, and it insists on statically linking its libraries for some reason.
@phnt as a matter of principle i don’t take any language seriously that solves the lack of API/ABI discipline by inventing its own package management with version pinning
@phnt@mia Interestingly I kind of hate those three, although Erlang and Elixir libs can somewhat be packaged (with pain; Erlang rollbacks typically mean an ABI break).
Go… as long as the particular application dev uses it like C (a handful of cherry-picked dependencies) it's okay, otherwise to /dev/mordor it goes. Interestingly it's the same story with npm, but barely anyone cherry-picks dependencies in that ecosystem, hence why it gets vulns all day.
@lucy@phnt yes because if you try to relocate the entire dependency tree you end up tracking like 40 repositories before your project does its first Hello World
For me it's system packages and PYTHONPATH. No pip, never, otherwise you'll get bullshit binaries, sometimes with no source code at all.
> Perl
I think that one might be the best of the scripting languages. At least Perl programs are typically C-like in the number of dependencies, and the packaging metadata has been JSON (so no code to execute) for so long that your distro probably has a good package generator which can fetch from CPAN in case of a missing dependency. It also means using the cpan command itself to fetch dependencies is safe (tarballs aren't even unpacked), unlike with pip.
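To illustrate the declarative-metadata point, here's a rough sketch (in Python, since it's really just JSON parsing) of what a package generator effectively does; it assumes you already have a distribution's META.json extracted locally, and the key layout follows the CPAN::Meta v2 spec:

```python
import json

# META.json in a CPAN distribution is plain declarative data, so a dist's
# dependencies can be enumerated without executing any of its code.
# "META.json" here is assumed to be a file already pulled out of the tarball.
with open("META.json") as f:
    meta = json.load(f)

runtime_deps = meta.get("prereqs", {}).get("runtime", {}).get("requires", {})
for module, minimum_version in sorted(runtime_deps.items()):
    print(f"{module} >= {minimum_version}")
```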
@lanodan@mia I chose those three because the options are basically Python, JS, Java, Kotlin and Rust as the main ones, plus Perl and Ruby which I use very rarely, and I don't like any of them because of their package management.
There isn't really good competition in the "languages with package managers" space; Go just happens to be the least annoying for me, and Erlang/Elixir have always "just worked".
Python is also fine as long as you only use the standard library. The moment you start adding other dependencies you are in venv hell.
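For anyone who hasn't hit it yet, the dance looks roughly like this the moment a single third-party dependency shows up (a minimal sketch; `.venv` and `requests` are just placeholder names):

```python
import subprocess
import venv

# Create an isolated environment with the standard-library venv module so the
# system site-packages stay untouched, then install the one dependency into it.
venv.create(".venv", with_pip=True)
subprocess.run([".venv/bin/pip", "install", "requests"], check=True)
```

And from then on every script that needs that dependency has to run with .venv/bin/python instead of the system interpreter.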
either your distro has its shit together or it still deploys releases like it’s 2002
i know that narrows it down to like one or two but i’m being serious here: rhel, sles, centos, debian etc. have absolutely no place in the modern linux ecosystem. they are too slow and create too many problems for upstream, for packagers, and for end users.
@mia@lanodan The issue I mostly run into is when I want to run something newer on older, more server-focused distributions (the Invidious IP switcher comes to mind, and that isn't work-related). Half of the time the Python dependencies aren't packaged, and in the case of rhel8 even pipx isn't.
At this point I have custom packages for pipx and some of the more commonly used Python packages, because it's unmaintainable to do it any other way.
@phnt@lanodan the only python projects where i use venv are stuff related to machine learning (because that's all terrible, impossible to package, and you have to pick hardware-specific versions too)
for everything else i just use (or create and submit if necessary) distro packages. granted, i’m on a distro where these are usually up-to-date, but breakage is extremely rare
@mia@phnt For me Debian/CentOS/… are purely to satisfy proprietary Enterprise software and awful packaging tools which can't automate rebuilds on ABI breaks.
@lanodan@mia CentOS Stream is a joke nobody sane uses in a production environment and RHEL is there just for the support contracts on critical production machines.
@phnt@mia Yeah, I'd expect most to move to Docker (or compatible) in the next few years, especially because they're likely already using it due to modern languages having heavy churn. I expect it to be even more of a disaster for them in terms of security.
@lanodan@mia Those don't exist anymore, RH killed them.
You have Rocky, which has no future as it pulls from sources that can break at any time and insists on 100% bug-for-bug compatibility. And you have Alma, which has diverged from RHEL, pulls from the same sources as Rocky while staying independent of it, and only guarantees ABI compatibility.
The whole ecosystem is basically on life support and I expect it to die in the next few years.
@phnt@mia I mean more of a disaster than a freezing distro where you at least get some security fixes applied. Docker and the like as distribution methods are designed to be disasters. You want *developers* to manage security fixes? Are you mad? They casually stick to 10+ year old releases of libs whenever they do any vendoring.
@lanodan@mia It already is a disaster when it comes to security. Not enough people have realized it yet though.
Almost everybody leans on the side of "it's secure because containers are compartmentalized", but almost nobody has realized that the bundled dependencies in images can also be a problem. Almost nobody checks their Docker images for updates frequently enough, and there's no easy way to just apt upgrade vuln-lib and be sure it's patched.
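As a rough illustration of the bundled-deps problem (a sketch only, and it only covers Python packages, not the C libraries baked into the base layers): run something like this inside the container and you get the frozen dependency list nobody usually looks at:

```python
from importlib.metadata import distributions

# Dump every installed Python distribution with its pinned version. Comparing
# this against advisories is manual work; there's no apt-style "upgrade just
# this one library" for layers somebody else built.
for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
    print(f'{dist.metadata["Name"]}=={dist.version}')
```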
@lispi314@phnt@mia The point of PYTHONPATH is to point at things like dev repos and have them not mess with the rest of your stuff (and yes, also to avoid the versions problem, since dev can mean needing to test multiple versions; there's a sketch of why that works below). I do anything slightly permanent / global via packages (which is yet another reason why I avoid Debian).
Yes, you'll have to symlink/hardlink configuration for multiple versions, because someone thought it was a good idea to make it version-dependent without any other option.
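For anyone wondering why that works at all: entries from PYTHONPATH land in sys.path ahead of the installation-dependent defaults (stdlib, site-packages), so a dev checkout shadows the distro-packaged copy without installing anything. A tiny sketch (the paths are made up for illustration):

```python
import os
import sys

# Run as: PYTHONPATH=~/src/mylib python3 showpath.py
# The PYTHONPATH entry is inserted before site-packages, so ~/src/mylib
# wins over whatever version the distro package provides.
print("PYTHONPATH =", os.environ.get("PYTHONPATH", "(unset)"))
for entry in sys.path:
    print("  ", entry or "(current directory)")
```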
@lanodan@phnt docker is just glorified chroot jails with 10 layers of syscall-filter duct tape that complicates things and absolutely wrecks i/o performance, with overhead several orders of magnitude worse than a kvm guest (nobody talks about that of course)
@mia@phnt Also it fucks with your firewall config (as if that wouldn't be annoying enough), which is why for me gitlab-runner is on separate machines/VMs.
Or any distro which isn't Debian, really. I've done that fine on Gentoo/Alpine/openSUSE/SailfishOS/Arch/… Just one where you can casually write a good-enough package, and that's the vast majority of them.
@lanodan@mia@phnt Ah, I've used usersite to handle dependencies I want in some qubes but not in the template (which removes the global installation option).
@lispi314@phnt@mia > If your setup requires you not to trust the user, you shouldn't give the user access to things to start with.
Which is exactly what least privilege is about: you give specific permissions to system users (partly via groups; firewalls can also make use of user separation — see the sketch at the end of this post). Also, I think you're thinking about human users, which is a very different kind of concern; personally only very few people could ever get shell access to my machines, even a "restricted" kind.
> Programs should be limited by capabilities
Linux doesn't have proper capabilities, well, except the ones that nearly made it into POSIX and are so deeply flawed it's not even funny, as about half of them trivially allow gaining root privileges.
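To make the system-users point concrete, the classic pattern is: grab the privileged resource, then drop to a dedicated account before doing real work. A minimal Python sketch (the "svc-worker" user is invented for the example; on a real system you'd create it as a nologin system user):

```python
import os
import pwd
import socket

# Bind the privileged port while still root (or with CAP_NET_BIND_SERVICE),
# then drop to the unprivileged service account for everything that follows.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 80))
sock.listen()

pw = pwd.getpwnam("svc-worker")
os.setgroups([])        # clear supplementary groups while we still can
os.setgid(pw.pw_gid)    # group first: after setuid() we'd no longer be allowed to
os.setuid(pw.pw_uid)
# From here on the process only has whatever svc-worker is allowed to do.
```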
@lanodan@mia@phnt If your setup requires you not to trust the user, you shouldn't give the user access to things to start with.
They should be assumed to have control over their endpoints (and truly should have it too).
Possession of the hardware breaks essentially all the security guarantees you might otherwise have, anyway.
(Yes, multi-user systems are fundamentally problematic as far as security goes. Hardware vulnerabilities mean no amount of formal proofing & verification of the system suffices.)
Programs should be limited by capabilities (so should their addressing: they should have no access to raw memory), and users should be able to grant them as necessary. Because the hardware vulnerability problem still exists, this whitelist approach /still/ means the user has to make sure the programs they use are not malicious, because otherwise all the other security properties of the system may be defeated by the first convenient hardware vulnerability to be found & exploited (yes, this is antithetical to black boxes, proprietary or otherwise).
@lispi314@phnt@mia Sure it's flawed, but it's the current state of things. Especially as Linux also severely screwed up namespaces (I've accidentally escaped them so many times it's not even funny), so you can't actually have something more precise.
And essentially nobody does third-party software exclusively for non-Linux systems such as the BSDs; the only exception would maybe be embedded dev, and that's an entirely different field from things like servers.
@lanodan@mia@phnt > Also I think you're thinking about human users, very different kind of concern, personally only very few people could ever get shell access to my machines, even a "restricted" kind.
By user I meant a self-aware entity using a computer system.
I did not mean the flawed abstraction that is presented by abstracted multiuser systems.
That should just be done away with, since abstracted multiuser systems are designed for the case of multiple self-aware users (with all the security tradeoffs inherent in this).
> Linux doesn't have proper capabilities, well except the ones that nearly made it into POSIX and are so deeply flawed it's not even funny as like half of them trivially allow to gain root privileges.
@lispi314@lanodan@phnt it truly is. opensuse ships somewhat hardened systemd units (namespace isolation, syscall restrictions and so on and so forth) plus apparmor by default, and it has a yast module for some rudimentary privilege tweaking/hardening. it can also work as a transactional system where every change is done to a new copy-on-write snapshot of the filesystem.
but the way i see it all this crap is no better than windows users and their real-time antiviruses in that these are measures to limit the damage that can be done to a system that simply lacks secure user-space APIs and on the kernel level has never seriously been designed to provide a secure environment (which would make it incompatible with UNIX/POSIX)
@phnt@mia@lispi314 Yeah seccomp had to be written out of pure spite by a massive sadist.
But from trying it a bit, landlock seems pretty good and somewhat comparable to pledge/unveil, and I think close enough even in design that the libraries implementing pledge/unveil via landlock calls make sense. Opinion might change over time though, and personally I wish more for OS-level security than program/process hardening, especially due to the nature of Unix where a lot of things are scripts and combinations of arbitrary programs.
@mia@lanodan@lispi314 MACs like AppArmor and SELinux are either essentially useless or extremely annoying. AppArmor does almost nothing when there's no policy for the service (which is most of the time, unless the distro packaged one), and SELinux is paranoid to the point of being a nuisance, and writing policies for it is annoying too.
As you said, it's just duct tape to make something insecure seem like something that is at least somewhat secure.
It also doesn't help that attempts like seccomp and landlock are too complicated compared to something simple and yet effective like pledge and unveil from OpenBSD.