@danderson@valpackett@nogweii@bitprophet btw, we have flatpak working just fine ;) (as well as podman/containerd and various other stuff you can use to spin up third-party containers)
@danderson@valpackett@nogweii@bitprophet i'm a little curious about what myths there supposedly are in the FAQ, as it was written to provide an entirely neutral stance
@danderson@valpackett@nogweii@bitprophet it's not really the same point though; that part of the FAQ talks specifically about how the individual components are all tied into libbasic/libshared, which is a kitchen sink of all sorts of functionality and entirely interwoven (every part of it directly or indirectly includes half of the rest), which makes isolating individual programs super difficult (i spent roughly a month isolating https://github.com/chimera-linux/sd-tools and it was not a fun time)
@danderson@valpackett@nogweii@bitprophet additionally some of the components would really benefit from being properly buildable standalone; for instance we have to carry like half the musl patches for systemd just to build udev, alongside a large bunch of build system hacks, because it forces you to build systemd core no matter what (which we discard)
@lanodan i promise it will actually report the desktop properly if you launch it properly, as the sole thing (modern applications should launch fine too)
i will probably package the rest of it and push it to the user/ repo so people can have some fun (but of course, this is an ancient and very insecure codebase and i had to disable all toolchain hardening and more to get it to work... old C/C++ is "fun" and i don't entirely recommend running this as a regular thing)
@lanodan i couldn't be bothered to actually log out of my primary desktop, so i just startx'ed on another tty (and of course, elogind and dbus don't entirely like it :))
the switch introduces another problem, a build-time depcycle (xz -> gettext -> libxml2 -> xz), which is the primary reason we have not switched yet (pregenerated autotools files avoid it)
that said, there is no difference otherwise, since the malicious condition does *not* trigger even with the upstream release tarball
the positive part is that we are not affected (several compile-time preconditions for the backdoor to even get compiled in are not met, such as the gcc compiler, the gnu linker, ifunc support and the linux-gnu triple) and neither is our infrastructure (there are a couple of debian servers, still on xz 5.4)
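for the curious, here's roughly what a gnu ifunc looks like; a minimal made-up sketch (names are mine, this is not the actual xz payload), just to show the mechanism the backdoor leans on to get its code run during symbol resolution, and it needs gcc + glibc on a *-linux-gnu target, i.e. exactly what we don't ship:

```c
/* minimal sketch of a gnu ifunc (made-up names, not the xz payload);
 * needs gcc, the gnu linker and a *-linux-gnu target to even build */
#include <stdio.h>

static int len_generic(const char *s) { int n = 0; while (s[n]) n++; return n; }
static int len_optimized(const char *s) { int n = 0; while (s[n]) n++; return n; }

/* the resolver runs inside the dynamic loader before main() and decides
 * which implementation the my_len symbol binds to - that early hook is
 * the part the backdoor abuses */
static int (*resolve_my_len(void))(const char *)
{
    __builtin_cpu_init();   /* x86-only builtins, purely for illustration */
    return __builtin_cpu_supports("sse4.2") ? len_optimized : len_generic;
}

int my_len(const char *s) __attribute__((ifunc("resolve_my_len")));

int main(void)
{
    printf("%d\n", my_len("hello"));
    return 0;
}
```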
that said, everyone check their systems (whatever they are) and stay safe :)
@lanodan@Gottox@ariadne what something else? there *wasn't* anything else they could have realistically used at the time (to a degree there still isn't)
@lanodan@Gottox@ariadne the latter is the "traditional" way and it was bound to happen that as soon as there was something better somebody would jump on it
unfortunately we still have to live with the latter (gnome is nice enough that they still maintain the vast majority of the old paths even though they have no obligation to), though in a few months we'll be ready to switch to a better architecture
@lanodan@Gottox@ariadne even if that were the case it's not right to blame gnome here; their lives would be much easier with just systemd, as right now there are two process launch architectures maintained simultaneously (desktops consist of a lot of session-wide processes, and gnome can now either launch them with systemd, which means doing so cleanly, on-demand, in a supervised manner and with dependencies, or in a crummy "just launch everything and hope it does not crash" way)
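to make that concrete, under the systemd path a session-wide process is just a user unit, roughly like this (the unit name and binary are made up for illustration, not taken from gnome):

```ini
# hypothetical user unit, e.g. ~/.config/systemd/user/example-applet.service
[Unit]
Description=example session-wide service
# tie it to the graphical session so it starts and stops with it
After=graphical-session.target
PartOf=graphical-session.target

[Service]
ExecStart=/usr/bin/example-applet
# supervised: restart it instead of hoping it does not crash
Restart=on-failure

[Install]
WantedBy=graphical-session.target
```

the "traditional" way has no equivalent of any of this; the session just execs everything at startup and hopes for the best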
@ariadne@Gottox perhaps not blessed by RH for RHEL, but still developed by RH people and driven by fedora
i don't think it's reasonable to expect the developer of the service management system to drive your distro integration; for one, it's an entirely different job from actually developing the thing; for two, it's way too much to do at once
first there needs to be a real initiative and goals, then some initial work, *then* (or during) you come to @ska or whoever for help
@ariadne@Gottox that's not the point, the point is that it was an effort done *within fedora* so they were working on something specific and not just a service manager without any feedback from real world use cases
there is only so much you can think about and consider when you're only backed by theory
@ariadne@Gottox the other way to do it is for a distribution to actually adopt it and collaborate with the service manager upstream (and have somebody working on the surrounding tooling as well)
that's what we're doing with dinit, and obviously it takes time, but there is no other way
what happens when you just take the service manager and shove it in a distro without a second thought or proper integration: you get an artix, and that seriously sucks
@ariadne@Gottox fwiw the whole idea that creating a service manager is some kind of magical solution to service management problems is flawed; in reality it's only a part of it
sure, you need a solid base, but the integration work and tooling around it is another massive part, and makes up the majority of the actual UX
you can do it like systemd and provide all that stuff along with it (RH had it easy though, since they have an in-house OS), which takes away power from the distribution, or...