I'm wondering which Linux distributions would be most secure, or least affected, in a massive cyber-war. If there were an ongoing cyber-war targeting banks and other critical infrastructure, so much so that people were left with no money and possibly unable to heat their homes for months, which Linux distribution would survive best under those circumstances? I've been using Ubuntu, but if Canonical has to cut its workforce substantially, they might not push out the fastest security patches and updates, which would be critical in that kind of environment. I was thinking Arch Linux might be better suited, because it's more community-driven and rolling-release.
I want to know because if large websites go down, then we (Server Admins) will need to be there to supply critical cyber infrastructure to those who can still get on the Internet. Lines of communication are always key in wars.
> what Linux distribution would survive the best under those circumstances?
It's style of operation, not distro. Principle of Least Privilege: take advantage of user- and process-segmentation to make sure that programs can't exceed their station, don't give anyone access to the box unless they need it, and don't give them more access than they need. Good monitoring, so you can see when something bad happens. Relevant alerts: either it's important or you shouldn't be alerted. It doesn't hurt to know a bit of numerical analysis (rolling averages and standard deviation).

Don't be a bigger target than you have to be: don't keep data you don't need. More moving parts means a bigger surface, which means more holes, so have as few holes as possible by installing as little as you can. Figure out the threat model, figure out what you need, gut everything else without mercy (it's a server, not a dev box or a desktop machine), and then make sure you understand everything you have left on the box. What's doing disk I/O in the middle of the night? You should know if something is, you should know what triggers it, and you should know what it means if it's doing disk I/O at that hour. nmap your own box to see exactly what's open and what people can see from the outside.
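The rolling-average-and-standard-deviation point is simple enough to sketch. Here's a minimal, made-up example of flagging a metric (say, nightly disk IOPS) that wanders too far from its rolling baseline; the window size, threshold, and sample numbers are all invented for illustration:

```python
from collections import deque
from math import sqrt

def make_anomaly_detector(window=60, threshold=3.0):
    """Flag a sample as anomalous if it sits more than `threshold`
    standard deviations away from the rolling mean of the last
    `window` samples."""
    samples = deque(maxlen=window)

    def check(value):
        anomalous = False
        if len(samples) >= 10:  # need some history before judging
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            std = sqrt(var)
            if std > 0 and abs(value - mean) > threshold * std:
                anomalous = True
        samples.append(value)  # append after checking, so the spike
        return anomalous       # doesn't poison its own baseline

    return check

check = make_anomaly_detector(window=30, threshold=3.0)
# steady baseline of ~10 IOPS, then a midnight spike
readings = [10, 11, 9, 10, 12, 10, 9, 11, 10, 10, 11, 9, 500]
flags = [check(r) for r in readings]
```

In a real setup you'd feed this from whatever your metrics collector emits and wire the `True` case into your alerting, but the arithmetic is the whole trick.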
So, "what distro?" is the wrong question. Whatever distro fits that model is the right distro, but no distro is going to do your thinking for you, and it's never going to be great out of the box unless you roll your own box.
That having been said, I'd avoid Ubuntu/Debian/etc., though a lot of sysadmins like them: they ship without things I need (strace, iotop, iftop, a lot of network diagnostic tools) and then ship a bunch of things I don't need or want (which are potential holes at best and liabilities at worst). Ubuntu specifically doesn't give you a lot of flexibility in what actually gets installed, so you have to spend more time gutting bullshit. If you are drawing a big corporate salary to run a farm of boxes, maybe you can afford the time to analyze all the packages and bash out ansible scripts; I don't work as a sysadmin, so I just go with whatever doesn't do anything I don't expect. FSE runs on Slackware and CRUX (but will run on Plan 9 before it turns five). I hear very good things about OpenBSD, and Theo's cool, but I have not used his operating system.
> I want to know because if large websites go down, then we (Server Admins) will need to be there to supply critical cyber infrastructure to those who can still get on the Internet.
I don't know how likely that is to happen, but if Secret Hackers hit Amazon, that's not just a lot of big sites, it's also most mobile apps and a big chunk of fedi is on EC2. hackedbychinese.gif
@p @tyler @Lance @gabriel @matty @parker @graf @Aldis @Big_Diggity A lot of admins here don't care for containers (understandable: they're complex, and complexity often invites security issues), but there's a reason they're getting so popular. Podman can run containers rootless, in user space, with more of an emphasis on security than Docker. It can be set to run containers on startup, and the Linux capabilities (CAP_*) of a given container can be tweaked or dropped entirely as needed. The book Podman in Action is a good intro to how it works.
So if you want a reasonably "secure system" with some measure of defense in depth, you might consider a tiny OS whose only purpose is to run containers, with a proxy like Nginx as the frontend forwarding requests by hostname to their respective container ports. The downside is the hassle of configuring it all... better take good notes when setting things up.
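The "forward requests by hostname to their respective container ports" part might look something like this Nginx sketch; the hostnames and ports here are invented for illustration, not from anyone's real setup:

```nginx
# Each backend is a container publishing its port on loopback only,
# so nothing but the proxy is reachable from outside.
server {
    listen 80;
    server_name git.example.org;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name wiki.example.org;
    location / {
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header Host $host;
    }
}
```

On the container side, each service would be published only on loopback, e.g. something like `podman run -p 127.0.0.1:8081:80 …` with whatever `--cap-drop`/`--cap-add` set the workload actually needs.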
"If it works on your machine, you can just send people your machine. Let's give up on reliable builds. The OS is so balky and the libraries are so fragile and nothing is self-contained, so we may as well put another OS in the OS. At least the kernel's stable." Tack on a ridiculous hype train and that's the reason people are spinning up EC2 instances (a container that Amazon provides in the form of a VM) and then using them to run cgroups-based containers, 99% of the use-case being equivalent to a chroot but with a routing table and a bunch of unreproducible blobs (often of unknown provenance), hardly ever useful and almost never necessary given that process- and user-isolation have been present in Unix since almost the beginning. If I keep going, I will end up pissing everyone off, so I won't. If you are spinning up single-purpose VMs, you don't need containers: the VM already is the container.
:ken: "We have persistent objects. They're called 'files'." :kenbw:
Anyway, I hadn't heard of Podman, but checking out their repo required 244MB of space, it was developed at and is owned by Red Hat, and podman.io advertises a coloring book. The last item in that list gives a strong hint about who this software is designed for. what_the_fuck_is_this_bullshit.png
I remember Vagrant trying and failing to get traction anywhere besides cut-rate code camps. Then along came Docker, and it's the same shit. Docker (and the Docker-alikes) just look to me like someone fluoridated LXC. It's designed for startup feature factories where maintenance is not even on the priority list and you are MOVEFASTBREAKTHINGS DISRUPTING THE HOCKEYSTICK KPIs and you just pray it doesn't break. "Let's add another entire OS's worth of moving parts to the OS." God *damn*. "Let's add a series of container-managers to contain the containers!" It's strictly worse than shipping around zip files: the problems you get just shipping around zip files are a strict subset of the problems of containers-in-containers. People don't want to use iptables to do a firewall, so they build an internal goddamn LAN inside a computer and then...they have to route the traffic to containers. The hardest part of programming is debugging, and this is shit that makes debugging harder. You want your shit to only run on Ubuntu? SEND A SUBSET OF UBUNTU IN A 2GB DISK IMAGE FILE. NO, ACTUALLY, LET'S JUST USE 20 OVERLAYS! I HAVE NO PROBLEM DOWNLOADING A BLOB FULL OF BINARIES PUBLISHED BY UBER AND MICROSOFT AND SOME RANDOM GUY ON GITHUB AND ALSO SOME OF OUR COMPETITORS. OH, IT INTEGRATES WITH VSCODE? WONDERFUL ken-yshl.jpg
work with containers daily they are fine if you have kontrol of full supply and build chain. most do not nor do they understand how it works. but even then no true reproducibility. work with this stuff daily and conduct sec ops for hyperscalar clusters (think 200+ node multi-regjon k8s and nomad clusters). all ov it horribly complex.
but yes they are mainly for feature factory shipit™ companies
mitch rolled a turd with vagrant. will not touch on that.
vms in general can be subject to same supply chain vectors unless you have a way of ensuring upstream + downstream chains are in your custody (not realistic) and you have cluepon.
this is why nixos is useful for me. it makes controlling supply and build chains nicer, i can achieve reproducibility up to ~98% every time, and everything can be audited end to end via cryptography. if i hand you a nix flake (build manifest) to build a vm, the sha and outputs will be identical on my machine and yours. but the problem with nix is it invalidates all modern tooling for orchestration and configuration management. as well, it breaks the lsb-fhs convention, but the tradeoff is immutability and path isolation, which has some benefits. adoption in production environments is unlikely, though, due to the cost of tearing down the abyss of shvt container systems.
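For anyone who hasn't seen one, a nix flake is roughly this shape; a minimal sketch, where the package name, pinned branch, and build steps are all illustrative rather than from the post:

```nix
{
  description = "Pinned inputs: same flake, same output hash, any machine";

  # Pin nixpkgs to a specific branch; `nix flake lock` records the exact rev.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      packages.x86_64-linux.default = pkgs.stdenv.mkDerivation {
        pname = "hello-server";
        version = "0.1";
        src = self;
        installPhase = "mkdir -p $out/bin && cp server.sh $out/bin/";
      };
    };
}
```

`nix build` then produces a /nix/store path whose hash is a function of every input, which is where the "identical on my machine and yours" claim comes from.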
> work with this stuff daily and conduct sec ops for hyperscalar clusters (think 200+ node multi-regjon k8s and nomad clusters). all ov it horribly complex.
I think you have to have a dedicated guy. If you have a dedicated guy, it's not as much of a mess, but it also doesn't strike me as all that useful.
> vms in general can be subject to same supply chain vectors unless you have a way of ensuring upstream + downstream chains are in your custody (not realistic) and you have cluepon.
Yeah, but you have that problem with any OS. At least you don't have that problem twice if you're not downloading containers from Docker.
> nixos
Reproducibility is nice; I don't like how they did it.
See, when I said I’d piss everyone off, I figured that would do it. Welcome to hellthread! :helllife:
never mess up with me, reverso. you’re wrong. i’m smiling. this is the most interesting thread in ~6.mo. besides i needed break from trying to get my fbi agent to send me nudes
> I think you have to have a dedicated guy. If you have a dedicated guy, it's not as much of a mess, but it also doesn't strike me as all that useful.
dedicated operator is necessary for this shvt. if it were easy to manage mk would be running faang-corps. it’s only useful in the sense that many corps don’t understand the concept of simplicity and distributed computing. mostly they shovel container bodies onto the burnpile and yolo-deploy all day long.
the schedulers (k8s/nomad/etc) fundamentally are simple in design. it’s when you layer on abstraction after abstraction of lo-code/no-code dogshvt the problem becomes complex because nobody can troubleshoot 9 layers of helltrash.
> Yeah, but you have that problem with any OS. At least you don't have that problem twice if you're not downloading containers from Docker.
you do have a point there. the weirdest thing i've seen is:
metal-host(insert os here) --> vm(insert vm host os here) --> docker(insert container os artifacts here) --> app stack --> hello_world
maddening shvt all ov it.
> Reproducibility is nice; I don't like how they did it.
nix has lots of problems, maybe i will sideload a chat with you as to what you don’t like as not to start the fist_shake.
thanks for good thread, komrade. :cupofcoffee: time!
> I'm not pissed off, I'm actually kind of ashamed that the thing I recommend has a coloring book.
nothing wrong with that. we can download colouring book and make ad-hoc rorschach tests
> Red Hat really is full of faggots.
faegots are everywhere, it's irrelevant; redhat's light has dimmed for years. it seems their focus is corporate morons who won't take the time to learn the fundamentals of komputing
You are "minimizing attack vectors" by using a service like Tor, without knowing how many exit nodes are compromised, and you're decreasing accessibility in the process. Your bandwidth over Tor is going to be ass: it already takes a long time to load a single website over Tor, now add videos on top. I understand the desire, but at some point you're getting diminishing returns on anonymity versus practicality.