me, trying to sleep: "surely nothing of interest is happening at this hour so I can try to sleep"
the safe c++ post about to be blasted into my DMs: "allow me to introduce myself"
@feld At least on Linux, an option going forward is systemd.socket, which can do the TCP socket listening for a service and pass the socket in when starting it. That eliminates the need for root: systemd (which is root) opens the socket, and the service never opens it, only handles it.
On BSD/Alpine there'd be the option of inetd with priv dropping so systemd isn't required.
Apple macOS has launchd for the same purpose.
Still, it seems at least the part that is being RCE'd is not running as root and can only write to /tmp as non-root, so it's "FINE" (load bearing quotes).
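A minimal sketch of that systemd.socket approach, assuming a hypothetical printerd binary and unit names (these are not the actual CUPS units): systemd, as root, binds the privileged port; the service starts unprivileged and receives the already-bound fd via sd_listen_fds(3) instead of calling bind() itself.

```ini
# /etc/systemd/system/printerd.socket (hypothetical unit name)
[Unit]
Description=Printer daemon socket

[Socket]
ListenStream=631

[Install]
WantedBy=sockets.target
```

```ini
# /etc/systemd/system/printerd.service
[Unit]
Description=Printer daemon

[Service]
ExecStart=/usr/bin/printerd
User=lp
```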
@feld low effort code execution will do that, even if it's gated behind "just the LPD user". Write access to /tmp as LPD in theory isn't bad, but what software checks whether its /tmp files were written by the correct user? You could in theory abuse that to smuggle some other stuff in. Deploying a remote terminal shouldn't be difficult either.
Running code on a system as a system user but not root is still a place of massive opportunity (I've seen more than my share of code that assumes any user ID with three digits or fewer is privileged and should be allowed to do things with other things).
@feld that said, any system still running parts of CUPS as root should undo that and make sure it doesn't do that. Shouldn't be any good reason to run CUPS as root these days.
@feld the component being affected is cups-browsed; on my NixOS setup that runs as the LPD user, and in the exploit PoC video recording it also ran as the LPD user when executing the exploit code.
looks like the new spooky 9.9 CVE is just a bug in CUPS that lets a remote attacker emulate the experience of installing an HP color printer driver on your linux system.
Granted, this is tragic, but also CUPS isn't exactly a pretty system to begin with, so not sure where to go from here.
also, as amendment, the gold standard for exploit PoC is to open calc on the user.
The exploit video on the disclosure shows that the extent of it is being able to write a file in /tmp as the LP/LPD user, with a 700 umask. It's not exactly frightening, other than the threat of a DoS. The LP user doesn't have that many permissions on the system.
It IS a code execution with a relatively low effort barrier, but it's still not getting anywhere without running CUPS as root.
Watching Dexter is just watching House MD but House isn't addicted to vicodin and the medicine turns out to be murder every time.
Despite this I can imagine the two having a hot and sweaty makeout session.
@feld RAIDZ comes with so many footnotes about what it does to a pool it's not even funny.
@feld Oh also superblock upgrades: despite ZFS usually being fairly careful about feature upgrades, I've had one cause an irreversible migration. In that case, an old Solaris host's zpool was moved to a Linux machine, and importing the pool read-only caused it to become unmountable by any installation of Solaris we had at the company.
@feld honestly, I'm not sure what the problem here is. The fact that ZFS is one of the better-behaved filesystems doesn't make it good. The bar is just very low among filesystems aiming for recoverability and resistance to malicious attacks.
The rest of ZFS being a mess of a design in terms of modern filesystem architecture is just part of it.
@feld just google for HBA/SC manuals? mdadm has a multipath flag for operating past controller failure in case the hardware driver can't do it on its own but can at least disconnect the HBA.
Heck, even Windows Server supports this, LBFO/MPIO under Storage Spaces can handle controller failure in a context of HyperV setups.
@feld No, that's entirely common and reasonable, and it's trivial to set up on Linux/Solaris. HBA/SC failover isn't black magic, but if it fails in the right way, the FS driver will crash the system because the FS assumes that the HBA replying is something meaningful, when it's not.
@feld oh I've had these problems on Solaris too, this isn't a Linux exclusive!
edit: To be fair, the fsync issue is a Linux exclusive; ZFS being badly written isn't a Linux exclusive, that's just the code base. I've had one Solaris machine deadlock without even any kind of console output after a disk controller on the storage fabric stopped working.
I wiped one of my older systems for reasons. I do love watching nwipe while it's deleting its own root FS, seeing some services begin to fail as they become unable to continue.
Root FS was ZFS, it managed 11 minutes before an error was reported after I issued blkdiscard to the entire disk and then started nwipe on it. After that it crashed on a kernel panic within a minute.
On the one hand, understandable. But for a filesystem touting its safety and stability, I don't think it should kernel panic that easily.
But that's honestly part of my experience. I've sysadmin'ed ZFS for 5 years; it's only stable for common failure modes. If a controller breaks or disks do fun stuff like "return all zeroes and discard writes", then ZFS will crash your computer just as badly as the other filesystems will.
Soapboxing a tiny bit, we should write modern filesystems in a way that we assume that a malicious actor is gonna be messing with our ability to IO with it. That also includes assuming "the device is discarding writes and returning zeroes without error". ZFS is great if you limit yourself to common disk failures (ie, where errors are reported or disconnects). If the controller is faulty or the disk behaves in non-error ways, good chance ZFS will trash the pool.
ext4 and btrfs mostly differ in that they take longer to notice things wrong or the corruption is more extensive without notice. ZFS just crashes faster.
Also this is the annual reminder that last time anyone did a survey on how good filesystems are at reporting up write errors, ZFS only qualified on reporting some errors and only common ones. Btrfs and ext4 both mostly swallowed write errors.
Part of that is infra: in the modern async "issue a write and OK it to the process before the device OKs it" world, the FS can't reliably report such things.
That's where we got the PostgreSQL "fsync considered unreliable" finding from: a bug that persists on Linux and can cause data loss with any DB setting other than "fsync every write" or O_DIRECT.
AMD and ARM (including Apple) keep winning on the CPU market by... checks notes ... Making CPUs that do not explode.
This post is sponsored by the latest Intel CPUs, which explode.
@confusomu They explode. A gamedev studio alleges a 100% failure rate over time; https://alderongames.com/intel-crashes
it should be a crime to post a big CVE announcement, give it a cool name and NOT give me a gadget to try it out with.
LEMME BREAK MY STUFFFFFFFF
"can you move a 32bit register into a 16 bit?"
"technically if the 32bit register has a value that fits within 16bits the answer is yes"
GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.
All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.