@domi Well, CVEs are supposed to be just for vulnerabilities, much like how tickets are for all kinds of bugs. Anyone going "It's a CVE, it's bad!" needs to be ignored, or even put in the village stocks, for the same reason we'd treat "Oh no, there's a lot of tickets, they *all* need to be closed" as completely braindead.
@lanodan @domi I think part of the problem with CVEs is that they serve two purposes without people realizing it.
Similar to how issue trackers ended up serving as both a todo list / task management database, and as a defect database, leading to the shitshow called stalebot.
In the case of CVEs, that'd be the difference between "this can definitely be exploited" and "this might be exploitable, so apply the bugfix just in case" - IMO both are needed, but for different purposes.
@wolf480pl @domi Except there *are* exploit databases: there's https://www.exploit-db.com/ and Metasploit modules. Which is why "It's a CVE, it's bad!" is from fucking clowns.
@lanodan@queer.hacktivis.me @domi@donotsta.re @wolf480pl@mstdn.io The real use for US-gov-backed public CVE databases is to scare business and project-management people into allocating resources for maintenance of critical, already-running systems. You bring them a fancy PDF report with a formatted list of 50 CVEs, 7 of which are "Critical" severity (make sure to color them red), links to the scary official-looking .gov website, and tell them it must be fixed now or they get to take responsibility for what happens if they say no. Make sure to CC the whole accessible chain of command, to keep records.
This necessary evil saves many organizations filled with idiots from getting hacked and leaking confidential medical (or otherwise) data for the stupidest of reasons. Especially when you discover, while going over their system patching CVEs one by one, that they have a publicly exposed database with a password of (concat organization-name "abc"). This "tactic" is used (and misused) in countless organizations and government agencies the world over.
@rozenglass @wolf480pl @domi Pretty much, yeah, which is why it needs to *not* be gatekept by exploits, as otherwise it would quickly go from safety to disaster recovery.
@lanodan@queer.hacktivis.me @domi@donotsta.re @wolf480pl@mstdn.io Running at zero published CVEs should be a really low bar to clear, unless the project has 1200 NPM dependencies, none of which care about backward compatibility, so patching them up implies a dozen breaking changes, and thus code changes, to stay compatible. Of course, most projects nowadays seem to be built out of 32 micro-services, each of which has 1000 NPM transitive dependencies, or 500 Pip packages for aRtIfIcIaL iNtElLiGeNcE, or at least 200 NuGet packages for "Enterprise Integration", plus 3 different types of databases, two queuing / message bus platforms (throw in an extra external "PaaS"), four different OSes, two Kubernetes cluster providers, and 50 Docker container images with 13 different bases...
CVE reports will not save them, nothing will save them. No system is safe.
Hand-writing plain assembly is safer at that point.
@lanodan @domi @rozenglass at my $dayjob, if I showed management such a list, they'd be like "this list is too long, we'll never be able to fix all those issues, and 90% of them are probably bullshit; try making a shorter, more relevant list"
But well, the stuff we do is fairly unimportant. If it fails, nobody dies, just some people won't see ads...
@rozenglass @wolf480pl @domi I don't think it's that low a bar, but it should be doable; if it's not, well… architecture/engineering problem right there. Or just a pure economic problem of "other companies cut costs by getting away with ignoring security, why not us too?", which is why regulations like the Cyber Resilience Act then have to happen.
@wolf480pl @domi @rozenglass Somewhat modern (schema 5.0+, in the wild since October 2022) CVEs as published by the Linux kernel have affected/fixed version metadata in them, and Linux already does backports to a bunch of LTS branches.
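For illustration, a minimal sketch of what you can do with that metadata, assuming a simplified CVE JSON 5.x record shape (real kernel records also carry git commit ranges, which this naive dotted-version compare ignores):

```python
def is_affected(cve_record: dict, running: str) -> bool:
    """Very simplified check of a running version against the
    CVE JSON 5.x 'affected' metadata (plain 'lessThan' ranges only)."""
    def ver(v):  # naive numeric tuple, good enough for x.y.z
        return tuple(int(p) for p in v.split(".") if p.isdigit())
    for product in cve_record["containers"]["cna"].get("affected", []):
        for rng in product.get("versions", []):
            if rng.get("status") != "affected":
                continue
            start = rng.get("version", "0")
            end = rng.get("lessThan")
            if end and ver(start) <= ver(running) < ver(end):
                return True
    return False

# Made-up record, shaped like a CVE 5.x entry with a semver range
record = {"containers": {"cna": {"affected": [
    {"versions": [{"status": "affected", "version": "6.1",
                   "lessThan": "6.1.84", "versionType": "semver"}]}]}}}
print(is_affected(record, "6.1.80"))  # → True
print(is_affected(record, "6.1.84"))  # → False
```

With LTS backports in the picture, the "fixed in" side of the same metadata is what tells you which stable release to jump to.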
@lanodan @domi Ok, so @rozenglass you're saying running at zero published CVEs should be a really low bar.
But consider that Greg K-H just said:
> Given the news of the potential disruption of the CVE main server, I've reserved 1000 or so ids for the kernel now, which should last us a few weeks.
That's just the Linux kernel. Who is going to have the resources to review 1000 CVEs every few weeks?
@wolf480pl @domi @rozenglass Well, effectively the CVEs in the kernel mean one of two things: use an LTS, or get a team for managing a fork (where the CVEs can then be useful for not missing an important backport).
And given the massive size of the Linux kernel, that makes full sense. Same problem as forking Chromium (in full, or grabbing bits).
@lanodan @domi @rozenglass IIRC Linux intentionally files a CVE for every bugfix backported to stable that fixes anything related to memory safety, permission checks, etc.
The amount of bugfixes is too high for kernel devs to evaluate the exploitability of each of them, therefore it's also too high for you. They don't want you to read the CVEs. They want you to blindly update to the latest patch release in your LTS branch.
@wolf480pl @ignaloidas @domi @rozenglass There's a bunch of RTOSes out there though, and I think we could do with less Linux in devices that are extremely tied to a single purpose (WiFi APs being a good example). That said, for embedded you should be able to have a small kernel config for your device, and so just filter out CVEs for subsystems that aren't used.
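A hedged sketch of that filtering idea - the subsystem-to-CONFIG mapping below is made up for illustration; a real tool would derive it from MAINTAINERS or Kbuild data:

```python
# Illustrative map from kernel source prefixes to the config symbol
# that gates them (hypothetical, not a real kernel database).
SUBSYSTEM_CONFIGS = {
    "drivers/net/wireless/": "CONFIG_WLAN",
    "drivers/infiniband/": "CONFIG_INFINIBAND",
    "fs/ntfs3/": "CONFIG_NTFS3_FS",
}

def enabled_symbols(config_text: str) -> set:
    """Collect CONFIG_* symbols set to y/m in a kernel .config."""
    out = set()
    for line in config_text.splitlines():
        if "=" in line and not line.startswith("#"):
            sym, val = line.split("=", 1)
            if val in ("y", "m"):
                out.add(sym)
    return out

def relevant(cve_files: list, enabled: set) -> bool:
    """Keep a CVE unless every affected file maps to a subsystem
    we know is disabled; unknown paths are kept, to be safe."""
    for path in cve_files:
        for prefix, sym in SUBSYSTEM_CONFIGS.items():
            if path.startswith(prefix):
                if sym in enabled:
                    return True
                break
        else:
            return True  # path not in our map: don't discard it
    return False

config = "CONFIG_WLAN=y\n# CONFIG_INFINIBAND is not set\n"
en = enabled_symbols(config)
print(relevant(["drivers/infiniband/core/cma.c"], en))       # → False
print(relevant(["drivers/net/wireless/ath/ath9k/hw.c"], en)) # → True
```

Kernel CVE announcements list the affected files, so this kind of prefix matching against your own .config is enough to discard most of the noise for a single-purpose device.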
And I'm honestly not sure that Linux has a lot of longevity either; I don't think the maintainership culture is healthy, and I doubt it would really handle a bunch of new people upstreaming stuff in the way those people need it to.
In my mind, Linux is something I'm kinda looking to jump ship from long-term, not because it's bad, but because I don't think it has a long future. There's not yet something I could jump onto, but if I feel that changes, I'll likely switch fairly quickly.
@wolf480pl @ignaloidas @domi @rozenglass Heh, the fairy tale of the GPL. How are NVidia/PowerVR/… doing? Similarly, Android devices have been full of driver blobs for ages; there's even libhybris, which always reminds me of ndiswrapper.
@ignaloidas @lanodan @domi @rozenglass Is there any development model other than Linux's that'd ensure we get enough source code to build custom firmware for these devices?
If the concern is FSF-style freedom, then I don't truly care? It's not like silicon vendors haven't done a whole metric ton of bullshit to prevent you from building custom firmware anyway - and IMO the tide is slowly but surely turning on the silicon vendors, and more and more openness is coming out of them.
@wolf480pl @domi @rozenglass @ignaloidas Well, basically no device can be usable under 100% open-source puritanism (and a fun one: open source excludes pre-generated code, hello autoconf).
But on the Android side of the spectrum you're stuck with the vendor's random fork of the kernel, you can't choose your libc (annoying given how anemic Bionic is), and so far I haven't seen really significant userland changes (like, say, an ssh daemon and the other Unixy things we're used to). Which is quite why, to me, Lineage is just modding.
@domi@donotsta.re @wolf480pl@mstdn.io @lanodan@queer.hacktivis.me "Cut costs by getting away with ignoring security, why not us too?" Because the competitor can then pay hackers to attack us and take us out of the market, and it would be almost impossible for us to prove they did it. I used that argument before. That business guy's response was "can we hire attacks against our competitors then? we've been DDoSed before, maybe it was them!" lol
Had to remind him it's illegal and might come back to bite us; remember to fear God or Satan or whatever you worship, dear brother! I thought I came here to make a case for a better security posture, not to teach basic morality.
Thankfully, as far as I know, we didn't end up attacking anyone.
@lanodan@queer.hacktivis.me @domi@donotsta.re @wolf480pl@mstdn.io I'm not saying someone must review the CVEs, I'm saying someone must ensure they're fixed in important running systems. The easy bar I'm talking about is doing "apt upgrade" on Debian stable, or "npm audit fix" in a Node project.
CVEs are not for me to review in detail; they are a communication system for the developers and upstream maintainers, who write them, to notify me (downstream), and perhaps give me their expert opinion about the issue, to help prioritize my work. I assume package X's maintainers know their package well enough, so when my RSS feed says they published a severity-9 remote-execution CVE, I immediately contact every client I have to coordinate sessions for reviewing their situation and whether it affects them, and to coordinate upgrading their package X installations as needed. But if a CVE has a severity of 2 and is barely anything, I don't even have to look at it until the next maintenance cycle, if ever; I just assume it will be fixed next time I upgrade X, whenever that is. It is not a perfect system - a severity 2 might actually cause damage, and a severity 9 might not apply in a specific context - but it is an important tool nonetheless.
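That triage policy can be sketched like this (the thresholds, field names, and the notify/defer split are illustrative, not from any real tool):

```python
def triage(advisories):
    """Route advisories by severity instead of reading every CVE."""
    urgent, next_cycle = [], []
    for adv in advisories:
        if adv["cvss"] >= 9.0 and adv.get("remote"):
            urgent.append(adv)       # contact clients, coordinate upgrades
        elif adv["cvss"] >= 7.0:
            next_cycle.append(adv)   # schedule for the next maintenance slot
        # below that: assume the next routine upgrade picks it up
    return urgent, next_cycle

feed = [
    {"id": "CVE-2024-0001", "cvss": 9.8, "remote": True},
    {"id": "CVE-2024-0002", "cvss": 2.0, "remote": False},
    {"id": "CVE-2024-0003", "cvss": 7.5, "remote": False},
]
urgent, soon = triage(feed)
print([a["id"] for a in urgent])  # → ['CVE-2024-0001']
print([a["id"] for a in soon])    # → ['CVE-2024-0003']
```

The point is that the severity score is the upstream maintainer's opinion doing the prioritizing, not your own exploitability analysis.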
In this context, though, I don't care much about the US-gov central CVE DBs. Those, like I said, are just there to scare business people. What I care about is upstream security advisories, published when important issues crop up; I RSS/Atom/curl-script subscribe to them to get notifications when things require my attention. Automatic scanners can be helpful, though, when we get thrown into a project that doesn't already have due process in place, and automatic scanners sometimes depend on said central CVE database.
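A minimal, stdlib-only sketch of that subscription flow, assuming an Atom advisory feed (the feed content and entry titles below are made up):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def new_advisories(feed_xml: str, last_seen: str) -> list:
    """Return titles of entries updated after last_seen
    (ISO 8601 timestamps compare correctly as strings)."""
    root = ET.fromstring(feed_xml)
    hits = []
    for entry in root.iter(f"{ATOM}entry"):
        updated = entry.findtext(f"{ATOM}updated", "")
        if updated > last_seen:
            hits.append(entry.findtext(f"{ATOM}title", ""))
    return hits

sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>DSA-0001 linux security update</title>
         <updated>2025-03-01T00:00:00Z</updated></entry>
  <entry><title>DSA-0002 openssl security update</title>
         <updated>2025-01-10T00:00:00Z</updated></entry>
</feed>"""
print(new_advisories(sample, "2025-02-01T00:00:00Z"))
# → ['DSA-0001 linux security update']
```

In practice you'd fetch the feed with curl on a timer, persist the last-seen timestamp between runs, and pipe anything new into whatever notifies you.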
As for the Linux kernel: no, it does not need thousands of CVEs every few weeks. There are a few hundred published per year[1] most years, and a much smaller number demand immediate emergency attention. Still, for my current clients' and personal needs at least, listening for Linux kernel advisories from Debian / Gentoo / Slackware is enough. Those usually come in batches, unless something major comes up.