@Suiseiseki@freesoftwareextremist.com Likely mmap. Mapping a range of addresses is not really allocating the memory. For example, using the LMDB C library, if you create a 20GiB database file, it appears that your process is using 20GiB of memory, but that's just virtual address space, backed by a file on disk, not RAM.
20TiB still sounds ridiculous, but it's usually a case of "I'll map 1TiB here, because we'll never need 1TiB of addresses", and then that thing gets called 20 times, and you get 20TiB of virtual memory.
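You can see the difference from the outside, too. A quick sketch (the PID is hypothetical): VmSize counts the whole mapping, while VmRSS stays small until pages are actually touched.

# virtual size vs. resident size of a process that mmap'ed a big file
grep -E 'VmSize|VmRSS' /proc/12345/status
ps -o vsz=,rss= -p 12345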
@p@fsebugoutzone.org You're saying that the end of Eternal September can be brought about, theoretically, by having a few centralized global monopolies here and there, that sweep up the invader swarms back out of our beautiful wounded wired and into massive sandbox prisons where they can be happily stupid together? Simply ingenious. And even better, perhaps, inevitable!
@lanodan@queer.hacktivis.me @domi@donotsta.re @wolf480pl@mstdn.io I'm not saying someone must review the CVEs, I'm saying someone must ensure they're fixed in important running systems. The easy bar I'm talking about is doing "apt upgrade" on Debian stable, or "npm audit fix" in a Node project.
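Spelled out as commands (assuming stock tooling, nothing fancy):

apt update && apt upgrade   # Debian stable: pull in patched packages
npm audit fix               # Node project: bump vulnerable dependencies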
CVEs are not for me to review in detail; they are a communication system through which the developers and upstream maintainers, who write them, notify me (downstream), and perhaps share their expert opinion about the issue to help prioritize my work. I assume package X's maintainers know their package well enough, so when my RSS feed says they published a severity-9 remote-execution CVE, I immediately contact every client I have, to coordinate sessions for reviewing their situation and whether it affects them, and to coordinate upgrading their package X installations as needed. But if a CVE has a severity of 2 and barely any impact, I don't have to even look at it until the next maintenance cycle, if ever, really. I just assume it will be fixed next time I upgrade X, whenever that is. It is not a perfect system; a severity 2 might actually cause damage, and a severity 9 might not apply in a specific context. But it is an important tool nonetheless.
In this context though, I don't care much about the US gov CVE central DBs. Those, like I said, are just there to scare business people. What I care about is upstream security advisories, published when important issues crop up; I subscribe to them via RSS/Atom/curl-scripts to get notified when things require my attention. Automatic scanners can be helpful though, when we get thrown into a project that doesn't have due process in place already, and automatic scanners sometimes depend on said central CVE databases.
As for the Linux kernel: no, it does not have thousands of CVEs every few weeks. There are a few hundred published per year[1], most years, and a much smaller number demand immediate emergency attention. Still, for my current clients and personal needs at least, listening for Linux kernel advisories from Debian / Gentoo / Slackware is enough. Those usually come in batches, unless something major comes up.
@domi@donotsta.re @wolf480pl@mstdn.io @lanodan@queer.hacktivis.me "Competitors cut costs by getting away with ignoring security, why not us too?" Because the competitor can then pay hackers to attack us and take us out of the market, and it would be almost impossible for us to prove they did it. I used that argument before. That business guy's response was "can we hire attacks against our competitors then? we've been DDoSed before, maybe it was them!" lol
Had to remind him it's illegal, and might come back to bite us. Remember to fear God or Satan or whatever it is you worship, dear brother! I thought I came here to make a case for a better security posture, not to teach basic morality.
Thankfully, as far as I know, we didn't end up attacking anyone.
@lanodan@queer.hacktivis.me @domi@donotsta.re @wolf480pl@mstdn.io Running at zero published CVEs should be a really low bar to clear, unless the project has 1200 NPM dependencies, none of which care about backward compatibility, so patching them up implies a dozen breaking changes, and thus code changes, to stay compatible. Of course, most projects nowadays seem to be built of 32 micro-services, each of which has 1000 NPM transitive dependencies, or 500 Pip packages for aRtIfIcIaL iNtElLiGeNcE, or at least 200 NuGet packages for "Enterprise Integration", plus 3 different types of databases, two queuing / message bus platforms (throw in an extra external "PaaS"), four different OSes, two Kubernetes cluster providers, and 50 docker container images with 13 different bases...
CVE reports will not save them, nothing will save them. No system is safe.
Hand-writing plain assembly is safer at that point.
@lanodan@queer.hacktivis.me @domi@donotsta.re @wolf480pl@mstdn.io The real use for US-gov-backed public CVE databases is to scare business and project management people into allocating resources for the maintenance of critical, already-running systems. You bring them a fancy PDF report with a formatted list of 50 CVEs, 7 of which are "Critical" severity (make sure to color them red), links to scary official-looking .gov websites, and tell them it must be fixed now, or they get to take responsibility for what happens if they say no. Make sure to CC the whole accessible chain of command to keep records.
This necessary evil saves many organizations filled with idiots from getting hacked and leaking confidential medical (or otherwise) data for the stupidest reasons. Especially when you discover, while going over their system patching CVEs one by one, that they have a publicly exposed database with a password of (concat organization-name "abc"). This "tactic" is used (and misused) in countless organizations and government agencies the world over.
-i | -include: Output in C include file style. A complete static array definition is written (named after the input file), unless xxd reads from stdin.

For example, you can add this to your build pipeline to turn a file into an embedded array of characters. Just #include "printf-template.html.h" in your code. If printf-template.html contains printf control sequences, like %s, they work if you pass the data to printf(). You can have an HTML file parameterized with printf(), for example.
I assume you can embed anything this way: strings, images, 3D models, etc. This seems cross-platform enough. I'm starting to wonder if the new #embed C standard proposal is really needed, or just bloat now. (Even parameterized #embed sounds like something replaceable with some UNIX piping before you pass the data to xxd -i.)
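For illustration, a sketch of that build step (the file and variable names are mine, nothing standard). Note that xxd -i does not NUL-terminate the array, and when reading from stdin it emits only the byte list without the array definition, so for printf() use I'd wrap it myself:

# the basic step: generates "unsigned char printf_template_html[]" plus a
# _len variable, named after the input file
xxd -i printf-template.html > printf-template.html.h

# printf() wants a NUL-terminated string; xxd doesn't add one. From stdin,
# xxd -i emits only the byte list, so wrap it in your own definition:
{
  echo 'static const char printf_template[] = {'
  { cat printf-template.html; printf '\0'; } | xxd -i
  echo '};'
} > printf-template.html.h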
@menherahair@eientei.org I use fakeroot to build as a normal user; almost all SlackBuilds work when built this way, in my experience. Alternatively, sometimes I use a build chroot, check the results, then copy and install the package to the target machine.
@nyanide@lab.nyanide.com I know many people don't read SlackBuilds, especially beginners using helper package management tools. I do read almost all SlackBuilds I install. At the very least, I read the .info file and README and such, and take a quick glance at the SlackBuild itself.
@Cocoa@nekosat.work Yes. 99% of my init needs are met by running daemon[1] from rc.local at boot, the default method on Slackware. With three lines of shell script I get a named daemon with PID-file tracking, executed by its own user, optionally in a chroot, and with stdout and stderr redirected to log files.
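Something like this in /etc/rc.d/rc.local (names are placeholders; option spellings are from daemon(1) as I remember them, so double-check your copy):

daemon --name=mydaemon --user=svcuser \
       --output=/var/log/mydaemon.log \
       -- /usr/local/bin/mydaemon --config /etc/mydaemon.conf

--name is what gives you the PID-file tracking; add --chroot=/some/jail for the chroot case.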
And for the 1% when I want something crazy, like running multiple network namespaces for different services hosted in them, each connecting over its own specific Wireguard network, and isolated from seeing the actual network hardware directly, I can modify the simple networking init scripts and make it happen myself in a couple of hours. I don't even want to imagine what it would be like to try to modify systemd and NetworkManager code to add such a feature.
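The namespace part of that setup boils down to plain iproute2 and wg commands, roughly like this (names and addresses invented):

# create an isolated namespace for one service
ip netns add svc1
# create the wireguard interface outside, then move it in; its encrypted
# UDP socket stays in the init namespace, so svc1 never sees the real NICs
ip link add wg-svc1 type wireguard
ip link set wg-svc1 netns svc1
ip netns exec svc1 wg setconf wg-svc1 /etc/wireguard/svc1.conf
ip -n svc1 addr add 10.11.0.2/24 dev wg-svc1
ip -n svc1 link set lo up
ip -n svc1 link set wg-svc1 up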
My experience with systemd in production was very buggy, causing extended downtimes from bugs left unfixed for years. My experience with OpenRC was alright, but it does not spark joy. It only caused production downtime once, due to a buggy interaction with consolekit2, but that's a track record roughly 600% better than systemd's already.
upgradepkg --terse --install-new /mnt/usb/*.t?z

This applies equally to official Slackware packages, and any custom packages, from SBo or otherwise.
Of course on Debian you can do the same, with apt install /mnt/usb/*.deb. If you do it this way, apt's dependency resolver won't go crazy, because it can order the .deb installations to keep dependencies satisfied. Do not try to install the packages one by one though, because then you would have to get the order right yourself.
Unrelated recommendation: if you're thinking about Debian, check out Devuan, for a more sane community and a more sane init system. Devuan is still downstream of Debian, so Debian shenanigans might impact them, but at least you have a layer of sanity checks between you and the Debian project.
@nyanide@lab.nyanide.com With respect to the offline mirror, I see that as very possible. If you want faster setup of machines too, pkgtools (installpkg et al.) support a --tagfile option, with which you can tell them to install just a specific set of packages. I have tagfiles for tiny lightweight chroots, for minimal secure web servers, and other arrangements. All of this is a bit advanced of course, considering you have to know your way around manual dependency management. Some variation of ldd * | grep 'not found' can be your best friend :)
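A sketch of the shape of it (package picks and paths are just examples; check installpkg(8) for the exact --tagfile semantics on your release):

# minimal-www.tag -- one "package: ADD|SKP" line per package
aaa_base: ADD
aaa_elflibs: ADD
openssl: ADD
emacs: SKP

# install a package set honoring the tagfile
installpkg --terse --tagfile /root/minimal-www.tag /mnt/cdrom/slackware64/*/*.t?z

# afterwards, hunt for missing shared libraries
ldd /usr/bin/* 2>/dev/null | grep 'not found'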
@nyanide@lab.nyanide.com I have a handful of Slackware 15.0 machines running grub, and it works for me, including in complex setups where I boot a custom initrd from a software RAID disk, then SSH into the initrd to unlock the LUKS partitions and set up the rest of the RAID arrays, and then boot the actual system. The upcoming Slackware 15.1 will encourage grub as the default configuration instead of elilo / lilo, but 15.0 definitely works well with it in my experience.
@nyanide@lab.nyanide.com I mirror SBo using git. My package management functions are basically some ease-of-life around the core of:
# As normal user:
cat *.info slack-desc README   # read about the package
. *.info                       # source its metadata
wget -c $DOWNLOAD              # download the source
md5sum filename.tar.gz         # checksum the downloaded file
echo $MD5SUM                   # compare to the expected value
fakeroot sh *.SlackBuild       # build the package
# As root:
upgradepkg --install-new /tmp/*_SBo.t?z              # install the built package
chown root:root /tmp/*_SBo.tgz                       # make it owned by root
mv /tmp/*_SBo.tgz /var/cache/packages/slackbuilds/   # store it in my package store

I also share the pre-built packages between my machines in general, by mirroring them to one of my servers, and downloading them to other machines from there.
@dcc@annihilation.social @nyanide@lab.nyanide.com The point of Slackware, in my mind, is that it is so simple and unchanging that you can actually own your system over time. My primary reason for using Slackware is that I could fork the whole distro if I really had to, because it is a one-person-sized system. People complain that "Slackware is bloat, it installs hundreds of packages by default", but that is actually the simpler approach, because it means Slackware doesn't have repos with 60,000 packages and unlimited potential combinations. Slackware is rock solid for that reason. There's only one authoritative configuration, and it is developed and tested for half a decade before each release.
CRUX is very good too, but I strongly disagree with their "only English" and "delete all docs from packages" mindset. I think CRUX is more invasive philosophically than Slackware in trivial things, yet somehow more disorganized and haphazard at the same time.
How do I do SBo package updates? I wrote ~15 lines of shell and awk in a shell function; it starts with something like "for file in /var/log/packages/*_SBo" (package install logs are plain text files, and their names include a "tag" part; SBo is one such tag). It uses git pull to sync a local copy of the SBo tree. The end result is that it shows me "package X has update entries in the ChangeLog, currently installed version is vx.y.z, here are the 5 git log entries that modified this package in the package tree". I then go over my packages and decide whether I want to update them or not. I don't update most things, unless I know them to be security-sensitive, or I care about new features.
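A sketch of the idea, not the actual function (paths and tree layout are my guesses):

cd /usr/local/sbo && git pull --quiet
for file in /var/log/packages/*_SBo; do
    pkg=${file##*/}                              # e.g. foo-1.2.3-x86_64-1_SBo
    name=${pkg%-*-*-*}                           # strip version-arch-build_tag
    ver=$(echo "$pkg" | awk -F- '{print $(NF-2)}')
    dir=$(ls -d /usr/local/sbo/*/"$name" 2>/dev/null) || continue
    new=$(. "$dir/$name.info"; echo "$VERSION")  # version in the synced tree
    [ "$new" = "$ver" ] && continue
    echo "$name: installed $ver, tree has $new"
    git -C /usr/local/sbo log -5 --oneline -- "${dir#/usr/local/sbo/}"
done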
I made this "update system" in a few hours, fixed a few bugs here and there, and used it for years afterwards. It never breaks, it never changes, it gives me all the agency. Well, not really, because it still relies on package maintainers to update their packages, which I don't like. So I wrote my own RSS/Atom/other-formats reader in some 100 lines of shell script, curl, and XSLT stylesheets, threw it on a cron to check releases of critical packages for me, and print any updates whenever I start a new shell. I don't rely much on SBo maintainers for packages critical to business production machines (yes, Slackware in prod, serving millions of customers for my clients). Plus, I maintain half a dozen packages on SBo officially, so I have to know about updates first :)
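The core trick of that feed reader is tiny; here's a sketch (feed URL and file names invented, and the real thing handles more formats than Atom):

# turn an Atom feed into plain release titles
cat > /tmp/titles.xsl <<'EOF'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:for-each select="atom:feed/atom:entry">
      <xsl:value-of select="atom:title"/><xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
EOF
curl -s https://example.org/project/releases.atom | xsltproc /tmp/titles.xsl -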
All of this is possible because Slackware doesn't have a system. It doesn't enforce things. It trusts you to know what you're doing, and commits to not breaking your work. I rewrote the networking init scripts because I wanted a weird setup with my private networks; that was easy, and it never broke on its own. With Slackware I can compound my work building things; I don't have to continuously "churn" trying to keep up. It's what Common Lisp / C / Lua are to the JS+NPM / Rust+Cargo (et al.) never-ending slop generators. Even official package updates ask you, with a diff, to judge the new config files, instead of just overwriting your files, unlike some other unmentionable distros.
I worked deeply (building packages, maintaining them, etc., for years) with Arch, Gentoo, Debian, and Fedora; worked with FreeBSD, OpenBSD, and NetBSD to port software to them; and I've used CRUX for a while. Slackware is definitely the most UNIX of Linux systems. Heck, it might be more UNIX than FreeBSD. I think CRUX, OpenBSD, NetBSD, and Slackware are closely clustered, philosophically, in the idea space.
I maintain some 20+ Slackware machines: for home use, software development, video games (including wine / Steam / VR gaming), self-hosted LLMs, business production (web servers, databases, automation workers, etc.), and a few personal machines and laptops for less-techie family members.
If you have any questions on Slackware, I'm happy to answer.
@a1ba@suya.place hey~ what happened to husky? o.o I just went to install it, but noticed that you're not maintaining it anymore. I haven't been in the loop on fedi for a couple of years; apologies for my ignorance.
@lain@lain.com someone has to run on a platform of "a medieval castle for every American"; they will get all the votes and blow all the other political parties out of the murky mud they're in.
@lain@lain.com 10 years later, the average is 4 years later... isn't that the opposite, then? Houses are getting older, because as time goes forward, their average build year lags further behind? im confus.
@LukeAlmighty@gameliberty.club Too many people, unfortunately, felt so little love and support that the idea of hope and kindness is fantasy fiction to them. It's scary really. On the other hand, people don't realize how low the barrier to helping others is. The standards are so low that the smallest act of help can be shocking and life-changing to many.
@queenofhatred@akko.wtf @lanodan@queer.hacktivis.me I wonder sometimes: there's Zen in archery, Zen in blacksmithing, Zen in meditation; what Zen is there in programming? What would it look like for a Zen master to practice Zen programming? Offline hacking on C or Lisp might be a start.