3. You simply reboot that target machine. It will now fetch the UKI kernel, which then fetches the root disk image. And every time you reboot this happens again. The target machine's local disks are unaffected.
4. …
5. Profit!!
It's mostly to tighten my test loop a bit, for physical devices. So here's what this entails:
1. You build your image with mkosi on your development machine, and ask it to serve the image via HTTP. In other words: `mkosi -f serve`.
2. You boot into the target machine once, and register an EFI variable that enables HTTP boot from your development machine. Simply do `kernel-bootcfg --add-uri=http://192.168.47.11:8081/image.efi --title=testloop --boot-order=0`, using @kraxel's wonderful tool.
Net result of this: I can now point my UEFI to a single URL where it will load the UKI from. A few seconds later the initrd will pick up the rootfs from the same source, and boot it up. Magic!
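For reference, here's the whole loop condensed into commands (a sketch; the IP, port, and image name are taken from the example above, so adjust for your setup):

```
# On the development machine: build the image and serve it over HTTP
mkosi -f serve

# On the target machine, once: register the HTTP boot entry in an EFI variable
kernel-bootcfg --add-uri=http://192.168.47.11:8081/image.efi --title=testloop --boot-order=0

# From then on, every reboot fetches the UKI (and rootfs) fresh from the dev machine
systemctl reboot
```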
Why all this though?
and even one more comment:
Next steps: instead of downloading the root fs via HTTP, access it via NVMe-over-TCP.
Benefit: better performance (no ahead-of-time download; data is fetched as needed), and even better: persistence!
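For illustration, the target side of that could look roughly like this with the standard nvme-cli tool (a sketch; the address, port, and NQN are made-up placeholders, and the export setup on the development machine is elided):

```
# Discover NVMe-over-TCP subsystems exported by the development machine
nvme discover -t tcp -a 192.168.47.11 -s 4420

# Connect the exported rootfs as a local NVMe namespace (NQN is hypothetical)
nvme connect -t tcp -a 192.168.47.11 -s 4420 -n nqn.2025-01.io.example:rootfs
```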
oh, and one more comment: this will only work on systems that are relatively high on the systemd adoption scale: you definitely need a systemd-based initrd for this. For deriving the rootfs URL from the UEFI network boot URL you need a systemd-stub-based UKI.
WIP PR for all of this is here:
So, two take-aways here:
1. Really nice test loop now for testing immutable, modern OSes on physical devices, with onboard tooling
2. Yeah, you can frickin' boot into a damn tarball now, with just a UKI.
And then there are three other talks, in the aforementioned Image-based Linux & Boot Integrity devroom (about systemd & TPMs), in the bootloader devroom (about supercharged UKIs) and in the identity management devroom (about systemd's userdb API).
And then yours truly will give four talks, in various places. First of all, I have a keynote:
https://fosdem.org/2025/schedule/event/fosdem-2025-6648-14-years-of-systemd/
And unlike some well-known billionaire I am not going to chicken out of mine. Ha!
PSA: There's going to be a lot of systemd-related stuff going on at FOSDEM this weekend. Many folks from the systemd camp and adjacent will be hanging out at the Image-Based Linux & Boot Integrity devroom:
And of course, outside of the image-based Linux track, and other than my own talks, there are some more systemd-adjacent talks in other tracks: for example, Ani Sinha talks about bring-your-own-firmware UKIs for confidential computing cloud stuff, Antonio Feijoo about booting from an mkosi initrd over the network in the distributions devroom, Axel Stefanini about running podman containers as systemd services, and probably some more I missed.
See you all in Brussels!
@michelin yeah, all the money in the world, and yet he's chickening out when seeing just a tiny bit of opposition...
@jarkko It's a long text, but the person writing this is basically saying that a TPM2 policy for a disk that only locks to PCR 7, or not even that, is not secure? I mean, no shit, Sherlock, of course it isn't. If your policy doesn't lock to anything then it doesn't lock to anything...
A full boot chain that gets things right would include at least a UKI with a signed PCR policy + a dynamic systemd-pcrlock policy. The combination should be reasonably secure, I'd claim, but if you have neither…
@jarkko … then you have only a very weak model, probably to the point it's not worth it.
What matters is that distributions actually start deploying UKIs like this, and enable systemd-pcrlock by default. This is not trivial, but some distros are further ahead there than others.
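To make that concrete: enrolling a LUKS volume against such a combined policy could look roughly like this (a sketch; the device path is a placeholder, and the key/policy file paths are systemd's defaults as far as I know):

```
# Generate a local pcrlock policy from the current boot's measurement log
systemd-pcrlock make-policy

# Enroll the volume against the UKI's signed PCR policy plus the pcrlock policy
# (/dev/sda2 is a placeholder)
systemd-cryptenroll /dev/sda2 \
    --tpm2-device=auto \
    --tpm2-public-key=/etc/systemd/tpm2-pcr-public-key.pem \
    --tpm2-pcrlock=/var/lib/systemd/pcrlock.json
```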
@axboe @osandov but the folks who commented there are marked "senior" in their UI. Hence, they are the true *pros*, and you, you are just ... *somebody*.
@Foxboron christ. I guess that means that I am not the only asshole doing a keynote there, eh? ;-)
…the AF_VSOCK "CID" (which is like an IP address, i.e. an identifier for the local VM) you can specify a friendly machine name, if the VM in question is registered with systemd-machined. systemd-vmspawn sets things up that way out of the box, of course. That means, with current off-the-shelf systemd inside a VM and on the host you can now just do "ssh machine/foobar" to connect to a local VM called "foobar", via AF_VSOCK, i.e. independently of any fragile network.
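In practice the flow looks roughly like this (a sketch; the machine name and image path are placeholders):

```
# Boot a local VM; systemd-vmspawn registers it with systemd-machined by name
sudo systemd-vmspawn --machine=foobar --image=/var/lib/machines/foobar.raw

# From the host: ssh in via AF_VSOCK, addressed by machine name, no networking involved
ssh machine/foobar
```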
3️⃣7️⃣ Here's the 37th post highlighting key new features of the current v257 release of systemd. #systemd257
In systemd v256 we added a small tool "systemd-ssh-proxy" whose job is to allow connecting to local VMs with ssh via the AF_VSOCK protocol (as opposed to AF_INET/AF_INET6). It acts as host-side counterpart to the guest-side systemd-ssh-generator that automatically binds sshd to AF_VSOCK.
In systemd v257 the functionality has been updated so that instead of specifying…
And that's it! After 37 installments I think I covered pretty much all the bigger things in the NEWS file with a story.
Of course, there's a lot more in this release. For the full list, consult our NEWS file:
https://github.com/systemd/systemd/blob/70bae7648f2c18010187c9cf20093155eaa26029/NEWS
Stay tuned so that you won't miss out on the #systemd258 series when the time comes for the next release!
This is extremely handy, since it "just works" here. In fact, I switched over to this for my private VM needs entirely now.
(In related news, systemd-ssh-proxy now supports the AF_VSOCK "MUX" protocol too. This means it's now compatible not only with AF_VSOCK as implemented by qemu, but also with the implementations in Firecracker/CloudHypervisor.)