@whitequark I have a stupid idea: what exactly are you measuring? If the client is not acknowledging TCP data for a while, does that create backpressure, and does the resulting blocking count towards that time measurement?
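(Minimal self-contained sketch of that backpressure effect, my own illustration: it uses an AF_UNIX socketpair instead of a real TCP connection, but the mechanism is the same: once unread data fills the kernel buffers, writes stop making progress, and a server doing blocking write()s inside a timed section would stall there, inflating the measurement.)

#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

// One end (fds[1]) never reads, playing the role of a client that stops
// acknowledging data; the other end keeps writing until the buffers fill.
// Non-blocking mode is used only so this demo terminates instead of hanging
// the way a blocking server write() would.
int main() {
  int fds[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) { perror("socketpair"); return 1; }
  fcntl(fds[0], F_SETFL, O_NONBLOCK);

  char chunk[4096];
  memset(chunk, 'x', sizeof(chunk));
  long sent = 0;
  for (;;) {
    ssize_t n = write(fds[0], chunk, sizeof(chunk));
    if (n < 0) break;  // EAGAIN: buffers are full, nobody is reading
    sent += n;
  }
  printf("wrote %ld bytes before backpressure kicked in\n", sent);
  return 0;
}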
@whitequark are you serving an endpoint for "perform some expensive operation on the entire history of some repository" (like git log or git blame)? or what is that?
@kernellogger @pojntfx @thomasmey when I wanted to get stuff fixed in a subsystem whose maintainer was unresponsive and whose email address bounced, I sent my patch to akpm instead, and also sent akpm a patch to remove the dead address from MAINTAINERS, and that worked
I wrote about how side channels in serialization can theoretically allow breaking ASLR - with a theoretical worst-case example of how a single round trip of deserializing attacker-controlled data, serializing the result again, and sending the re-serialized data to an attacker could leak an entire pointer: "Pointer leaks through pointer-keyed data structures" https://googleprojectzero.blogspot.com/2025/09/pointer-leaks-through-pointer-keyed.html
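(Minimal sketch of the core idea behind that post, my own illustration rather than code from it: a container keyed by object address, whose iteration order, and therefore the order of anything serialized from it, depends on where the allocator happened to place the objects.)

#include <cstdio>
#include <map>
#include <string>

// A "service" that stores attacker-supplied entries in a map keyed by the
// *address* of each heap object. Re-serializing by walking the map emits
// the entries in pointer order, not insertion order, so the output order
// encodes information about heap addresses.
int main() {
  std::map<const std::string*, int> by_address;
  for (int i = 0; i < 4; i++) {
    const std::string* name = new std::string("entry" + std::to_string(i));
    by_address[name] = i;
  }
  // "Re-serialization": iteration follows pointer order.
  for (const auto& [ptr, idx] : by_address)
    printf("%s (inserted as #%d)\n", ptr->c_str(), idx);
  for (const auto& entry : by_address)
    delete entry.first;
  return 0;
}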
huh, systemd-run is really neat, you can do stuff like: $ systemd-run --user -S -p MemoryHigh=1000M -p MemoryMax=1100M and get a shell inside which you can't use more than around 1G of RAM (but can use more swap)?
@whitequark what would bypassing ELF loading mean? pretty much the only ELF loading the kernel does for a static binary is to map its memory ranges into an address space and then run it starting at the entry point...
@whitequark ah yes the vdso section is just a VMA with a custom page fault handler that inserts PTEs pointing to an in-kernel buffer on demand (and vvar is basically like that, too). but ELF loading in the kernel isn't really all that complicated either; you basically go through an array of "please map this range to this location"...
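(Not kernel code, just a rough userspace sketch of that "array of map requests" view, assuming a 64-bit ELF on Linux: the PT_LOAD program headers are literally a list of "map this file range at this virtual address", plus an entry point; this program only prints that plan instead of actually mmap()ing anything.)

#include <elf.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

// Print the load "plan" of a 64-bit ELF file: for a static binary, the
// kernel essentially walks these PT_LOAD entries, maps each one at its
// p_vaddr, and then starts the task at e_entry (plus stack/auxv setup,
// bss zeroing, alignment handling, etc., which are omitted here).
int main(int argc, char** argv) {
  if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
  int fd = open(argv[1], O_RDONLY);
  if (fd < 0) { perror("open"); return 1; }

  Elf64_Ehdr ehdr;
  if (pread(fd, &ehdr, sizeof(ehdr), 0) != (ssize_t)sizeof(ehdr)) return 1;
  std::vector<Elf64_Phdr> phdrs(ehdr.e_phnum);
  if (pread(fd, phdrs.data(), sizeof(Elf64_Phdr) * ehdr.e_phnum, ehdr.e_phoff) < 0) return 1;

  for (const Elf64_Phdr& ph : phdrs) {
    if (ph.p_type != PT_LOAD) continue;
    printf("map file offset 0x%llx (0x%llx bytes) at vaddr 0x%llx (0x%llx bytes incl. bss), %c%c%c\n",
           (unsigned long long)ph.p_offset, (unsigned long long)ph.p_filesz,
           (unsigned long long)ph.p_vaddr, (unsigned long long)ph.p_memsz,
           (ph.p_flags & PF_R) ? 'r' : '-',
           (ph.p_flags & PF_W) ? 'w' : '-',
           (ph.p_flags & PF_X) ? 'x' : '-');
  }
  printf("then start at entry point 0x%llx\n", (unsigned long long)ehdr.e_entry);
  close(fd);
  return 0;
}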
@vbabka oh no it's much worse (in terms of wall clock time) than just mutex contention, and doing the join before the open_sockets() call in main() would help somewhat but not all that much (because the open_sockets() call in the other thread would still be slow). and it's not the network subsystem's fault 😆
@rgo closing the sockets would be one way to avoid the performance hit, yes; but can you also avoid the performance hit while opening that many sockets and keeping them open? (sorry, I guess it's not a great example)
Linux kernel quiz: Why is this program so slow, taking around 50ms to run? What line do you have to add to make it run in ~3ms instead, without interfering with what this program does?
@jmorris @brauner Christian is joking about how I only learned about this feature because I looked at a patch that intended to use MSG_OOB as part of the new core dumping mechanism
If you have C++ code that allocates heap objects with operator new and you use a memory allocator that records the addresses from which it is called, debugging/profiling tools can use this to determine the types of heap allocations at runtime.
(LLVM does not support this for C-style malloc() calls yet, though.)
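(A rough sketch of the allocator half of that, my own illustration rather than any particular tool's code; the Foo struct and the log format are made up for the demo: a replacement operator new records the address it was called from via the GCC/Clang builtin __builtin_return_address, and a profiler with matching debug info can map those call-site addresses back to the allocated types offline.)

#include <cstdio>
#include <cstdlib>
#include <new>

// Replacement global operator new that records its caller. The recorded
// return addresses identify `new T` call sites, which debug info can
// associate with the allocated type T.
void* operator new(std::size_t size) {
  void* caller = __builtin_return_address(0);  // GCC/Clang builtin
  void* p = std::malloc(size);
  if (!p) throw std::bad_alloc();
  std::fprintf(stderr, "alloc %p, %zu bytes, called from %p\n", p, size, caller);
  return p;
}

void operator delete(void* p) noexcept { std::free(p); }

struct Foo { long a, b; };  // hypothetical example type

int main() {
  Foo* f = new Foo();  // this call site identifies the allocation as a Foo
  std::printf("got a Foo at %p\n", (void*)f);
  delete f;
  return 0;
}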
human borrow checker (but logic bugs are best bugs). Works at Google Project Zero. The density of logic bugs (compared to memory corruption bugs) goes down as the privilege differential between attacker context and target context goes up.