@swordgeek @theearthisapringle @dalias I’d avoid downstream forks of browsers unless they have a record of pulling updates from upstream within days of each upstream release.
@alwayscurious @swordgeek @theearthisapringle I'd do the opposite. If they're just pulling everything immediately from upstream, they're not vetting changes and they're vulnerable to whatever latest shenanigans upstream pulls. A responsible fork triages upstream changes into critical security/safety fixes, desirable new functionality that can be merged on a relaxed schedule, and antifeatures that will only be new sources of merge conflicts down the line.
@dalias @swordgeek @theearthisapringle The problem is the security patch gap. If one diverges too far from upstream, one risks not being able to release security patches in time.
@alwayscurious @swordgeek @theearthisapringle This is really a problem in the philosophy of security fixes that I've written about in detail before. It's harder to work around when you don't control upstream's bad behavior, but it should be possible to mitigate most security problems without even needing code changes, as long as you can document what functionality to gate to cut off the vector without excessive loss of functionality.
Most browser vulns are fixable with an extension blocking the garbage feature nobody asked for.
@alwayscurious @swordgeek @theearthisapringle For example "I'm vulnerable to WebRTC vuln attacks from the one Jitsi site I've allowlisted for a few weeks until I upgrade browser to a well tested new version" is far less exposure than "I'm always vulnerable to whatever new antifeatures and undocumented new attack surface Mozilla adds and pushes as an update".
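[A minimal sketch of the gating idea above: a WebExtension-style content script that removes WebRTC entry points except on an explicit allowlist. `ALLOWED_HOSTS` and `isAllowed` are made-up names for illustration, not from any real extension.]

```typescript
// Hypothetical sketch: gate a feature (here WebRTC) behind a per-site
// allowlist, the way the post describes allowlisting one Jitsi site.
const ALLOWED_HOSTS = ["meet.example.org"]; // the one site you trust

// Pure decision function, kept separate so the policy is easy to test:
// exact match or subdomain of an allowlisted host.
function isAllowed(hostname: string, allowlist: string[]): boolean {
  return allowlist.some(h => hostname === h || hostname.endsWith("." + h));
}

// In a real extension this would run as a content script at
// document_start; guarded here so the file also runs outside a browser.
declare const window: any;
if (typeof window !== "undefined" &&
    !isAllowed(window.location.hostname, ALLOWED_HOSTS)) {
  // Cut off the vector: sites off the allowlist see no WebRTC at all.
  for (const name of ["RTCPeerConnection", "webkitRTCPeerConnection", "RTCDataChannel"]) {
    try { Object.defineProperty(window, name, { value: undefined }); } catch {}
  }
}
```

[A real deployment would also need to run before page scripts and cover frames; this only illustrates the shape of the mitigation.]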
@dalias @swordgeek @theearthisapringle A lot of browser vulnerabilities are JS engine bugs, and those are much harder to mitigate unless one disables JS altogether.
@alwayscurious @swordgeek @theearthisapringle That happens a lot more in Chrome than Firefox, probably because of their SV cowboy attitudes about performance, but it might also be a matter of more eyes and more valuable targets.
In any case, if you have a real kill switch for JIT, or even better an option to disable the native zomg-UB-is-so-fast engine and use Duktape or something (I suspect you could even do that with an extension running Duktape compiled to wasm...), even these can be mitigated without updates.
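[For Firefox, a JIT kill switch along these lines can be approximated with a user.js fragment. The pref names below are from memory and change across versions, so verify each one in about:config before relying on it.]

```javascript
// user.js sketch: fall back to the interpreter by disabling JIT tiers.
user_pref("javascript.options.ion", false);         // optimizing JIT
user_pref("javascript.options.baselinejit", false); // baseline JIT
user_pref("javascript.options.asmjs", false);       // asm.js compilation
user_pref("javascript.options.wasm", false);        // optionally drop wasm too
```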
@dalias @swordgeek @theearthisapringle In that case the safest option is to run the browser in a tightly sandboxed VM, so a browser exploit is not game over. That’s what Qubes OS does.
@alwayscurious @swordgeek @theearthisapringle That doesn't really help if all your valuable data is in the browser (e.g. a session token for your hosting control panel with logged in consoles) and the host OS is just there to host the browser... 🙃
@alwayscurious @swordgeek @theearthisapringle Yes, that's a very viable model, but the browser engines and window managers aren't as well tuned to those workflows. Massive resource overuse & navigation difficulty.
If I ever do my dream browser, it will be built around the idea that you have a whole isolated instance per site & that offsite links always open in a new instance.
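[The routing rule described above can be sketched as a small piece of policy code. Everything here (`instanceKey`, `routeLink`, keying by origin rather than eTLD+1) is an illustrative assumption, not a design the post commits to.]

```typescript
// One isolated instance per site; offsite links always spawn a fresh one.
let nextId = 0;
const instances = new Map<string, number>(); // site key -> live instance id

// Key instances by origin; a real browser might use eTLD+1 instead.
function instanceKey(url: string): string {
  return new URL(url).origin;
}

// Same-site navigation stays in the current instance; anything offsite
// gets a brand-new, fully isolated instance, never a reused one.
function routeLink(fromUrl: string, toUrl: string): number {
  const from = instanceKey(fromUrl);
  const to = instanceKey(toUrl);
  if (to === from && instances.has(from)) return instances.get(from)!;
  const id = nextId++;
  instances.set(to, id);
  return id;
}
```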
@dalias @alwayscurious @swordgeek @theearthisapringle One problem with full isolation between sites is it effectively breaks SSO and "Login with $app" kind of workflows which are sometimes mandatory, and copying stuff like cookies/LocalStorage/… over means that token-stealing isn't mitigated.
Meanwhile here I've just made the new-tab button (and ctrl+t) open in a new session (which are all ephemeral).
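[The ephemeral-new-tab behavior could plausibly be built on Firefox's contextualIdentities WebExtension API. The `browser` object is injected below so the logic is testable; the API names (`contextualIdentities.create`, `tabs.create`, `cookieStoreId`) follow the WebExtension docs, but treat the exact shapes as assumptions.]

```typescript
// Open every new tab in a brand-new container so it shares no
// cookies/storage with any existing tab.
interface BrowserLike {
  contextualIdentities: {
    create(opts: { name: string; color: string; icon: string }): Promise<{ cookieStoreId: string }>;
  };
  tabs: { create(opts: { cookieStoreId: string }): Promise<unknown> };
}

let counter = 0;

// A real extension would also delete the container when its last tab
// closes, which is what makes the session truly ephemeral.
async function openEphemeralTab(browser: BrowserLike): Promise<string> {
  const identity = await browser.contextualIdentities.create({
    name: `ephemeral-${counter++}`,
    color: "toolbar",
    icon: "circle",
  });
  await browser.tabs.create({ cookieStoreId: identity.cookieStoreId });
  return identity.cookieStoreId;
}
```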
@alwayscurious @swordgeek @theearthisapringle I don't really see it as a "sadly". The norm is that applications working with complex data from different privilege domains should run in different execution privilege domains.
Browsers just kinda evolved from simple low attack surface document readers to application platforms, and at the same time political norms against malicious behavior disappeared.
But in what they are now, it absolutely makes sense to put them in their own execution privilege domains. Not the fake way FF & Chrome do where there's still shared context with all the secrets in it. But entirely isolated.
@dalias @swordgeek @theearthisapringle I think the HP Sure Click Secure Browser comes close to that. It’s sadly the only viable model with present browsing engines.
A partial solution is to use a mainstream browser (like up-to-date Chromium) for work that needs to be secure (like managing web hosting) and something else in a VM (ideally Tor) for general browsing.
@lanodan @alwayscurious @theearthisapringle @swordgeek Those workflows really should be abolished. I know there's a need for some transitional way to support them, but making them awkward enough to be the less convenient option (vs how they're sold now as more convenient) would be beneficial to the ecosystem.
@dalias @swordgeek @theearthisapringle For web compat postMessage() still needs to work for PayPal, as does ??? for Google. Might make sense to just hard-code those services as special-cases for legacy reasons, though.
@dalias @alwayscurious @theearthisapringle @swordgeek True, although I tend to prefer to avoid making higher security much less usable than the status quo, because you just risk people ending up choosing a much less secure method instead. Especially when a rather common attack vector is to induce stress.
@alwayscurious @lanodan @theearthisapringle @swordgeek The vast majority of SSO systems I've encountered have bugs making it hard or impossible to log in with a privacy conscious configuration. I'm not talking JS disabled, just things like first-party isolation, strong cross-site tracker blocking, etc.
From a UX and privacy ecosystem standpoint, they're far worse than classic per site authentication.
I understand they sometimes reduce risk of breaches. I see that as lower priority than meeting *user* needs.
And in a gov post-DOGE context, they also leak information between gov entities (centralised records of who logs in to what) in ways that may be harmful to people's safety.
@dalias @lanodan @theearthisapringle @swordgeek Hard disagree on SSO, which (combined with SCIM) really is the right way to authenticate to things in an enterprise or government environment. For instance, many U.S. government websites use https://login.gov as the SSO provider, and that really is an improvement over them all managing authentication separately.
It's designed to make it hard to get off their platforms, and makes it so getting banned by one service provider can cut you off from all your accounts.
Roughly speaking, a JIT does two separate things:
1. Performing transformations on the AST/IR to optimize the code abstractly, and
2. Dynamic translation into native machine code and injection of that into the live process.
It's #1 that gets you the vast majority of the performance benefits, but #2 that introduces all the vulnerabilities (because it's all YOLO, there's no formal model for safety of any of that shit).
There are ways to structure it, but most importantly JS JIT usually discards type info instead of keeping it in the native dynamic code, because otherwise the performance wins are more marginal. That's the wrong thing to do. It's a dynamic language, it should act like it.
@hayley would be able to give more concrete and relevant examples.
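[One concrete way a "type-1" optimization keeps type info without emitting native code is a shape-based inline cache: the access site remembers the last object layout it saw and a guarded slot offset, while everything stays interpreted and no executable pages are ever injected. The shape/IC terminology matches real engines; the code itself is a toy.]

```typescript
// An object's shape maps property names to fixed storage slots.
type Shape = { id: number; slots: Map<string, number> };

function makeShape(id: number, props: string[]): Shape {
  return { id, slots: new Map(props.map((p, i) => [p, i])) };
}

type Obj = { shape: Shape; storage: unknown[] };

// One cache per syntactic access site (a site always names the same
// property): remember the last shape seen and the slot offset for it.
type InlineCache = { shapeId: number; offset: number } | null;

function getProp(obj: Obj, name: string, ic: { cache: InlineCache }): unknown {
  const c = ic.cache;
  if (c !== null && c.shapeId === obj.shape.id) {
    return obj.storage[c.offset];            // fast path, guarded by the shape check
  }
  const offset = obj.shape.slots.get(name);  // slow path: the real dictionary lookup
  if (offset === undefined) return undefined;
  ic.cache = { shapeId: obj.shape.id, offset };
  return obj.storage[offset];
}
```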
@lispi314 @alwayscurious @theearthisapringle @swordgeek @hayley That's only supposed to happen in code paths where you can prove the type is known, like for data that's known numeric or known integer. But there have been lots of vulns in V8 in this area...
@hayley @dalias @theearthisapringle @swordgeek @lispi314 JS is a very badly designed language from a performance perspective: every property access is semantically a dictionary lookup, and the JS engine must do heroic optimizations to get rid of that lookup. It’s much easier to write a Scheme or Common Lisp compiler because record type accessors are strictly typed, so they will either access something with a known offset or raise a type error.
@dalias @lispi314 @theearthisapringle @swordgeek @hayley What kind of performance can one get from a type-1 only JIT? If one only compiles to a bytecode then performance is limited to that of an interpreter, and my understanding is that even threaded code is still quite a bit slower than native code (due to CPU branch predictor limitations I think?). On the other hand, compiling to a safe low-level IR (such as WebAssembly or a typed assembly language) and generating machine code from that could get great performance, but that requires trusting the assembler (which, while probably much simpler than a full JS engine, isn’t trivial either).
@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley Nobody cares if it's a constant factor like 3x slower if it's safe. Dynamic injection of executable pages is always unsafe. But I think it can be made even closer than that in typical code that's memory access bound, not insn rate bound.
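[The "type-1 only" model under discussion is essentially: optimize abstractly up front, then execute via a plain dispatch loop over data, so no writable-and-executable memory ever exists to corrupt. A toy stack-machine sketch, with an invented instruction set:]

```typescript
// Bytecode opcodes for a tiny stack machine.
const PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

// The interpreter only ever reads the code array as data; a bad opcode
// (including running off the end) hits the default arm, not UB.
function run(code: number[]): number {
  const stack: number[] = [];
  let pc = 0;
  for (;;) {
    switch (code[pc++]) {
      case PUSH: stack.push(code[pc++]); break;
      case ADD:  stack.push(stack.pop()! + stack.pop()!); break;
      case MUL:  stack.push(stack.pop()! * stack.pop()!); break;
      case HALT: return stack.pop()!;
      default:   throw new Error("bad opcode");
    }
  }
}
```

[Real engines would use threaded dispatch and register-based bytecode for speed, but the safety property is the same: execution never leaves the interpreter.]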
@dalias @lispi314 @theearthisapringle @swordgeek @hayley If you want performance anything close to what the hardware can actually do, you aren’t doing most of the work on the CPU. You are doing it on the GPU, and that is a nightmare of its own security-wise. Oh, and I highly doubt you will ever be able to run an interpreter there with performance that is remotely reasonable, due to how the hardware works.
If you limit the browser too much, people will just run desktop applications instead, and for stuff that isn’t fully trusted that is a security regression.
@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley They can run their games in any of the plethora of other browsers if those are too slow here. You don't put vuln surface and fingerprinting surface in the browser you use for everything just so it can also be an AAA game platform.