One other important detail: have the basement server route its outbound internet connections through the VPS (via the WireGuard tunnel) as well, especially for federation or sending email (many mailservers include the originating client IP in an email's headers). This is a big detail that many folks overlook when running a basement fedi server behind Cloudflare: the actual server IP can be exposed in any server-to-server traffic if it doesn't route through somewhere else first.
It’s something I need to get around to writing a well-diagrammed guide for. I only know of the vendor documentation; I don’t have any comprehensive guides to recommend off-hand.
Effectively there are two ways to achieve it. HTTP(S) reverse proxy: a web browser makes an HTTP or HTTPS connection to the VPS; the VPS then makes an HTTP or HTTPS connection over a tunnel (such as WireGuard) to your basement server, while including the original client's IP address as an HTTP header in the subrequest. The basement server sends its response, which the VPS then relays in its response to the browser.
In this scenario, the VPS needs to have a valid TLS certificate (and its associated private key) if you’re doing HTTPS. If the VPS gets compromised (such as if you don’t trust your VPS host), then any traffic transiting through the VPS is considered compromised. Additionally, whatever web applications you have running on your basement server will need some configuration to handle X-Forwarded-For or Forwarded headers, to properly handle ‘client IP’ info; otherwise the webapps on the basement server only ever see your VPS as the ‘client’ for every request (which makes blocking bad actors from the basement server difficult).
If using Apache on the VPS, you need mod_proxy and mod_proxy_http enabled ( https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html ); Apache will automatically add X-Forwarded-For headers and such for you. You’ll only need ProxyPass, ProxyPassReverse, and ProxyPreserveHost in your config.
Say the remote IP of the WireGuard tunnel (your basement server) is 10.12.34.56, the config within a VirtualHost config on the VPS would look like:
ProxyPass "/" "http://10.12.34.56"
ProxyPassReverse "/" "http://10.12.34.56"
ProxyPreserveHost On
Meanwhile, if using nginx on the VPS instead, there are no modules to enable (it’s built-in); you just use the proxy_pass directive ( https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ ) within a location { } block inside of a server { }, and also set the headers yourself, such as:
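A minimal sketch of that nginx config, using the same 10.12.34.56 tunnel address as the Apache example (the server_name is a placeholder):

```nginx
server {
    listen 80;
    server_name example.social;  # placeholder domain

    location / {
        # Forward everything over the WireGuard tunnel to the basement server
        proxy_pass http://10.12.34.56;
        # Preserve the original Host header (equivalent of ProxyPreserveHost On)
        proxy_set_header Host $host;
        # Pass the real client IP along for the basement-side webapp
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```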
Configuring WireGuard, as well as how to configure your respective hosted webapp for X-Forwarded headers, is out-of-scope of this overview; you’ll have to seek out more info elsewhere. But the key detail with the WireGuard tunnel: on the basement server’s WireGuard config, have PersistentKeepalive set in the peer declaration for the VPS peer, to keep the tunnel open (and traverse NAT properly) so it can receive inbound traffic at any time.
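For illustration, the basement side of the tunnel might look like this (keys are placeholders, and 10.12.34.1 is assumed here as the VPS's tunnel-side address):

```ini
# Sketch of the basement server's /etc/wireguard/wg0.conf
[Interface]
Address = 10.12.34.56/24
PrivateKey = <basement-private-key>

[Peer]
# The VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# 10.12.34.1/32 covers just tunnel traffic; 0.0.0.0/0 instead would
# also route all outbound traffic through the VPS
AllowedIPs = 10.12.34.1/32
# Re-send a keepalive every 25 seconds, keeping the NAT mapping open
# so inbound traffic can arrive at any time
PersistentKeepalive = 25
```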
The other method is:
HAProxy and PROXY protocol: a web browser tries to make an HTTP or HTTPS connection to the VPS. HAProxy (or nginx with ngx_stream_ssl_preread support) receives the connection: if it’s HTTP, it reads the request, forwards it to the basement server using the PROXY protocol over WireGuard, and relays any response back to the browser. If it’s HTTPS, it prereads which domain the browser is trying to access from the TLS exchange (for routing the request, in case there are different backends for different domains), blindly forwards the rest of the traffic to the basement server using the PROXY protocol over WireGuard, and relays any response back.
In this scenario, the PROXY protocol is used as a sort of ‘preamble’ to the backend connection, telling the basement server what client IP the connection was received from, sent before the request itself. Operationally, the VPS serves as no more than an application-level router, while any TLS exchange is actually handled at the server being forwarded to (i.e. only the basement server decrypts the traffic). In this case, the VPS doesn’t require any sort of private key to handle HTTPS connections; it just routes them instead.
There’s a lot more involved in setting it up, and it requires competency with HAProxy (on the VPS) to configure. Seek out “HAProxy SNI passthrough” guides to make full use of the SNI-based “routing”, and use “send-proxy” in the ‘server’ declaration in a backend. If the VPS is only to sit in front of one server, with no need to route between different backends, then plain “SSL passthrough” is possible too, for a simpler setup.
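A bare-bones sketch of what that SNI passthrough looks like in HAProxy (domains and IPs are placeholders; a real config would have more hardening):

```haproxy
frontend https_in
    mode tcp
    bind *:443
    # Wait briefly for the TLS ClientHello so the SNI can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # Route by the domain the browser asked for, without decrypting anything
    use_backend basement if { req.ssl_sni -i example.social }
    default_backend basement

backend basement
    mode tcp
    # send-proxy prepends the PROXY protocol header carrying the real client IP
    server homeserver 10.12.34.56:443 send-proxy
```

The backend on the basement server then has to be configured to expect the PROXY protocol header (e.g. nginx's `listen 443 ssl proxy_protocol`), otherwise it will reject the connection as malformed TLS.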
YouTube is the new controlled cable television. YouTube intentionally laid the groundwork (implementing Content ID) to hopefully win over the media companies (which they have), by tilting it in their favor. There was also the opportune moment of the Las Vegas mass shooting incident that was used as an excuse to inorganically shove MSM outlets to the front for any 'news-related' searches. There are also weird anomalies with content recommendation that almost feel like "re-education" recommendation bubbles. The more I didn't click on any of its forced recommendations, the more it saturated the page with them.
Attached are screenshots from YouTube in a private session, after I watched half-way into "Exposing Liberal Hypocrisy and Conservative Close-Mindedness" posted on "Big Think" on YouTube, then watched through several completely unrelated videos after, such as older Linux startup sounds; at that point it was 100% of the first-page recommendations, and this persisted for the majority of that night.
Correction: upon looking at what it sends to the server, it at least does wrap the key, but it sends the unlock password in the same request, and operations are done server-side. Attached are screenshots of the network traffic to the server for key creation, and for use of the key (sending a signed email), with the unlock password circled (intentionally a crap password, for demonstration).
arcanicanis (arcanicanis@were.social), Saturday, 21-Jan-2023 08:07:42 JST
Well that's fairly useless: apparently Roundcube's PGP plugin works the opposite of what most would expect: the private key is generated client-side, then sent unwrapped to the server, along with the password you want it to be wrapped by. Then for any future operations, you send the password to the server to unwrap it server-side, for the server to do any signing/decryption tasks and send you the plaintext in the HTTP response.
The only use-case I can make sense of for this is if you don't trust the device you're accessing the webmail on, even though that device could just export the key, and has all the info needed to unwrap and compromise the private key anyway. I really don't see any real-world use for this model, other than a corporate environment that wants things 'wiretappable'.
You seem very stuck on the point about hardware: there's nothing that imposes that it requires dedicated hardware. The point of specialized hardware is typically to have storage that's engineered so that you can't just crack it open and do an EEPROM dump to pull the keys out, or file down the coating of an IC to probe at internal parts of the storage to do the same. Or to have extra shielding to prevent voltage differences from giving off spurious emissions that could leak details about the key.
As for rubber-hose cryptanalysis: someone could also engineer a dual-purpose security token that by default acts like an ordinary innocuous flash drive, and through some procedure (some button, a fake write-protect switch, or some 'port knock'-like communication over USB to the device), have it switch over to presenting itself as an authentication token. Thus you could have something that looks and acts like any generic whitelabel consumer electronic, and have plausible deniability when crossing some very invasive border searches.
Also, as for some projects within the scope of the specification: there's only so much you can add/revise to a software project for something that's built/used for a very specific and narrow purpose (signing an input, and incrementing a counter), especially something meant to be minimalist, in contrast to something over-engineered like PKCS11 smartcards that can run Java applications.
It kind of kills the whole point of the standard, which is to not have keys that are just files on your computer, but instead on a separate device with its own storage and memory that typically prevents extraction. Same with people using TOTP applications on smartphones: any of that can be swiftly copied, as it's just another file, and the only enforcement against that relies on the security of your entire operating system to prevent that sensitive keying material from being read.
Meanwhile, dedicated hardware can be reduced down to the model described earlier: something that simply takes an input of specific parameters and signs it, with only a very narrow set of possible interactions.
It doesn’t “require” hardware; you could do a software-based token, but that voids the whole point. You could also MacGyver your own out of cheap hardware (as I mentioned in another reply), as a balance between the two.
But essentially, isolating key storage and cryptographic operations to a separate domain (separate CPU, RAM, storage) for certain applications (SSH public key auth, PGP, etc) is an improvement over doing the same on a general-purpose, complex, networked, multi-user desktop operating system engineered to be used by the average normie (versus something isolated down to significant degrees of minimalism and esotericism, far beyond just Qubes/Tails). Meanwhile, in recent days there were Twitter-people circlejerking about how a Bitcoin developer had their wallet compromised and assets dumped, trying to parade the moment as a “See? If they can’t even secure their own wallet, then how is this ready for real-world use?” moment.
Oh neat, I just found this project, which is very similar to another project idea I had on my list (whereas my previous project idea was to try making a 'ghetto' PKCS#11 token with a RPi Zero): https://github.com/mphi-rc/pi-zero-security-key
It can all be implemented within the web application; there's no need for delegation to a "FIDO server". It's also not a direct communication between the server and the token: the browser or operating system handles the CTAP communication to the token (such as filtering what 'RP ID' and other information is presented to the token, versus blindly passing through anything from the server), while the communication to the web application is a JSON-based format.
Within the web application, you're just generating a challenge, verifying a cryptographic signature (an ECDSA key with a SHA-256 hash, if I remember correctly) against a public key stored with the account, and keeping track of the signature count.
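The server-side bookkeeping really is that small. A minimal sketch of the two non-cryptographic pieces (the signature verification itself would be handed to a WebAuthn library, so it's deliberately left out here; function names are my own):

```python
import secrets

def new_challenge() -> bytes:
    """Generate an unpredictable challenge to send to the browser."""
    return secrets.token_bytes(32)

def check_sign_count(stored_count: int, asserted_count: int) -> bool:
    """The authenticator's counter must strictly increase between
    assertions; anything else suggests a cloned key or a replay."""
    return asserted_count > stored_count

# After the library has verified the ECDSA signature over the challenge:
assert check_sign_count(stored_count=41, asserted_count=42)       # accept
assert not check_sign_count(stored_count=42, asserted_count=42)   # reject replay
```

On success the application stores the new count (42 here) against the account, so the next assertion must come in higher still.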
arcanicanis (arcanicanis@were.social), Saturday, 14-Jan-2023 13:32:33 JST
Had been hopping between desktop environments today, from GNOME 3 to Budgie, to KDE (on X11 and then Wayland), and it's insane how stark the difference is from GNOME 3 to KDE: I feel like I actually have a modern and performant computer under KDE, versus the second or two it takes to launch some things under GNOME 3 (on a Ryzen 5600X, AMD RX 6650 XT, Samsung 980 NVMe SSD, etc).
For USB token auth, it's just an HID device that communicates using the CTAP protocol to serialize requests/responses, so there's not much capacity for it to talk to the outside world, unless you fabricate some RF-emitting component inside the token to transmit to some auxiliary wireless network to exfiltrate that information. It's just a very opinionated standard of public key authentication, anyone's free to implement hardware as they so choose.
My interest in it is solely for hardware-backed authentication, versus private keys that are resident within your filesystem or RAM (such as when a private key is unwrapped). You can also use a token for SSH public key auth for cheap.
Of course it still falls into a matter of trust of the hardware vendor, but that's also the same dilemma but on a much wider scale with most desktop computing hardware.
Nonetheless, as stated: my interest is in USB token authentication, used as a second factor of authentication. I'm skeptical of some usages, such as using a smartphone as a single-factor authenticator (regardless of whether it has its own isolated hardware cryptographic component); I only advocate for it within the former profile. There's also the standard itself, which is openly documented and inspectable (especially the device communication), and if it starts getting shoved into the wrong usage, then of course that's the time to raise hell, should any of the larger orgs steer adoption in the wrong direction.
And if the hardware attestation keys are dumped, congrats: the totality of damage you can do is claim that a new authenticator registration originated from a specific model of hardware; that's it. There's no damage beyond that from what I'm aware, as it serves no other role. And if a key is dumped, the service can just restrict any new authenticator registrations of that batch/model, versus suddenly revoking all devices registered before the key was leaked (unless the time period of the leak is unknown to within a span of years).
But yes, a whitelist-only policy is a possible issue, though it's unlikely for consumer services.
And further, on the communication and advocacy of FIDO2: there's the absurdity where Microsoft tries to garner attention for being a "pioneer" of the effort to "kill the password!", where they keep using the term "passwordless authentication", which understandably raises ire from any sysadmin. Apparently folks from Microsoft think they're playing some bigbrain 4D chess by using such wording, to get people to look into FIDO2 with clickbait-style tactics, when instead I believe it's scaring people away.
U2F/FIDO2 is a fairly interesting, minimalist, and robust concept, versus the mess of PKCS standards for smartcard authentication and management; it's just that Microsoft (and some others) are really botching the messaging about it.
You can use FIDO2 authenticators for two-factor authentication, and you can certainly implement it as such in any of your applications/services. The problem is that some online services, such as crap like Azure, take an over-opinionated approach where your option is ONLY single-factor hardware authentication, which I was bitching about previously here: https://were.social/@arcanicanis/posts/AKQyHBW6ajXA0F468e
> But the actual software being developed requires either spying device (aka smart phone) or worse, biometrics.
No. Not even remotely true. In the simplest implementation (from what I remember, just in brief summary) of a U2F hardware token, there are two storage components:
- Master key (or rather 'initial key', which is only generated once)
- Global signature counter
For 'registering' a U2F token with an account on an online service, the token is presented an RP ID (generally the protocol and domain part of the service being authenticated to); from that RP ID it derives a new private/public keypair (in-memory) from the stored master key using a key derivation function, and presents the generated public key to the service. All these cryptographic operations happen internally on the token.
Then when authenticating to the same service, the RP ID is presented again along with a challenge to the token; the same private/public keypair is derived in-memory again, a signature is created over the RP-provided challenge along with the current 'signature counter' state, and the resulting signature (as well as the 'signature count' it was signed with) is returned to the service, while the token internally increments the signature count by 1 in its internal storage.
The service verifies that the signature matches the public key on record, and also makes sure the 'signature count' is greater than it was the last time the account authenticated. If the signature count is the same as last time, or less, that would indicate a replay attack, and the assertion would be considered invalid.
In this scheme, there is a different public/private keypair for every service, yet in storage the token only has to remember one key and keep a signature count.
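That derivation property is the clever part, and it can be shown with a toy sketch. This is not the actual U2F derivation scheme (real tokens turn the derived seed into an ECDSA keypair); HMAC-SHA256 just stands in here to show how one stored secret yields a distinct, repeatable key per service:

```python
import hashlib
import hmac

# Stand-in for the token's master key, generated once at manufacture/reset
MASTER_KEY = b'\x01' * 32

def derive_seed(master_key: bytes, rp_id: str) -> bytes:
    """Derive per-service key material from the master key and RP ID.
    Deterministic: the token never stores per-service keys at all."""
    return hmac.new(master_key, rp_id.encode(), hashlib.sha256).digest()

# The same RP ID always re-derives the same key material...
assert derive_seed(MASTER_KEY, 'https://example.com') == \
       derive_seed(MASTER_KEY, 'https://example.com')
# ...while different services get unrelated keys from the one stored secret.
assert derive_seed(MASTER_KEY, 'https://example.com') != \
       derive_seed(MASTER_KEY, 'https://other.net')
```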
Meanwhile, in the further evolution into FIDO2, there's the addition of device attestation, whereby a separate private key and certificate are burned into a batch of 100,000 tokens/devices of the exact same model, which is ONLY for a service to verify the model and profile/capability of the token, as attested by its manufacturer.
Each hardware vendor has their own root certificate that certifies the hardware attestation certificate burned into the token or cryptographic component. Hardware attestation is only relevant to government and financial sectors that require hardware certification, or that require authenticators from specific vendors or profiles. It's an additional option that's not required for implementation in online services; it's just meant for organizations that want tighter control. For example: the US DoD could prohibit a DoD system from allowing some government contractor to register a Huawei-brand authenticator for their account. That's the intended use-case it's targeting with that addition. The attestation information is only presented upon registering an authenticator to an account for a service, and not used for any subsequent authentications.
As for biometrics and other components: that's all just internal to the implementation of the authentication token (whether it decides to sign a challenge or not), and just an advertised feature presented in the hardware attestation certificate; it NEVER sends biometric information in any response. https://fidoalliance.org/fido-technotes-the-truth-about-attestation/
> A FOSS-Extremist will not tolerate the concept of people actually getting paid to work on commercial software in any spectrum of acceptance.

Depending on what you mean, it's actually not about money or commerce; the licensing is about user freedoms. You're free to sell free software, make a business off it, or use it for any commercial purpose: https://www.gnu.org/philosophy/selling.en.html
I wouldn't say IPv6 is significantly complex; it's just difficult at first for people who are entrenched in thinking about everything in terms of address scarcity rather than abundance, such as with subnetting. I've taught/trained Zoomers on networking in the military, covering IPv4 and IPv6 simultaneously, and they didn't see IPv6 as much of a hurdle. In fact, subnetting is very often easier under IPv6, versus often having to use calculators for most allocations under IPv4. Meanwhile, when I was still in the military (just a few years ago), networks were consistently being restructured, with IPv4 addresses shuffled around to skimp around scarcity and changes in the size of various units.
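To illustrate why IPv6 subnetting rarely needs a calculator: with a typical /48 delegation (2001:db8::/48 is the documentation prefix, used here as a stand-in), every /64 LAN falls on a clean hextet boundary, so you just count up the fourth hextet:

```python
import ipaddress

# One /48 splits into 65,536 /64 LANs along a clean 16-bit boundary
site = ipaddress.ip_network('2001:db8::/48')
lans = site.subnets(new_prefix=64)

print(next(lans))  # 2001:db8::/64
print(next(lans))  # 2001:db8:0:1::/64
```

Compare that to carving a /23 of IPv4 into odd-sized chunks for differently-sized units, where the mask arithmetic is what usually sends people to a subnet calculator.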
3 out of the 4 US ISPs I've subscribed to in the past 5 years have provided service with CGNAT, and one other location where I had homeservers on fiber just switched to CGNAT less than two months ago:
- Boingo Internet (WISP, on most military bases) in 3 different states
- MetroNet (fiber), switched to CGNAT ~2 months ago
- and my current local ISP (which I'll not name, because it's a smaller ISP that would narrow down my location)
Spectrum/TWC was the exception, but wasn't available in some locations.