Yes, the problem with Bitchute, Rumble, Bluesky (yes, not a video site, but worth mentioning), and several other examples is that they grew a userbase almost solely out of reactionary motives and grievances over other platforms' moderation policies, and thus became easy to stereotype.
They are generally good enough at not alienating people in large numbers.
I don't know; I look at the modern-day Windows user and swear some of these people are just a whole different breed entirely, that they will adamantly cling to it regardless of whatever sabotage Microsoft does to Windows and its related products.
Nonetheless, further on the topic of "network effect" dynamics: I think that's actually much less of a problem in this situation, because content availability isn't dependent on first-party support; this would strictly be people downloading and seeding the content they like, without a needlessly overcomplicated architecture to do it (unlike Odysee, which I don't think ever had any third-party implementations).
I'm talking about software where you could literally just slap a specific channel URL into wget and have it start downloading everything that instance has archived of that channel, with supporting metadata, and without some ridiculously overengineered protocol in the way.
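To make that concrete, here's a minimal sketch (in Python, though wget or curl would do the same job) assuming a hypothetical instance that exposes a per-channel index file and the media as plain static files; none of these paths or names are real or final.

```python
# Hypothetical sketch; the whole point is that a channel archive is just predictable
# static files, so `wget -r` against the channel URL would work equally well:
#   https://instance.example/channels/<channel-id>/index.json
#   https://instance.example/channels/<channel-id>/<file>
import json
import urllib.request

BASE = "https://instance.example/channels/UCxxxxxxxx"  # placeholder instance + channel

with urllib.request.urlopen(f"{BASE}/index.json") as resp:
    index = json.load(resp)          # plain JSON listing of everything archived

for entry in index["videos"]:
    # every upload (and its metadata sidecar) is just a static object behind a GET
    urllib.request.urlretrieve(f"{BASE}/{entry['file']}", entry["file"])
```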
That just serves as an alternate player frontend, last I recall; I'm talking about something to actually store and seed media, and not limited to just YouTube but any backend that yt-dlp supports, and probably more (likely DID URL support too).
Hopefully I can get my project into a releasable state, and then have folks set up their own respective instances as a scatter of Tor/whatever sites (to avoid the most obvious imminent risk of DMCA threats), so the crowd of people that hate using the platform directly can finally get away from YouTube. It would also provide a means of self-publishing content (probably a la ActivityPub, RSS/Atom, etc.).
The fun thing with Tor is that I could just release an archive with all the necessary tools/dependencies, and all a person would have to do is essentially hit the "power on" button and have a running instance from their desktop or whatever.
It's insane how YouTube gets away with being so buggy and rife with UI-related race conditions. I've also had a time where every video would show the slider at 100% while the actual level was at 80%; these problems usually crop up when triggering playback before the page has fully loaded (which of course takes time, because of how ungodly bloated they make a single webpage now).
I already have a working concept of a web application that serves as a yt-dlp-backed ripper: it catalogues everything saved in a database, uses a deterministic file-naming format (that also allows codec variants of the same upload), presents an equivalent video-site frontend, and may soon be able to seed/rip off of other instances of the same software. All purely motivated by the last time I got pissed off with the YouTube UI (specifically the latest wave of A/B testing of yet another "new" UI change).
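To illustrate the deterministic-naming idea (this is a made-up scheme for illustration, not my project's actual format): the same upload always maps to the same base name, with codec/resolution variants distinguished only by suffix, so duplicates and variants are trivial to detect or store side by side.

```python
# Made-up scheme for illustration only, not the real format.
def archive_filename(extractor: str, video_id: str, vcodec: str, acodec: str,
                     height: int, ext: str) -> str:
    return f"{extractor}-{video_id}.{vcodec}+{acodec}.{height}p.{ext}"

print(archive_filename("youtube", "dQw4w9WgXcQ", "av01", "opus", 1080, "mkv"))
# youtube-dQw4w9WgXcQ.av01+opus.1080p.mkv
print(archive_filename("youtube", "dQw4w9WgXcQ", "vp9", "opus", 720, "webm"))
# youtube-dQw4w9WgXcQ.vp9+opus.720p.webm
```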
Notably, this is all exclusive to YouTube; I never have these issues on SoundCloud or the other platforms I make use of. I don't use page zoom, I don't use custom stylesheets, I'm very much the opposite of a power user and use it in the most generic way possible, but I still stumble across these hiccups occasionally.
Although Google has long abandoned all of it, having replaced most of it with successive proprietary replacements of their own; unless you're referring to something else more broadly, with TLS instead.
Pre-negotiating a relationship so you can skip these steps and just go "hi, it's me again, token auth, here it is, same settings as last time".
it was a military spec that was abandoned in draft because nobody but the military cared about cheaper session initiation
I think it's more down to whether other implementations had interest in implementing those XEPs or saw a purpose in them (not the military) for them to be approved by the XSF. Meanwhile, the majority of military use of XMPP I've seen is by people who don't even know what XMPP is, don't understand the concept of federation, or even grasp the interoperability of clients. And it's not about formally consulted/supported software; they'll just download whatever copy of ANY software they can find off the internet and try using it. I could write a wall of text about how backwards and "blind leading the blind" some of it is in execution (nothing to do with XMPP itself, but people grossly misunderstanding/misusing the tools available, XMPP included).
The parts of SASL I had an incomplete understanding of were the choice of the two blank values in the initial message (e.g. "n,,"; the first field is about channel binding, but I had no idea what the other two were meant for), as well as some confusion around escaping characters (such as in usernames or passwords), where things didn't seem to match between what some spec said and what active implementations were doing.
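For reference, per RFC 5802 the GS2 header only has two fields, the channel-binding flag and an optional authzid; the second comma just terminates the header before the bare message, and the escaping applies to "=" and "," inside usernames. A rough sketch of building a SCRAM client-first-message (nonce generation simplified):

```python
import base64
import os

def saslname(value: str) -> str:
    # RFC 5802: "=" and "," inside a username are escaped, not stripped.
    return value.replace("=", "=3D").replace(",", "=2C")

def client_first_message(username: str) -> str:
    nonce = base64.b64encode(os.urandom(18)).decode()
    gs2_header = "n,,"  # "n" = no channel binding; "" = empty authzid; "," ends the header
    bare = f"n={saslname(username)},r={nonce}"
    return gs2_header + bare

print(client_first_message("user,name=odd"))
# -> n,,n=user=2Cname=3Dodd,r=<random base64 nonce>
```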
It's at least a MUST-implement in XMPP Core, and thankfully SCRAM is ubiquitous in that ecosystem too (SCRAM being a trivial way to improve authentication, yet extremely underused elsewhere). There are some oddities with SASL and GS2, but I can't remember off-hand the questions/remarks I had.
I think it's an issue of expectations: presumably, any time these companies try fielding token auth for customer use, they test it on a focus group that's not representative of what the target demographic for token auth should be.
Instead, they probably get some of the most careless, apathetic, illiterate people, hand them a hardware token, watch the mess of them losing it within barely a month, struggling with recovery and reissuance, and whatever else, and then conclude "it's wasteful, degrades customer experience, creates new problems, etc." and can it.
Those types of people should never be the target demographic for trialing token auth. It should instead be the people who want to be DILIGENT in securing access to their account, who are willing to learn and OPT IN to using token auth, instead of treating it all as some binary "either it's absolutely EVERYONE, or absolutely NOBODY" business choice.
Even with the regression to exportable/cloud-syncable passkeys, there's no threshold left for the industry to dumb this down any further for the dumbest of human beings; and yet, even with all these compromises/regressions, we STILL don't have relatively ubiquitous public-key auth available as an option.
It's because of this regression-by-compromise, trying to appease the absolute dumbest, craterbrained user, that we still don't have public-key-based authentication as a mainstream thing (though my opinion is vendors have barely even tried).
So first we started with two-factor authentication: a physical hardware token plus a password.
But apparently that's just "too inconvenient".
Then the trend moved onto token-only authentication (passwordless), but apparently that's also just "too inconvenient".
Because apparently whatever target demographic these people are trialing any of it with is too retarded to maintain possession of an authentication token.
So now we're at the last regressive step: right back to software "tokens" (effectively just a keypair in a plain file) that people can back up to "the cloud" "for convenience!" (unlike hardware tokens, which by design don't allow exporting keys, precisely so they can't be duplicated and so real physical possession is required). Everything has been muddied down to convenience over all else, to appease the lowest common denominator (who probably still won't use any of it anyway).
Nonetheless, if I'm not mistaken about the implementation: I assume there's at least a separation of concerns where a person can still use hardware-token auth (U2F/FIDO) as the authenticator in a passkey context, rather than being regressed to the bottom with everyone else (a plain keypair file, stupidly synced to some online account).
Let me also remind you that there was a push for adopting native public-key authentication (though more rigidly as PKI-style cert auth) facilitated in HTML shortly after HTML 4.01 (~1999): https://udn.realityripple.com/docs/Web/HTML/Element/keygen
I've been watching this play out for ~2 decades and have seen fairly insignificant progress. We've only gone from shared secrets to expiring shared secrets (OAuth tokens), and not much else.
The only vendors I use where pubkey or two-factor auth is available are Namecheap, Porkbun, and Hetzner (and for one of my hosting providers, only proprietary Yubikey OTP). Meanwhile, of all the banking institutions I use, NONE offer any level of ACTUAL pubkey/two-factor auth. Because we have to cater entirely to the dumbest of human beings, and not give people the OPTION to OPT IN to more solid options. Because I guess if we allow there to be an option, some retard will find a way to snag themself in it and make it sound like it's the company's fault.
Why does Red Hat have to keep sawing themselves off at the legs? I was going to do an RHEL 10 workstation install, but apparently the ability to install KDE was removed a few releases ago (as a first-party option; not talking EPEL). This also makes it a bit ironic: RHEL 10 is to be "Wayland-only", and if they don't ship KDE as a first-party supported option, that means a GNOME Wayland-only desktop, which is probably the worst possible option.
I think any curriculum solely focused on just teaching someone a programming language is kind of pointless.
For some real-world objectives that actually teach how things work, make someone more well-rounded, and show how simply some things can be implemented, good goals (after getting the basic concepts down) could be:
Develop something that can read/write a particular file format
Develop something that implements a network protocol
Write a basic parser (JSON, XML, etc.) for bonus points
Some examples: a primitive HTTP client, a basic IRC bot, an SMTP client, reading an ODS/XLSX spreadsheet (with an XML lib), a PKZIP or tarball reader/writer, etc.
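As a rough sense of the scale I mean for the "basic IRC bot" item, a skeleton like the following (server, nick, and channel are placeholders) is already enough to teach sockets, a line-based text protocol, and simple state handling:

```python
import socket

# Placeholders for whatever network/channel a student would actually use.
HOST, PORT = "irc.libera.chat", 6667
NICK, CHANNEL = "demobot12345", "#bot-testing"

sock = socket.create_connection((HOST, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :demo bot\r\n".encode())

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:                                     # server closed the connection
        break
    buffer += data
    while b"\r\n" in buffer:
        line, buffer = buffer.split(b"\r\n", 1)
        text = line.decode(errors="replace")
        if text.startswith("PING"):                  # answer keepalives or get dropped
            sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
        elif " 001 " in text:                        # RPL_WELCOME: registration accepted
            sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
        elif "PRIVMSG" in text and "!hello" in text: # trivial command handling
            sock.sendall(f"PRIVMSG {CHANNEL} :hello back\r\n".encode())
```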
Projects like that would be astronomically more empowering than the ridiculous joke of what I see people doing in college courses, with prompts like "write a C# class for calculating the total price of a sale at a lemonade stand" (using floats, even...).
The "insult to injury" in all this, is that I just want to write a plugin for Cockpit that's visually consistent with the Cockpit itself.
But I also can't use Cockpit's copy of PatternFly, because they keep switching between versions, and evidently it's just "too hard" to ship separate major versions as separate files; ergo you can't use Cockpit's copy, you have to ship your own. Originally it was on v3, then v4, then v5, and now it's moving to v6.
As for microwave ovens and other appliances, if updating software is not a normal part of use of the device, then it is not a computer. In that case, I think the user need not take cognizance of whether the device contains a processor and software, or is built some other way. However, if it has an "update firmware" button, that means installing different software is a normal part of use, so it is a computer.
Imagine if we had a protocol that required you to open up your SQL server to the public internet (with access control on writes, or on reads of protected data, of course), and just let remote servers/clients query straight off your database, regardless of query complexity.
So how is Solid any different than that with SPARQL, N3, Shape Trees, etc?
Every time I look at the stack of protocols to Solid ( https://solidproject.org/TR/ ), it feels like the engineering mess that was the OSI protocols, of overcomplicating a [relatively] simple problem.
I'm not saying SPARQL, semantic data, and the like don't have utility, as I'm sure they're used in various massively-scaled production environments; but I don't see how you expect to have something publicly internet-facing where any entity on the internet can incur a heavy query on the server, or why you'd offload so much compute responsibility onto the server instead of making a dumber server (just like how you can achieve an ActivityPub implementation with just static files, since it doesn't have a query language).
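A hedged sketch of the asymmetry I mean (both hostnames are placeholders, and I'm not claiming any particular Solid pod is deployed this way): one request is an arbitrary query the server must evaluate, the other is a static document any dumb HTTP server or CDN can hand back.

```python
import urllib.parse

# A deliberately join-heavy query: its cost is chosen by the client, but paid by the server.
expensive_query = """
SELECT ?a ?b ?c WHERE {
  ?a ?p1 ?b .
  ?b ?p2 ?c .
  ?c ?p3 ?a .
}
"""
sparql_url = "https://pod.example/sparql?" + urllib.parse.urlencode({"query": expensive_query})

# Versus ActivityPub: the actor document is just JSON(-LD) at a fixed URL, cacheable anywhere.
actor_url = "https://social.example/users/alice"

print("expensive, server-evaluated:", sparql_url)
print("dumb static fetch:          ", actor_url)
```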
I could be uninformed (I only periodically peek at it and skim through some of the specs at times; I don't know it in depth), but I still don't see how this could be operated cost-effectively without being prone to trivial denial-of-service abuse.
On an unusual train of thought I had: it'd be an interestingly dark and depressing reveal in a dystopian novel to have a subtle detail where, whenever a tyrannical government covertly "disappears" someone who's a threat to it (anyone who isn't a major, highly-visible public figure), their online accounts are taken over by a government-run generative AI trained on their collected conversation history, which continues posting online and holding conversations in their likeness as if nothing happened (and without any sudden "change in tune" in beliefs or anything, like some 1984-style compelled speech).
You're following the story of the protagonist, who has made various connections with people along the way (some of whom they've met in person on a few occasions), but with whom they correspond primarily online, usually in a private and semi-pseudonymous fashion. Then, at some late point in the story, when the protagonist is about to pull off some big feat requiring the resources of those long-time connections, most of them uncharacteristically back out or give very indirect responses/excuses.
Through some deductive reasoning, careful probing, and other hints, the protagonist comes across hidden information revealing said government program, connects the dots, and realizes most of their contacts are compromised and probably haven't been alive for nearly a year, and that they're just being led on in an entrapment scheme to catch them, as one of the very last few of that circle still remaining.
Obviously I'm not any sort of storyteller or writer, but it'd be neat to see someone carefully work something like that into a story.
I stole a few ideas from did:plc and did:tdw, yes. It's just an experiment so far; I'm using it as a stand-in for other methods, as something I can adjust to my needs as I toy with DIDs in a way that keeps backwards compatibility with standard non-DID ActivityPub.
As it currently stands, there don't seem to be a lot of methods that clarify whether DID URLs are permitted with them or not.
There were a few adjustments I was going to add, such as specifying which other "authoritative" servers a did:fedi can be discovered from, maybe within the method-specific protocol.
Either way, I haven't been public about it yet. Just finished a basic key wrapping and serialization format to go along with it, and I'll probably push out a newer version of the generator demo (which presently lacks a polyfill for browsers that don't have native Ed25519 within WebCrypto) in a day or two. I'll probably be more vocal when I have results.
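For context (and to be clear, this is not the did:fedi wrapping/serialization format, just the standard did:key-style encoding of an Ed25519 public key, as a reference point for what the raw key material looks like), generating and multibase-encoding a key looks roughly like this:

```python
# Standard did:key-style encoding: 0xed 0x01 multicodec prefix + raw public key,
# base58btc with a 'z' multibase prefix. Requires: pip install cryptography base58
import base58
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_raw = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

prefix = bytes([0xED, 0x01])  # varint multicodec identifier for ed25519-pub
multibase = "z" + base58.b58encode(prefix + public_raw).decode()
print("did:key:" + multibase)  # e.g. did:key:z6Mk...
```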
As for the primer, that was probably over a year ago, and the mentioned FEPs even a year before that (all of those FEPs devised by @silverpill).