@cnx I've been surprised myself to find out that in 2024 it's apparently very hard to have a domain registered with a registrar without having some form of hosting attached (and paying for it).
A reminder that you should never use services like #SignRequest to handle your legal documents.
I have an account because I had to sign some of my past employment contracts through platforms like these.
Now that enshittification has removed all corporate ethical boundaries, I may have to upgrade to a premium account if I want to maintain access to my old contracts and documents, or my account will be removed with everything attached to it.
(Btw, companies already pay licenses to use services like SignRequest or Docusign - I'm not sure why they need to aggressively charge end customers as well).
I'm not even sure it's legal to delete legal documents from a digital platform because of sudden changes in marketing practices, with no guarantee that people can access content as sensitive as past employment contracts or tax forms without paying a premium for it.
In the meantime, please just send a PDF via email and ask the recipient to send a signed copy back to you.
There should be no space in this world for services with little to no added value like SignRequest that sell our sensitive information to other parties and even threaten to delete it if we don't pay, while providing just a thin layer on top of the "sign this PDF and email it" feature.
"We don’t like Nazis either—we wish no-one held those views. But some people do hold those and other extreme views. Given that, we don't think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse".
If you don't like Nazis, white supremacists and conspiracy theorists, you kick them out and demonetize their content, period.
I don't know what's so hard to understand. If you throw a party and invite someone who shits on the floor, vomits in other guests' cups and insults anybody with a different creed or skin tone, then all decent people will leave.
If you let Nazis hang out at your party, then your party will become the Nazi party, period.
And I'm sick of reading "we're becoming too polarized, liberals no longer want to share spaces with conservatives, extremes are winning" etc. No, we are NOT becoming more extreme on the left. Quite the opposite, as both in Europe and the US the left has (unfortunately) largely abandoned its social-democratic roots, and most of the parties have moved towards the political center. It's the other political side that has decided to go full in with their extremely distorted and intolerant views of society. And it's our duty, as a civilized society, to confine them to the sewage they belong to, before their shit talk takes over all the platforms, and their hateful ideas are normalized like it's the 1930s again.
Think of it for a moment: would you be welcome, as a gay or trans person, as a socialist/liberal, or as a member of a non-white / non-Christian minority, on platforms like Gab, Parler or Truth? The answer is a resounding no - and I can say that after being heavily bruised by encounters with people on those platforms.
Then why are we supposed to welcome Nazi scum on other platforms? We have a civic duty to show zero tolerance towards the intolerant if we want to preserve the principles of tolerance in our societies. If they want to talk to one another, let them build their own platforms for that - assuming there are Nazis with enough brains to run a social media platform without either ripping off Mastodon's codebase or doing hostile takeovers.
I'm looking for a service where I can just buy a domain name - no hosting attached.
I'm tired of 80% of my Bluehost bills being for hosting+WordPress (a service I've never used) when all I need is just a domain name.
And I don't like Bluehost's dark patterns either - in their new UI they've made sure to bury DNS management behind many, many clicks under the advanced tab, complete with a non-dismissable warning. I'm not sure if they're planning to make it a premium feature in the future.
Requirements:
1. It needs to be certbot/Let's Encrypt friendly - that excludes GoDaddy (see the sketch right after this list)
2. It needs to provide me with the ability to register as many subdomains as I want and manage the DNS records however I want
3. No mandatory hosting+WordPress upsells. If I never asked for it, it means that I don't want it
4. I'm also happy (if not happier) if it simply allows me to point to my own DNS server. I really just need the domain name, I can run my own DNS
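For reference, this is what requirement 1 boils down to in practice - a wildcard certificate via the DNS-01 challenge, sketched with a hypothetical domain. It only works if the registrar (or your own DNS server) lets you freely publish TXT records:

```
# Request a wildcard cert via the manual DNS-01 challenge; certbot will
# ask you to publish a TXT record at _acme-challenge.example.org before
# it validates. A registrar DNS API/plugin can automate the same flow.
certbot certonly --manual --preferred-challenges dns \
  -d example.org -d '*.example.org'
```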
Of course, becoming my own ICANN-accredited registrar may be an alternative, but for obvious reasons I'd rather avoid that path if possible.
Migration from #Bluehost to #Porkbun accomplished for my two personal domains.
Migration steps: set up two glue records that point to my existing DNS servers. End. (And carry over the DS records too, if you use DNSSEC.)
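If you want a sanity check that the transfer went through cleanly, dig against the parent zone does the job (hypothetical names - swap in your own domain and your TLD's servers):

```
# The delegation, with the glue (the A/AAAA records for your nameservers)
# returned in the ADDITIONAL section:
dig +norecurse example.org NS @a0.org.afilias-nst.info
# (find your TLD's authoritative servers with: dig +short NS org.)

# If you use DNSSEC, confirm that the DS record survived the move:
dig +short example.org DS
```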
I've spent the last couple of years with a service that tried to convince me that there was no way, in the 2020s, of buying only a domain name and pointing it at my mail and name servers. That I *had* to buy the whole package of hosting, WordPress, Microsoft email and Office 365, even if I intended to use none of those features. And I probably had to thank them, because at least they still allowed me to generate my own certificates with certbot rather than forcing me to buy their own, like GoDaddy does.
And in the latest rollout of their UI, Bluehost has also taken care of hiding the DNS configuration panels behind 5-6 layers of dark patterns - advanced settings buried under advanced tabs, pointless warnings that can't be dismissed - and they have even made it impossible to configure your own nameservers.
Then I just walked around the corner, and I found a service that's like "you want a name? Sure. No hosting, no mail, no Wordpress? Sure. You run your own DNS? Sure, just pass the glue records before the transfer is complete. We'll take care of everything else. With no downtime. And with a free API. And for a tiny fraction of the previous bills. Have a nice day".
It feels like enjoying a gourmet dinner for the first time after a couple of years of McDonald's.
Bi-directional power generation is the future of sustainable grids.
If I have solar panels that produce more energy than I need, I push that extra energy back to the grid so others can use it. That reduces the demand for dirtier energy by design.
If I pay for consuming electricity, then I should be paid for creating electricity.
Unfortunately, more and more energy companies seem to be going in the opposite direction. As many struggle to remain profitable now that they're finally forced to pivot away from dirty energy generation, they're desperate to find other sources of revenue. And they couldn't come up with anything smarter than turning domestic electricity generation from a (tiny) cost for them into an undeserved revenue stream. These companies would literally prefer you to waste your excess electricity, or use it to mine Bitcoin or whatever, rather than feed it back to appliances that may need it. They could invest more in storage, if they really have a problem with excessive loads on the grid from domestic production, but in a competitive market with thin profit margins it's always easier to charge the customer - long-term investments are often seen as a liability rather than an asset.
If you are Dutch and you have a contract with Budget Energie, or with any other company that has irrationally decided to turn electricity production into a cost for the producer, then consider terminating your contract with them immediately. The world doesn't need these parasites who readily sacrifice long-term viable business models on the altar of short-term profitability.
p.s. Yes, storage technologies are the proper solution to the problem, but they're currently expensive, I get that point. So these companies may be thinking of getting the owners of solar panels to share the costs. It may sound like a good solution for profitability in corporate meetings, but it upends the whole system of incentives that we've put in place to encourage people to move away from centralized grids. We need more large batteries on the grid, and we need them right now. If that kind of long-term investment is too expensive/unprofitable for private companies, then the government needs to take ownership of the problem. Governments have poured billions into recovery plans and military aid. Governments *can* be big when they want to. So what's preventing them from ensuring that our own energy grid is as future-proof as it can be?
Several websites now have #RSS feeds that seem to block bots - Reuters, ANSA and Dutch Review are among those.
I noticed because several feeds had become unavailable on my Miniflux instance over the past few days.
Apparently setting HTTP_CLIENT_USER_AGENT to a Firefox/Chrome user agent rather than a string that contains "Miniflux" is enough to bypass the block.
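For other Miniflux admins hitting the same wall, the whole workaround is one config line - HTTP_CLIENT_USER_AGENT is Miniflux's documented option, and the browser UA below is just an example string:

```
# In /etc/miniflux.conf, or in the container's environment:
HTTP_CLIENT_USER_AGENT=Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0
```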
This kind of stuff just baffles me.
1. How do they expect people to consume RSS feeds? From their browsers? That's a bummer, because neither Firefox nor Chrome has rendered RSS/Atom content types for the past decade, so consuming the content would require people to be quite fluent in reading raw XML.
2. If for some reason they expect people to consume feeds from the browser, then how are they going to notify people that there's a feed available when they navigate to their page? Reuters doesn't even bother to use a <link> element in the DOM, for instance, nor does it bother to tell folks about the feeds on the homepage.
3. If, realistically speaking, feeds can no longer be read in a browser in 2024 (sure, there are folks like me that use custom Firefox extensions, but realistically we're <0.1% of the traffic), then of course the only alternative is an offline aggregator. So what's the point of blocking bot user agents, if that's exactly the way things are intended to work?
4. How is a mechanism that simply throws a 403 if the request comes from a user agent containing e.g. "Miniflux" or "libcurl" supposed to be "bot protection", when I'm only one step away from spoofing my user agent? (See the curl sketch right after this list.)
5. If these folks are really so hostile towards feeds, then why do they even bother to still run feeds?
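To make point 4 concrete, here's the entire "protection", sketched against a hypothetical feed URL:

```
# Same URL, same client - only the User-Agent header changes.
curl -sI -A 'Miniflux/2.1' https://news.example.com/rss      # -> HTTP 403
curl -sI -A 'Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0' \
  https://news.example.com/rss                                # -> HTTP 200
```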
My proposal: all large news outlets should have mandatory support for RSS/Atom feeds, properly advertised as a <link> tag and/or on the homepage, and with no barriers (especially barriers as dumb as a static UA test).
Being a large news outlet (especially, as is often the case in Europe, a large publisher partly funded by public money) means that your information *must* be accessible even to users who don't/can't read your articles in a standard web browser - especially if we want to set up automatic alerts/notifications based on some events. Twitter and its APIs used to be a temporary replacement for this kind of service until Musk took over; now that the risks of delegating the delivery of information in the public interest to a private for-profit business are clear, we need legislation that enshrines the duty of large news providers to adopt open feeds as a way of delivering content.
Sure, I can technically bypass all the dumb barriers and all the pointless friction points that both browser manufacturers and news outlets add to discourage people from using feeds. But at some point I just run into technical fatigue. Open feeds for large outlets that deliver critical news should be a mandatory requirement, not a war fought only by tech-savvy citizens on an individual level.
Plans are moving forward to pull the plug on this instance and migrate to a new one.
Running Mastodon really isn't fun anymore. I need 100GB of space on an S3 bucket just to store a cache that I can never delete, and 6GB of RAM constantly allocated by Redis or sidekiq just to run an instance with me and 3-4 more active users. There must be a better way.
I have registered a new domain, configured the DNS, and I'm currently toying with Akkoma to see if it meets my needs. I may also toy with CalcKey before taking any decisions.
It may be challenging for me to immediately replicate all the features of this instance on a new one (especially if I opt for Pleroma/Akkoma, as I'm far more familiar with Ruby than with Elixir/Nix), but it may be a good opportunity to experiment with some new shiny toys.
Of course, I'll personally reach out to the active users on this instance to check if they need any assistance with migration / data dumps before it shuts down.
@tshirtman There are usually scheduled tootctl scripts that admins run to clean up stuff older than a certain threshold. But the cache folder itself can't simply be wiped without basically everything on the instance breaking (including emojis and avatars), and it's often very painful to fix it.
It seems like one of those bad design decisions where the cache folder has been used as a dropbox for almost everything, including things that the software is supposed to store long-term.
So even with the cronjobs scheduled to run every day and very little local traffic, an instance connected to a few big relays ends up having many GBs of media cache that it can't touch.
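For reference, the cleanup I'm talking about is a daily cron of stock tootctl subcommands - the retention windows here are just examples:

```
# Run from the Mastodon installation directory:
RAILS_ENV=production bin/tootctl media remove --days 7           # remote media cache
RAILS_ENV=production bin/tootctl preview_cards remove --days 14  # link previews
RAILS_ENV=production bin/tootctl media remove-orphans            # files with no DB record
```

Even with all of that in place, the bulk of the cache directory remains untouchable for the reasons above.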
My new social home is almost ready at @fabio. All of my followers can start following me there, if they want to.
Checklist:
✔️ Configure DNS + nginx + SSL boilerplate
✔️ Install Akkoma
✔️ Clone profile + settings
✔️ Import list of followed accounts
✔️ Import list of blocked/muted accounts
✔️ Clone list of blocked instances
⌛ Clone posts (if someone has a quick way to migrate posts from Mastodon to Pleroma/Akkoma, please let me know, before I dive down the rabbit hole of either SQL or API scripting)
⌛ Replace all references of `rel="me"` and my PGP keys to point to the new profile
⌛ Formalize the transfer and move all the followers along
⌛ Help/wait for the few users on this instance to export their data before shutting down social.platypush.tech
I may speed up things a bit more because Redis on my instance is dying with out-of-memory every couple of hours now. And I'm running a dedicated Linode box with 8GB of RAM, not an early Raspberry Pi.
Not only is Mastodon not meant to scale - it fails miserably even at managing things at a micro scale.
You can't tell people "we're democratizing things, everybody can run their own instance" if running even a personal instance with a decent amount of connections to the rest of the Fediverse involves spinning up a machine with 8GB of RAM and either 100GB of storage or a decent S3 bucket - and forget running this stuff on your home network if you're planning to plug it into a relay.
Its bad design decisions turned it into the Bitcoin of social platforms, and it has become a liability for the Fediverse more than an asset. I'd have saved quite a lot of money had I decided to switch to Akkoma earlier.
Another textbook example of enshittification: profit through rental when profit margins become too thin.
Hardware companies are no longer happy with making money out of the hardware purchases that you make. And not even with all the data that they scoop and sell about you.
Their ultimate goal is for you to pay a subscription in order to keep using their hardware.
Everybody wants you to subscribe to everything. Everybody wants money to come in no matter what.
This race to the bottom has become so nauseating that it requires public intervention.
Subscription-based models can obviously exist, but, especially in the case of hardware, they should always be sold as add-ons on top of the physical product.
The product should be able to operate with or without the subscription. And even if the producing company goes out of business.
Finally, I'm no legal expert, but I don't see how "genuine cartridge check" processes and the like can be compatible with the right to repair - and with interoperability more broadly.
Ok, git.platypush.tech has successfully been migrated from Gitea to Forgejo.
It didn't take long. It is indeed a drop-in replacement - although the systemd configuration packaged with Arch required a bit of tweaking to point to the previous Gitea paths, plus some permission changes; only then was it an actual drop-in replacement.
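For other Arch users, the tweak boils down to a systemd drop-in along these lines - the paths and user here are assumptions from my setup, adjust them to wherever your Gitea data actually lives:

```
# /etc/systemd/system/forgejo.service.d/override.conf
[Service]
User=git
Environment=GITEA_WORK_DIR=/var/lib/gitea
ExecStart=
ExecStart=/usr/bin/forgejo web --config /etc/gitea/app.ini
```

Plus a chown of the old Gitea directories to the service user, a `systemctl daemon-reload` and a restart.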
Luckily I've always used `git` as a username rather than `gitea`, so that part only required a few configuration changes.
Tip: never, ever use the default username provided by your code forge (gitlab, gitea, forgejo...). Code forges come and go, and one after the other they are doomed to enshittification. Code repositories, configurations, scripts and documentation are there to stay instead. So never use a username that is tied to your code forge, or you'll have a hard time changing it later.
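The reason is mundane: the forge's SSH user is baked into every clone URL, remote, deploy script and CI config out there. A sketch with hypothetical hosts:

```
# Forge-agnostic: survives any Gitea -> Forgejo -> whatever migration.
git clone git@git.example.com:me/project.git

# Forge-specific: needs a mass rewrite the day you move on.
git clone gitea@git.example.com:me/project.git
git remote set-url origin git@git.example.com:me/project.git   # per-repo toil
```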
@santiago I'm now in the process of migrating to #Forgejo, which is supposed to be a fully FOSS fork and drop-in replacement for Gitea - let's see how long this lasts before pushing all of its users to their cloud offering too...
Then Github got acquired by Microsoft, started taking down repos and accounts at every whim, started violating FOSS licenses by training closed models on our open code, and I started self-hosting my own Gitlab instance instead.
Then Gitlab enshittified, its open offering started falling apart and serious bugs were left open for months, it started to aggressively push users to their cloud offering, it even embraced openly hostile practices (such as deleting repos and accounts that hadn't been active for a few months), and I migrated to Gitea.
Now Gitea is also enshittifying, providing features in Gitea Cloud that aren't available in the open core (one example: a stupid limitation on the maximum number of users allowed on a server, when it's my own fucking business how many users I store in my own database on my own machine), and it probably won't take long before they start pushing their users to migrate to the cloud version by breaking even more basic features in their open version. So I'm in the process of migrating to Forgejo.
Luckily I have enough technical skills to migrate things around and even patch code. But just because I can doesn't mean that I should. Especially when the trend is one code forge migration per year, just because all of them have decided to enshittify and are trying to upsell me stuff that I either never asked for in the first place, or that used to be taken for granted and suddenly became premium.
I just want a service to upload and share my own open projects; I'm fine running it on my own machines with no customer support involved, and I'm even fine contributing to their codebase. I'm not asking much. I shouldn't have to go through so much trouble. I haven't always been so hateful towards the techbiz, but the techbiz has made me this embittered, because business people keep breaking *our* tech on a daily basis in the name of "monetization" and "everything must be on our cloud", and I'm forced to invest all of my time and energy in seeking alternatives.
I seriously just want all these business parasites that have taken over my industry gone and dead by now. I fucking hate every single one of you.
Hosting your projects on a truly free platform that is relatively shielded from risks of enshittification has become a challenge.
I understand those who feel like giving up, but personally I will never give up.
Exhaustion/apathy from continuous enshittification and hostile practices is exactly the final goal of platforms like Github. I won't let them win this war, even if it costs me one source forge migration per week, or writing my own. And I hope that more developers join me in this war (because it is a real war against surveillance capitalism that we're fighting) rather than giving up.
Once Forgejo has full federation support, its greatest pitfall (fragmentation, and the need for one account per platform) will also be mitigated.
I guess that it's again rural-vs-urban, old-vs-young. More than half of the folks in Amsterdam, Utrecht and Groningen vote again and again for center-left parties, and again we lose to a countryside made of folks who are hostile to anyone who doesn't look like them. The productive centers of our economy, and the generation currently at the peak of its productivity, have basically lost political representation to the countryside and the elderly, purely because of demographics.
I'm tired of seeing my generation robbed again and again of its chance to elect a new, better political class. All because bashing the migrant and replacing solutions with scapegoats works well with the old, with the rural and with the uneducated, without requiring any further political skills or acumen. Say that you'll ban the Quran, kick Moroccans out of the country and make it hard even for EU migrants to come in, and who cares if you don't have a clue of how to tackle climate change, solve the housing crisis or the labour shortage. Angry people want scapegoats, not solutions.
I guess that the time has come for me to look at new destinations outside of here. This country deserves to lose all the skilled migrants that built its fortune, as well as all the less skilled migrants that keep things running.
I refuse to be amicable to anybody who gave their vote to #Wilders. I refuse to be amicable to someone who sees me as a second-class citizen only because I wasn't born here. Me and those who vote for this scum don't even belong to the same species anymore.
There are a few generalizations in this article, but it mostly nails my thoughts on the current state of the IT industry.
Why can we watch 4K videos and play heavy games in hi-res on our new laptops, but Google Inbox takes 10-13 seconds to open an email that weighs a couple of MBs?
Why does Windows 10 take 30 minutes to update, when within that time frame I could flash a whole fresh Windows 10 ISO to an SSD drive 5 times?
Why do we have games that can draw hundreds of thousands of polygons on a screen in 16 ms, but most modern editors and IDEs need the same time frame to draw a single character, while consuming a comparable amount of RAM and CPU?
Why is writing code in IntelliJ today a much slower experience compared to writing code in vim/emacs on a 386 in the early 1990s? And don't tell me that autocompletion features justify the difference between an editor that takes 3 MB of RAM and one that takes 5 GB of RAM to edit the same project.
Why did Windows 95 take 30 MB of storage, but a vanilla installation of Android takes 6 GB?
Why does a keyboard app eat 150-200 MB of storage and often account for 10-20% of the battery usage on many phones?
Why does a simple Electron-based todo/calendar app take 500 MB of storage?
Why do we insist on running everything in Docker containers that take minutes or hours to build, when most of those applications would run just as well on the underlying bare metal?
Why did we get to the point where the best way of shipping and running an app across multiple systems is to pack it into a container, a fat Electron bundle, or a Flatpak/Snap package - in other words, every app becomes its own mini-OS with its own filesystem and dependencies, each with its own installation of libc, coreutils/busybox, Java, Python, Rust, node.js, Spring, Django, Express and all? Why did we decide to solve the problem of optimizing shared resources in a system by just giving up on solving it? Just because we assume that it's always cheaper to add more storage and RAM?
Why does even a basic hello world Vue/React app install 200-300 MB of node_modules? What makes a hello world webapp 10x more complex than a whole Windows 95 installation?
We keep repeating "developer time is more expensive than computer time, so it's ok for an application to be dead inefficient if that saves a couple of days of engineering work", but I'd argue that even that doesn't apply anymore. I've spent the last couple of years working in companies where it takes hours (and sometimes days) to deliver a single change of 1-2 lines. All that time goes into huge pipelines that nobody understands in their entirety, compilation tasks that pull in GBs of dependencies just because a developer at some point wanted to try a new framework or flavour of programming in a module of 100 LoC, wasted electricity that goes into building and destroying dozens of containers just to run a test, and so on. While pipelines do their obscure work, developers take long, expensive breaks browsing social media, playing games or watching videos, because often they can't do any other work in the meantime - so much for "optimizing for engineering costs".
How come nobody gets enraged at such an inefficient use of both computing and human resources?
Would you buy a car that can run at 1% (or less) of its potential performance, built with a process that used <10% of the available engineering resources? Then why do we routinely buy and use devices that take 10 seconds to open a simple todo app in 2023? No amount of splash screen animations can sugarcoat that bitter pill.
The thing is that we know what's causing this problem as well.
As industries consolidate and monopolies/oligopolies form, businesses have fewer incentives to invest engineering resources in improving their products - or to take risks with the development of new products or features based on customers' demand.
That creates a vicious cycle. Customers lower their expectations because they get used to sub-optimal solutions - that's all they know and all they're used to. That drives businesses to take even fewer risks and enshittify their products even more, as they know that they can get away with even more sub-optimal solutions without losing market share - folks will just buy a new phone or laptop when they realize that their hardware can no longer store more than 20 Electron apps, or when their browser can't keep more than 10 tabs open without swapping memory pages. That drives the bar further down. Businesses are incentivised to push out MVPs at a frantic pace and call them products - marketing and design tricks will cover the engineering gaps anyway. Moreover, companies now have one more incentive to enshittify their products: if the same software can no longer run on the same device, they make money out of the new hardware that people will be forced to buy (because, of course, they've made it hard to repair or replace components on the existing hardware). And the cycle repeats. Until you reach a point where progress isn't about getting new stuff, nor about getting better versions of the existing stuff, but just about buying better hardware in order to do the same stuff we used to do 10-15 years ago.
Note however that it doesn't have to be always like this. The author brings a good counter-example: gaming.
Gamers are definitely *not* ok if a new version of a game has a few more ms of latency than the previous one. They buy expensive hardware, and they expect the software they run on that hardware to make the best use of the available resources. As a result, gaming companies are pushed to release titles that each draw more polygons on the screen than the previous version, while not requiring a 2-10x bump in resource requirements.
If the gaming industry hadn't had such a demanding user base, I wouldn't be surprised if games in 2023 looked pretty much like the SNES 2D games back in the early 1990s, while using up 100-1000x more resources.
I guess that the best solution to the decay problem that affects our industry would be if users of non-gaming software started to have expectations similar to their gaming fellows', and could just walk away from the products that can't deliver on them.
We finally start to see some interesting applications of #quantum algorithms.
The algorithm for motion tracking proposed in this paper isn't very different from the classic ones. You get a sequence of frames from a video [t-1, t, t+1], do absolute subtraction, get the changes, group them into segments, and track the changes of those segments over time.
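For context, here's a minimal sketch of that classical baseline, assuming grayscale frames as NumPy arrays (the function name and threshold are illustrative):

```python
import numpy as np

def motion_mask(prev_f: np.ndarray, cur_f: np.ndarray, next_f: np.ndarray,
                thresh: int = 25) -> np.ndarray:
    """Absolute frame differencing over a [t-1, t, t+1] window."""
    # Cast to int16 so the subtraction of uint8 pixels can't underflow.
    d1 = np.abs(cur_f.astype(np.int16) - prev_f.astype(np.int16))
    d2 = np.abs(next_f.astype(np.int16) - cur_f.astype(np.int16))
    # A pixel counts as "moving" only if it changed in both consecutive
    # diffs, which filters out one-off noise; the mask is then grouped
    # into segments (e.g. connected components) and tracked over time.
    return ((d1 > thresh) & (d2 > thresh)).astype(np.uint8)
```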
However, scanning the frames and performing operations on individual pixels is a big bottleneck in the traditional algorithm, no matter how much we try and be smart or parallelize the operation.
Reducing space complexity by simultaneously exploring multiple paths (thanks to superposition) is exactly where quantum algorithms shine.
I'm just not sure, though, how much it costs to convert a "classical" video into the "quantum" domain and back - that may be the bottleneck of the proposed approach.
We may soon lose the ability to send new missions to space. If all the current plans for constellation launches proceed unimpeded, we may soon reach one million (or more) satellites in orbit, plus tens of billions of smaller fragments. Launching anything through such a busy sky without hitting something may become impossible.
We may also lose the ability to observe any astronomical events from earth. At least Elon Musk was forced to coat his satellites in pitch black and keep them small, while AST's satellites are 64 times bigger and brighter.
And, when you talk to them about these problems, they'll respond with the usual "we're working with astronomers bla bla" (then why did you launch a cube as big as an apartment that already outshines every star in the sky?), or "at least we're only planning to launch 90 of them, while others are planning to launch tens of thousands" - which is like saying "yes, I shouldn't dump my garbage in the middle of the street, but why don't you talk to my neighbours too, who are dumping way more garbage than me?"
Years ago, Western countries had functioning governments, which would usually prevent a private corporation from taking over a public resource (like our own sky) in a way that harms everyone else in the process.
Nowadays, nobody even bothers to blink. If the market has decided that we all need dozens of 5G constellations in the sky that are brighter than the brightest stars and will harm any future plans of space exploration, then let it be, I guess. Apparently we're relying only on the goodwill of the CEOs of these companies to "take initiative and speak to astronomers", not on a functioning government that does its regulatory job.
:platypush: Tinkerer and main #developer @ #Platypush
:mastodon: #MastoAdmin @ social.platypush.tech
:booking: Senior #software engineer @ Booking.com
⚙ #Automation addict
🤖 #AI builder
:linux: #Linux user since 2001
🔓 #FOSS contributor
:arch: Prone to unsolicited "btw I use #Arch" statements
🏡 #SelfHost all #tech!
🔬 Open #science and open #data advocate
🎶 #Music geek
🎸 #Guitarist + occasional composer
🛹️ #Skater
🏄 #Surfer
👪 #Dad of a small geek
🇮🇹 ⇒ 🇳🇱