F2FS had been emerging in the embedded world for a while and has generally proven to be stable. I think there are a few Android handset vendors using F2FS on the flash storage of some phones as well. Overall it was shaping up to be a well-adopted filesystem. Then some time later, Microsoft randomly granted royalty-free use of exFAT under Linux, and I wonder if that took some attention away from F2FS.
Last thing on filesystem news to keep an eye on is bcachefs, which has plenty of the selling points of ZFS and more, has no CDDL license issue, and may get mainlined within a year or two.
XFS itself doesn't offer data checksumming, only metadata checksumming, so it falls short in a comparison with ZFS; XFS vs ext4 is probably the fairer comparison. There is also Stratis, which makes use of XFS to build pooled storage/software RAID, with the goal of being a ZFS alternative: https://stratis-storage.github.io/
But yes, in my main desktop I originally started with a 1600X, and for some reason there'd be times I'd come back to it the next day and it'd be unresponsive, forcing a restart. I upgraded to a different processor and haven't had an issue since.
Meanwhile, there is definitely a known defect in 1st-generation Ryzen under oddly-specific heavy-load situations, and I suspect the one I have may be affected, going by the serial number. But I never really followed through on the RMA for that one.
Of all the worst possible fates someone could face, I’m sure the most sinister thing to wish upon someone would be “..I hope your ZFS server has intermittent hardware issues..”
I had a compound issue of a defective CPU and, later, bad RAM on a TrueNAS Core setup, and I originally suspected it all to be from a poorly designed cheap motherboard. There were times when the system went unresponsive or kernel panicked, and it would usually take like 2-5 minutes to get past the boot screen (or sometimes it would only advance after a keypress, despite no errors or prompts). Eventually it got so erratic that I absolutely had to take it offline and do an extensive memtest, but there’d never be any RAM errors; weirdly, sometimes memtest itself would completely freeze. Then the system couldn’t even start anymore.
So I said ‘screw it’ and bought a new motherboard, moved over the CPU and RAM, and it couldn’t boot on the new motherboard either. So I took my desktop CPU, put it in the new motherboard, and it booted fine. Then I moved my desktop CPU back and carried the RAM from the server build over into my desktop; that booted fine too. So I deduced that the CPU must be defective.
I had a 3-year warranty on it with NewEgg, went to file a claim, got redirected to the insurer’s website (the insurer having been bought out by Allstate), and the dumbest thing was: the only option available was to bring it to a computer repair shop, and the insurer would pay for the cost of repair. But.. it’s a processor, it’s not a serviceable component, I actually AM the repair shop (this was a commissioned home server build for someone else), and I had concluded it needed to be replaced. So I [politely] fought with them for days over the most patently absurd circular logic: it was essentially as if they had miscategorized the AMD processor SKU as a complete, serviceable desktop computer, and even after I explained this, rationality wasn’t there. The CPU was still under manufacturer warranty, so I contacted AMD instead and made my case, sent it in, they confirmed it was indeed defective, and sent a replacement (moral of the story: DO NOT buy a warranty on a processor from NewEgg; the manufacturer warranty already exceeds what they offer, and AMD will handle it more sanely).
In the interim of the CPU RMA, I ordered a different processor just to get the build going again, and it’s been running just fine for them ever since.
…
But then I had a nice spare AM4-socket server motherboard lying around that was concluded to be fine, plus the replacement processor that later arrived per the RMA, both unused. So I figured I’d just use the parts myself as an upgrade to my existing fileserver. Since the board took DDR4 while my existing hardware was DDR3, I needed to order DDR4 RAM for it.
The RAM comes in, I move my hard drives over to the new CPU/RAM/motherboard combo, and put it all in a nice rackmount case. I make sure my VMs and everything start fine, and everything’s working. A month later: kernel error messages and kernel panics. I test the RAM; clearly defective. I send it in for RMA. The replacement RAM comes, I install it and test it, and the RAM test fails AGAIN. I move the presumed-defective RAM into my desktop and test it there; it fails there too. I RMA the replacement RAM, and then finally get fully functional RAM.
Everything runs great for many months. Then I start to rework my VM setup, moving some to a different system (such as a separate machine intended as a game server) and/or shutting unused VMs off. A day or two later, a couple of minutes after I close my last SSH session to that server, it suddenly goes offline.
I turn it back on, everything’s running again, and I can’t find any clear cause: nothing in any logs, nothing that stands out in the BMC. A day or two later it happens again. I start digging around online for answers, especially from people using the exact same ‘server’ motherboard, and I find recommendations to change power management options in the motherboard firmware settings, one option called “Power Supply Idle Mode”. Apparently if CPU usage drops low enough, the CPU enters such a deep low-power idle state that some finicky power supplies just shut themselves off.
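Not something I thought to check at the time, but if you ever want to see whether a box is actually reaching those deep idle states, Linux exposes per-state counters through the cpuidle sysfs interface. A minimal sketch, assuming a Linux system with cpuidle enabled:

```python
# Minimal sketch: show how often/long cpu0 has sat in each idle state,
# via Linux's cpuidle sysfs interface. The deepest states (e.g. C6)
# are the ones some finicky PSUs react badly to.
from pathlib import Path

for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
    name = (state / "name").read_text().strip()     # e.g. POLL, C1, C2, C6
    usage = (state / "usage").read_text().strip()   # times this state was entered
    time_us = (state / "time").read_text().strip()  # total residency, microseconds
    print(f"{name}: entered {usage}x, {time_us} us total")
```

If the deepest state shows heavy residency right around when the box dies, that points at the same PSU-idle interaction.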
After correcting that setting, it’s been running for several months uninterrupted.
But man, that was the most stressful series of events, especially since it just had to happen ONLY in my fileserver use-cases (two sets of bad RAM, even), but ZFS survived it all. All of this occurred earlier this year.
Oh fun, I'm assuming a Raspberry Pi, or? I wonder if F2FS has gotten any reasonable use in the SBC world. Haven't had issues with ext3/4 so far in my life, but I can understand that it might not be a good option on degradable flash storage.
Just for some perspective on price proportionality: I spent $560 for four 4TB drives (16TB total) in 2017, and just spent $520 for two 16TB drives (32TB total) today. It's very interesting how they still cram more into CMR-style hard drives while price efficiency also continues to improve.
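To put the arithmetic on it, using just the numbers above:

```python
# Cost-per-TB from the two purchases mentioned above
per_tb_2017 = 560 / (4 * 4)    # four 4TB drives  -> $35.00/TB
per_tb_now = 520 / (2 * 16)    # two 16TB drives  -> $16.25/TB
print(f"2017: ${per_tb_2017:.2f}/TB, now: ${per_tb_now:.2f}/TB, "
      f"~{per_tb_2017 / per_tb_now:.1f}x cheaper per TB")
```

Roughly $35/TB then versus $16.25/TB now; a bit better than 2x.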
Set some tangible objectives of something to complete, and focus solely on that. If you need help/reminders, I'm sure someone can provide that. More importantly, sort out the most trivial shit first: organize and clean your workspace, organize your files, get rid of crap you don't need, etc. It's usually all the small trivial stuff that acts as subconscious distractions. Then start working into the larger tasks.
I don't think people are being scared away by the UI; I believe it's an issue of discovery, in that potential frens (as in, those not on fedi yet) aren't aware of some of these lax smaller user instances. It also doesn't help when most people who haven't touched it are hearing about fedi through the disarray of Mastodon and its respective cancel culture, or through it being inaccurately advertised as a 'safe space'.
Either way, I'm sure more inter-instance hangouts/events (between like-minded instances) and such may get some people to stick around more.
It was just very poorly represented on the supported XEP table: https://www.process-one.net/en/ejabberd/protocols/ (no checkmark under "Community Server", while "Contribution module" looks like it's listed under Business only)
I guess I recant some of my interest in this announcement. I've been on a little expedition to find a good XMPP server that's capable of clustering, comparing the various offerings. Checking the XMPP Compliance Suite 2020, ejabberd Community Edition technically isn't even compliant with the 'Core Server' profile of the IM Compliance Suite, as HTTP File Upload (XEP-0363) was intentionally left out and is only available in the commercial offering; that's honestly a pretty scammy thing to do, since that extension has become essential to XMPP in recent years.
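For context on why that XEP matters so much: the extension itself is small. The XMPP server just negotiates an upload slot (a one-time PUT URL plus a GET URL to share), and the actual transfer is a plain HTTP PUT. A rough sketch of that second half, with a hypothetical slot URL:

```python
# Rough sketch of the HTTP half of XEP-0363: once the XMPP server has
# issued an upload slot (a one-time PUT URL), the client just PUTs the
# file there and shares the matching GET URL in chat.
# The slot URL below is a made-up example.
import urllib.request

put_url = "https://upload.example.org/slot/abc123/photo.jpg"  # hypothetical slot
with open("photo.jpg", "rb") as f:
    data = f.read()

req = urllib.request.Request(put_url, data=data, method="PUT")
req.add_header("Content-Type", "image/jpeg")
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # the XEP says the service SHOULD answer 201 Created
```

Without it, clients fall back to clunky in-band/peer-to-peer transfers that rarely work across mobile NAT, which is why gating it behind a paid tier feels so scammy.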
For probably several years now I've been using Prosody, which has been very stable and sufficient for a small userbase, but it's intended as more of a 'just for fun' project with a focus on very easy moddability, and is not architected for clustering (or horizontal scaling). I took a peek at what the commercial offerings for ejabberd are, and it pretty much starts at 300EUR/month for their SaaS option, which nukes the financial viability of a small community project just starting out.
Then I was peeking around for other options and saw the mention of MongooseIM in the Pleroma docs, which seemed peculiar given that I haven't seen any endorsement/usage of that software elsewhere. Every fediverse instance I've spotted that hosts XMPP adjacent to their Pleroma/MissKey instance seems to have opted for Prosody (likely for easy setup and a small userbase); I haven't found much else.
ejabberd seems to follow an "open core" mindset (open source base, presumably proprietary extras for the rest), while MongooseIM is apparently an ejabberd fork from over a decade ago (formerly esl-ejabberd) that appears to be a fully open source offering, rather than something that hides essentials away in a commercial option. I figure that whenever the ejabberd XMPP-Matrix source code drops, people will be able to take ideas from it and implement them in other projects (or just make a portable XMPP component service, for that matter), versus it being an ejabberd exclusive.
Either way, I'm still on a hunt to find anyone out there who actually uses (or used) MongooseIM, and I'll be looking into it further. I've generally lost all interest in ejabberd. Meanwhile, Prosody is still good software for instances under 1k users.
@r000t I also think that search is a good but dangerous feature. In this country, people get 10 years in prison for a single post about the war, and we should not give them the tools for repression. Alas, search that is intended for information lookup is de facto used by oppressive governments, tyrannies, and various crooks. So I closed off web access and block spiders and unknown crawlers on my server. It's weird, but it works in the current situation.
I used to be in the "okay dude, you're retarded" camp (in response to that dev), but seeing how apathetic the Python devs are about legacy support, such as dropping Windows releases for any version past the EOL of mainstream support, I can start to sympathize with their uphill effort.