Oh fun, I'm assuming a Raspberry Pi, or? I wonder if F2FS has gotten any reasonable use in the SBC world. Haven't had issues with ext3/4 so far in my life, but I can understand that it might not be a good option on degradable flash storage.
@arcanicanis Yup, it’s an rPi running OpenWRT. This isn’t the first EXT4 file system I’ve had spontaneously explode. At a previous job we had one which held several hundred gigs of document scans bite the dust completely out of the blue. We switched to XFS afterwards.
I’ll have to look into F2FS! I don’t really dabble in the embedded Linux world, so I wasn’t even aware of it haha
F2FS has been emerging in the embedded world for a while and has generally proven stable. I think a few Android handset vendors use (or have used) F2FS on the flash storage of some phones as well. Overall it was shaping up to be a well-adopted filesystem. Then, seemingly out of nowhere, Microsoft granted royalty-free use of exFAT under Linux, and I wonder if that pulled some attention away from F2FS.
Last thing in filesystem news to keep an eye on is bcachefs, which has many of ZFS's selling points and then some, no CDDL licensing issue, and may get mainlined in the next year or two.
Compared to ZFS, XFS itself doesn't offer data checksumming, only metadata checksumming; XFS vs ext4 is probably the fairer comparison. There is also Stratis, which builds on XFS to provide pooled storage and software RAID, with the goal of being a ZFS alternative: https://stratis-storage.github.io/
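I haven't set it up myself, but the basic Stratis workflow looks roughly like this (device names and mountpoint are just placeholders):

$ stratis pool create docpool /dev/sdb /dev/sdc
$ stratis filesystem create docpool docs
$ mount /dev/stratis/docpool/docs /mnt/docs

Snapshots work similarly: stratis filesystem snapshot docpool docs docs-snap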
@arcanicanis Interesting! I remember hearing about the exFAT patent thing a few years back. Never really had a use for it, personally.
For what it's worth, we investigated ZFS when rebuilding that doc server, but we just couldn't get the performance to a usable state. These were Ubuntu VMs within vCenter running on (I believe) a Cisco UCS chassis with some external SAN appliance. It just didn't work out; there was probably a bottleneck somewhere. On that same chassis we had absolute *garbage* performance with an Infinidat appliance, so I hesitate to place the blame squarely on ZFS.
It uses LVM for RAID, snapshots, SSD caching, and erasure coding.
However, when I tried it, LVM RAID was fucking slow with erasure coding turned on, and the SSD cache somehow made reads even slower. Even after the balancing finished, the I/O speeds were completely unusable; I couldn't access the data in any practical way, so I had to give up on that data, go back to ZFS, and restore from backup.
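For reference, the general shape of that setup was something like this (devices and sizes are placeholders, not my exact layout):

$ vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/nvme0n1
$ lvcreate --type raid5 -i 3 -L 4T -n data vg0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
$ lvcreate --type cache-pool -L 200G -n cpool vg0 /dev/nvme0n1
$ lvconvert --type cache --cachepool vg0/cpool vg0/data

The raid5/raid6 types are where the erasure coding (parity) comes in, and the lvconvert step is what attaches the SSD cache-pool to the RAID volume.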