@p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth @bonifartius F2FS is still probably much slower than ext4, especially when running something that does a lot of random I/O, like a DBMS. It's probably not worth using on SSDs anyway, since their complex controllers sitting in front of the NAND flash already fix most of the issues F2FS is designed around. Google has been using it as the default for both the ro and rw partitions on Android for 4 years. The mainline Linux version of it is probably less stable than that one due to a lower degree of testing.
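For a sense of what "DBMS-like random I/O" means here, a minimal sketch: synced 4 KiB writes at random offsets, run once on an f2fs mount and once on an ext4 mount. The path and sizes are placeholders, not anything from the benchmark being discussed.

```python
# Minimal sketch of DBMS-style I/O: small random writes, each followed by
# fsync (a database syncs on commit). Paths/sizes are placeholders.
import os, random, time

TEST_FILE = "/mnt/test/io_probe.bin"   # hypothetical mount to compare
FILE_SIZE = 256 * 1024 * 1024          # 256 MiB working set
BLOCK = 4096                           # 4 KiB, roughly a DB page
OPS = 2000

with open(TEST_FILE, "wb") as f:
    f.truncate(FILE_SIZE)              # pre-allocate so offsets are valid

fd = os.open(TEST_FILE, os.O_RDWR)
buf = os.urandom(BLOCK)
start = time.perf_counter()
for _ in range(OPS):
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    os.pwrite(fd, buf, offset)
    os.fsync(fd)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{OPS / elapsed:.0f} synced 4 KiB random writes/s")
```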
>btrfs, unsurprisingly, performed the worst by an order of magnitude
Probably needs some FS tuning. ZFS has the same issue with a DBMS: it does smart things (copy-on-write, its own caching) that the DBMS already does itself, and the overlap destroys performance.
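To make "FS tuning" concrete, these are the usual knobs people reach for; a sketch only, and the paths/dataset names are hypothetical.

```python
# Typical CoW-filesystem tuning for a database directory. Run as root;
# the paths and dataset names below are made up for illustration.
import subprocess

# btrfs: disable copy-on-write on the (still empty) DB directory so page
# rewrites don't fragment into new extents on every commit.
subprocess.run(["chattr", "+C", "/srv/db/data"], check=True)

# ZFS: match recordsize to the DB page size (8 KiB for PostgreSQL) and
# stop caching file data twice, since the DBMS has its own buffer pool.
subprocess.run(["zfs", "set", "recordsize=8K", "tank/db"], check=True)
subprocess.run(["zfs", "set", "primarycache=metadata", "tank/db"], check=True)
```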
> and actually exploded so there are no benchmark numbers for it on some of the SSDs.
Typical BTRFS experience. Thankfully it hasn't catastrophically blown up on me yet in the 4 years I've been using it.
>ext4 got more SSD-friendly
There are two sides to this. One is pushing more performance out of the SSDs with more optimized I/O and scheduling (NAND is actually slow at small I/O queue depths, and without a DRAM cache it can perform much worse than spinning rust). The second side is wear-leveling and better management of the raw flash. ext4 probably doesn't bother much with the latter, as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD card flash chips.
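Rough illustration of the queue-depth point, as a sketch; the test file path is a placeholder and the file has to exist already (and be much larger than RAM, or you end up benchmarking the page cache instead of the flash).

```python
# Same random 4 KiB reads, issued one at a time vs. 32 in flight.
# Threads genuinely overlap here because pread releases the GIL during
# the syscall. The file path is a placeholder.
import os, random, time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "/mnt/test/io_probe.bin"   # hypothetical pre-created file
BLOCK = 4096
OPS = 8192

fd = os.open(TEST_FILE, os.O_RDONLY)
blocks = os.path.getsize(TEST_FILE) // BLOCK

def read_one(_):
    # pread takes an explicit offset, so one fd can be shared across threads
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)

for depth in (1, 32):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=depth) as pool:
        list(pool.map(read_one, range(OPS)))
    iops = OPS / (time.perf_counter() - start)
    print(f"queue depth {depth:2d}: ~{iops:.0f} random 4 KiB reads/s")

os.close(fd)
```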