Conversation
Notices
iced depresso (icedquinn@blob.cat)'s status on Sunday, 13-Aug-2023 16:54:36 JST iced depresso well that's.. wild. 7zip does a better job archiving a PDF when it's been internally uncompressed than even zpaq. zstd is very close though.
i'm kind of surprised zpaq didn't win this one :blobcatez:
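(not the original commands — just one plausible way to reproduce a comparison like this; qpdf is an assumption for the "internally uncompressed" step, and the archiver flags are only reasonable defaults:)
$ qpdf --stream-data=uncompress input.pdf flat.pdf
$ 7z a -mx=9 flat.7z flat.pdf
$ zstd -19 flat.pdf -o flat.pdf.zst
$ zpaq add flat.zpaq flat.pdf -method 5
$ ls -l flat.7z flat.pdf.zst flat.zpaq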
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 16:54:32 JST mk i'm using lz4 within zfs for everything, because it's fast and reliable.
i had some weird problems with zstd..
zfs compression options:
lzjb
gzip
gzip-[1-9]
zle
lz4
zstd
zstd-[1-19]
zstd-fast
zstd-fast-[1-10,20,30,40,50,60,70,80,90,100,500,1000]
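(for reference, picking one of these is a single property per dataset — the dataset name here is just reused from later in the thread:)
$ zfs set compression=lz4 pool/datasets/home/ron
$ zfs get compression pool/datasets/home/ron
$ zfs get compressratio pool/datasets/home/ron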
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:01:12 JST mk oh shit.
directory holes == directories missing? == filesystem lost data?
---
did you use an up-to-date version of btrfs? i heard it had problems "in the past".
iced depresso (icedquinn@blob.cat)'s status on Sunday, 13-Aug-2023 17:01:14 JST iced depresso @mk i've been thinking about trying xfs again. my btrfs laptop has some.. directory holes. :neocat_sad:
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:12:29 JST mk in my desktop i've got 2 SSDs and 2 HDDs
one ssd (green) is the regular operating system with ext4.
one ssd (yellow and blue) is swap for the operating system and l2arc (read cache) for the zfs pool.
the two HDDs (red) are striped (no-parity raid0) and form the zfs pool.
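(a layout like that could be built roughly as follows — the device names are made up, not mk's actual ones:)
$ zpool create pool /dev/sdc /dev/sdd    # two HDDs, striped, no redundancy
$ zpool add pool cache /dev/sdb2         # SSD partition as l2arc read cache
$ zpool status pool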
iced depresso (icedquinn@blob.cat)'s status on Sunday, 13-Aug-2023 17:12:31 JST iced depresso @mk it's when a directory record is corrupt so it can't be iterated properly, which effectively causes the system to loop on itself trying to read it.
i've moved it off to /graveyard but it's been probably a full year since i got this machine, so resetting the distro might not be the worst concept
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:13:25 JST mk #desaster #recovery scenario:
desktop (production) #drive dies.
1. trigger synchronization of tank_backup to "mobiletank A" on the #truenas ("mobiletank B" is offline and at a different location).
2. meanwhile completely shut down and reinstall the production system.
3. pull the "mobiletank A" from truenas and push it into the freshly installed production system.
4. rename "mobiletank A" to "pool" (see the sketch below)
> some time after recovery <
5. get a new drive and make it mobiletank A.
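(the rename in step 4 can happen at import time — a rough sketch, assuming the pool is literally named mobiletankA, which it may not be:)
$ zpool export mobiletankA         # on the truenas box, before pulling the drive
$ zpool import mobiletankA pool    # on the fresh production system: import under the new name
$ zpool status pool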
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:24:43 JST mk i had really big performance problems with glusterfs in 2020...like 5MB/s synchronous reads and writes within a gigabit network.
---
$ fio --filename=/mnt/testfile --sync=1 --rw=readwrite --bs=1024k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test --size=1024M
Run status group 0 (all jobs):
READ: bw=5565KiB/s (5698kB/s)
WRITE: bw=6075KiB/s (6221kB/s)
iced depresso (icedquinn@blob.cat)'s status on Sunday, 13-Aug-2023 17:24:44 JST iced depresso @mk i thought about setting something up with a drive or two and glusterfs. drives have just been a bit spendy. i guess not so much nowadays.
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:27:20 JST mk i probably did something wrong, but i didn't care. i just set up a zfs pool and shared files via nfs.. much, much easier and much better performance.
zfs // nfs
---
server
$ apt install nfs-kernel-server
$ zfs set sharenfs="no_root_squash,sync,rw=@192.168.178.33/24" pool/datasets/home/ron
$ service nfs-kernel-server restart
---
client
$ apt install nfs-common
$ showmount -e 192.168.178.25
$ mount -t nfs 192.168.178.25:/pool/datasets/home/ron /pool/datasets/home/ron
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:33:02 JST mk if you want to play around with zfs, you should start with installing zfsutils-linux, create a pool with two fake hard drives (just two files) and try to break and recreate zfs ;-)
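(in file-backed form that playground is just a few commands — file paths and pool name made up; mirror chosen so you can break one "drive" and resilver:)
$ truncate -s 1G /tmp/zdisk1 /tmp/zdisk2
$ zpool create testpool mirror /tmp/zdisk1 /tmp/zdisk2
$ zpool status testpool
$ zpool destroy testpool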
mk (mk@mastodon.satoshishop.de)'s status on Sunday, 13-Aug-2023 17:35:22 JST mk "easier to do[..]at massive scale."
fuck "at massive scale" every tree has to start with a seed.
it should be easy to learn and hard to master..like zfs.
if the installation and setup doesn't fit into a toot, it's too complicated.
iced depresso (icedquinn@blob.cat)'s status on Sunday, 13-Aug-2023 17:35:23 JST iced depresso @mk gluster does somewhat require a tuned filesystem if i recall. it seems the meta is to graduate to ceph once that stops working.
ceph has much more of a penance to get going but one of the reasons it insists on owning the block device is because not having to fight a filesystem makes it easier to do the things they need it to do at massive scale.