HDD
ZFS Send & RaidZ - Poor performance on HDD #14916
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
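The echo above only lasts until reboot. A sketch of making the tunable persistent, assuming the zfs module is loaded via modprobe (the file name under /etc/modprobe.d is arbitrary):

```shell
# /etc/modprobe.d/zfs.conf — applied when the zfs module is loaded
# Lower the async write queue depth per vdev to reduce HDD seek thrash
options zfs zfs_vdev_async_write_max_active=2
```

On Proxmox/Debian, running update-initramfs -u afterwards ensures the option is also picked up when zfs loads from the initramfs.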
Use a huge record size - it can help on SMR drives. Note: this only makes sense for ZFS file systems. It cannot be applied to ZVOLs.
zfs set recordsize=16M hddpool/data
zfs set recordsize=16M hddpool/vz
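To confirm the setting took effect (note that recordsize only applies to newly written data; existing files keep their old record size until rewritten):

```shell
# Show the effective recordsize and where it was set (local vs inherited)
zfs get recordsize hddpool/data hddpool/vz
```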
For ZVOLs, the analogous property is volblocksize.
Change ZFS volblock on a running Proxmox VM
Note: When no striping is used (a simple mirror), volblocksize should be 4k (or at least the same as ashift).
Note: In the latest Proxmox the default volblocksize was increased from 8k to 16k. When 8k is used, a warning is shown:
Warning: volblocksize (8192) is less than the default minimum block size (16384). To reduce wasted space a volblocksize of 16384 is recommended.
zfs create -s -V 40G -o volblocksize=16k hddpool/data/vm-156-disk-0-16k
dd if=/dev/zvol/hddpool/data/vm-156-disk-0 of=/dev/zvol/hddpool/data/vm-156-disk-0-16k bs=1M status=progress conv=sparse
zfs rename hddpool/data/vm-156-disk-0 hddpool/data/vm-156-disk-0-backup
zfs rename hddpool/data/vm-156-disk-0-16k hddpool/data/vm-156-disk-0
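Before the renames, it may be worth verifying the copy. A sketch, assuming the VM is stopped so the source zvol is quiescent (the sparse regions skipped by dd read back as zeros, so the checksums should match):

```shell
# Checksum both block devices; the two sums must be identical
# before swapping the zvol names
md5sum /dev/zvol/hddpool/data/vm-156-disk-0 \
       /dev/zvol/hddpool/data/vm-156-disk-0-16k
```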
Use of the bfq I/O scheduler is mandatory. See my findings.
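A sketch of switching a drive to bfq at runtime (sdX is a placeholder for the actual device; a persistent setup would need a udev rule or kernel command-line option):

```shell
# Show the available schedulers; the active one is in [brackets]
cat /sys/block/sdX/queue/scheduler
# Select bfq for this drive
echo bfq > /sys/block/sdX/queue/scheduler
```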
SMR
- SMR drives behave correctly for sequential writes, but a long-running ZFS or LVM thin spreads writes across many random locations, causing unusable IOPS. So never use SMR.
- If SMR must be used, try setting spa_num_allocators=1 (default is 4). See: spa_num_allocators
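A sketch of applying this tunable the same way as the other module parameters above (note: the new value may only take effect for pools imported after the change, so a pool export/import or reboot may be needed):

```shell
# Check the current value, then reduce the allocator count to 1
cat /sys/module/zfs/parameters/spa_num_allocators
echo 1 > /sys/module/zfs/parameters/spa_num_allocators
```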