====== HDD ======
[[https://github.com/openzfs/zfs/discussions/14916|ZFS Send & RaidZ - Poor performance on HDD #14916]]
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
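The sysfs write above lasts only until reboot. To persist it, the parameter can go into a modprobe options file (the file name below is an assumption; any ''.conf'' under ''/etc/modprobe.d/'' works):

```shell
# Persist the ZFS module parameter across reboots.
# File name is arbitrary (hypothetical); the module reads any "options zfs ..." line.
echo "options zfs zfs_vdev_async_write_max_active=2" \
    > /etc/modprobe.d/zfs-hdd-tuning.conf
# On Debian/Proxmox, rebuild the initramfs so the option applies at early boot:
update-initramfs -u
```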
Use a huge recordsize - it can help on SMR drives. Note: this only makes sense for ZFS filesystems; it cannot be applied to ZVOLs.
zfs set recordsize=16M hddpool/data
zfs set recordsize=16M hddpool/vz
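Note that recordsize only affects blocks written after the change; existing files keep their old record size until rewritten. A quick check of the setting (a sketch, dataset names as above):

```shell
# Show the effective recordsize for the datasets tuned above.
zfs get -o name,value recordsize hddpool/data hddpool/vz
# 16M records need the large_blocks pool feature; on older OpenZFS
# releases the zfs_max_recordsize module parameter may also need raising
# (its default used to be 1M).
cat /sys/module/zfs/parameters/zfs_max_recordsize
```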
For ZVOLs: [[https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#zvol-volblocksize|zvol volblocksize]]
[[https://blog.guillaumematheron.fr/2023/500/change-zfs-volblock-on-a-running-proxmox-vm/|Change ZFS volblock on a running Proxmox VM]]
**Note:** When no striping is used (simple mirror) the volblocksize should be 4k (or at least equal to the ashift).
**Note:** In recent Proxmox releases the default volblocksize was increased from 8k to 16k. When 8k is used, this warning is shown:
Warning: volblocksize (8192) is less than the default minimum block size (16384).
To reduce wasted space a volblocksize of 16384 is recommended.
zfs create -s -V 40G -o volblocksize=16k hddpool/data/vm-156-disk-0-16k
dd if=/dev/zvol/hddpool/data/vm-156-disk-0 of=/dev/zvol/hddpool/data/vm-156-disk-0-16k bs=1M status=progress conv=sparse
zfs rename hddpool/data/vm-156-disk-0 hddpool/data/vm-156-disk-0-backup
zfs rename hddpool/data/vm-156-disk-0-16k hddpool/data/vm-156-disk-0
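Before switching the VM over to the copied zvol, it is worth comparing the two devices byte for byte (a sketch; device paths as above, and the VM must stay stopped for the whole copy and check):

```shell
# cmp exits non-zero on the first difference; dd's conv=sparse skipped
# zero blocks, but the target zvol reads those back as zeros anyway.
cmp /dev/zvol/hddpool/data/vm-156-disk-0 \
    /dev/zvol/hddpool/data/vm-156-disk-0-16k \
    && echo "zvols are identical"
```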
Use of ''bfq'' is mandatory. See [[#my_findings|my findings]].
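Switching a rotational disk to ''bfq'' can be done at runtime via sysfs, and made persistent with a udev rule (a sketch; the rule file name is an assumption, and ''sdX'' must be replaced with the actual HDD device):

```shell
# Runtime: set bfq for one disk (replace sdX with the HDD's device name).
echo bfq > /sys/block/sdX/queue/scheduler

# Persistent: udev rule applying bfq to all rotational sd* disks at boot.
cat > /etc/udev/rules.d/60-ioschedulers.rules <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
EOF
```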
===== SMR =====
* SMR drives behave acceptably for sequential writes, but a long-running ZFS or LVM-thin pool spreads writes across many random locations, causing unusable IOPS. So never use SMR drives.
* If SMR must be used, try setting ''spa_num_allocators=1'' (default is 4): [[https://openzfs.github.io/openzfs-docs/man/v2.3/4/zfs.4.html#spa_num_allocators|spa_num_allocators]]
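Like the async-write parameter above, ''spa_num_allocators'' can be persisted with a modprobe options line (a sketch; the file name is an assumption, and the value takes effect when the pool is imported):

```shell
# Persist spa_num_allocators=1 for SMR pools (hypothetical file name).
echo "options zfs spa_num_allocators=1" > /etc/modprobe.d/zfs-smr.conf
update-initramfs -u
```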