linux:fs:zfs:tuning, revisions 2026/02/11 10:07 and 2026/03/20 07:51 (current), by niziak
  * [[https://
===== I/O scheduler =====

If the whole device (not just a partition) is managed by ZFS, ZFS sets the scheduler to ''none''.

==== official recommendation ====

For rotational devices there is no point in using advanced schedulers, because ZFS implements its own I/O scheduling on top of the block layer.

The only scheduler worth considering is a simple one (''none'' or ''mq-deadline'').

There is a discussion in the OpenZFS project about not touching schedulers anymore and leaving the choice to the administrator:

  * [[https://
  * [[https://

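As a quick sketch (the device name ''sda'' is an example, not taken from this page), the active scheduler can be checked and overridden through sysfs:

<code bash>
# the scheduler shown in [brackets] is the active one
cat /sys/block/sda/queue/scheduler

# override it (as root); ZFS-managed whole disks normally end up with "none"
echo none > /sys/block/sda/queue/scheduler
</code>

Note that this setting is not persistent across reboots; a udev rule is the usual way to make it stick.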
==== my findings ====

There is a huge benefit to using a scheduler that honours per-process I/O priorities:

  * kernel ZFS threads run at one prio,
  * kvm processes have a normal prio,
  * kvm processes during vzdump have a different prio.

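A sketch of how such priorities are inspected and assigned with ''ionice'' from util-linux (the paths are illustrative, not from this page):

<code bash>
# show the I/O scheduling class/priority of the current shell
ionice -p $$

# run a heavy job in the idle class so it only uses otherwise-idle disk time
ionice -c 3 du -sh /tmp
</code>

Only priority-aware schedulers act on these classes; with ''none'' the values are recorded but have no effect.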
===== HDD =====

[[https://

<code bash>
cat /
echo 2 > /
cat /
</code>

Use a huge record size; it can help on SMR drives. Note: this only makes sense for ZFS filesystems; it cannot be applied to a ZVOL.

<code bash>
zfs set recordsize=1M hddpool/
zfs set recordsize=1M hddpool/vz
</code>

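To confirm the setting (dataset name ''hddpool/vz'' from above; keep in mind that only newly written files use the new record size, existing blocks keep theirs):

<code bash>
zfs get recordsize hddpool/vz
</code>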
NOTE: SMR drives behave correctly for sequential writes, but a long-running ZFS (or LVM thin) spreads writes across many random locations, causing unusably low IOPS. So never use SMR drives with ZFS.

For ZVOLs: [[https://

[[https://

**Note:** creating a ZVOL with a ''volblocksize'' smaller than the default minimum prints a warning:

<code>
Warning: volblocksize (8192) is less than the default minimum block size (16384).
To reduce wasted space a volblocksize of 16384 is recommended.
</code>

<code bash>
zfs create -s -V 40G hddpool/
dd if=/
zfs rename hddpool/
zfs rename hddpool/
</code>

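The truncated commands above appear to recreate the ZVOL with a new ''volblocksize'' and copy the data across. A hypothetical complete sequence (the name ''vm-100-disk-0'' and the 16k block size are examples, not from this page) might look like:

<code bash>
# create a new sparse ZVOL with a larger volblocksize (names are examples)
zfs create -s -V 40G -o volblocksize=16k hddpool/vm-100-disk-0-new

# copy the data block-by-block
dd if=/dev/zvol/hddpool/vm-100-disk-0 of=/dev/zvol/hddpool/vm-100-disk-0-new bs=1M status=progress

# swap the volumes by renaming
zfs rename hddpool/vm-100-disk-0 hddpool/vm-100-disk-0-old
zfs rename hddpool/vm-100-disk-0-new hddpool/vm-100-disk-0
</code>

''volblocksize'' cannot be changed on an existing ZVOL, which is why the volume has to be recreated and the data copied.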
Use ''
===== Postgresql =====
# apt install zfsutils-linux
# zarcstat
time read miss miss% dmis dm% pmis pm% mmis mm% size
16:
</code>
<code bash>
# zarcsummary -s arc
ARC size (current):
Dnode cache size (hard limit):
Dnode cache size (current):
</code>
  * ''
  * ''
  * ''
Proxmox recommends the following [[https://
<code bash>
echo "$[4 * 1024*1024*1024]"
echo "
</code>
options zfs zfs_arc_min=134217728
options zfs zfs_arc_meta_limit_percent=75
</code>
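The ''options zfs ...'' lines only apply when the module is loaded. A sketch for systems that load ZFS from the initramfs (e.g. Debian/Proxmox with a ZFS root, an assumption here), so the values take effect at early boot:

<code bash>
# rebuild the initramfs so the new module options are included
update-initramfs -u -k all

# after a reboot, verify the running value
cat /sys/module/zfs/parameters/zfs_arc_min
</code>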