See more in [[linux:fs:zfs:compression]]
  
===== stripe size =====

ZFS uses a dynamic stripe size: one stripe is one write transaction, limited by ''recordsize''. The dataset ''recordsize'' therefore needs tuning for the given type of workload.

For example, on a pool composed of 3 × 2-disk HDD mirror vdevs (''--bs'' and ''--numjobs'' were varied per test below):

<code bash>fio --name=rand-4k --ioengine=libaio --rw=randrw --rwmixread=70 --bs=1m --direct=1 --size=1G --numjobs=6 --iodepth=16 --runtime=60 --time_based --filename=fio_testfile --group_reporting</code>

  * zfs dataset with recordsize 128k:
    * BS=4k jobs=1 IOPS RW 214/91
    * BS=4k jobs=6 IOPS RW 2107/909
    * BS=16k jobs=1 IOPS RW 137/59
    * BS=16k jobs=6 IOPS RW 1277/549
    * BS=128k jobs=1 IOPS RW 190/82
    * BS=128k jobs=6 IOPS RW 549/239
    * BS=1m jobs=1 IOPS RW 48/21
    * BS=1m jobs=6 IOPS RW 164/71
    * BS=16m jobs=1 IOPS RW 9/4
    * BS=16m jobs=6 IOPS RW 17/7
  * zfs dataset with recordsize 1M:
    * BS=4k jobs=6 IOPS RW 21.7k/9k - aggregated
    * BS=128k jobs=6 IOPS RW 1125/484
    * BS=1m jobs=6 IOPS RW 232/101
    * BS=16m jobs=6 IOPS RW
  * zfs dataset with recordsize 16M:
    * BS=4k jobs=1 IOPS RW 38/16
    * BS=4k jobs=6 IOPS RW 156k/67k
    * BS=16k jobs=1 IOPS RW 31/14
    * BS=16k jobs=6 IOPS RW 122k/52k
    * BS=128k jobs=1 IOPS RW 20/9
    * BS=128k jobs=6 IOPS RW 17.7k/7607 - small IOs are aggregated into 16M records
    * BS=1m jobs=1 IOPS RW 30/13
    * BS=1m jobs=6 IOPS RW 2586/1117
    * BS=16m jobs=1 IOPS RW 5/2
    * BS=16m jobs=6 IOPS RW 20/8

For example, on a pool composed of a 6-disk HDD raidz2:
  * zfs dataset with recordsize 16M:
    * BS=128k jobs=6 IOPS RW 16.4k/7026
    * BS=1m jobs=6 IOPS RW 2472/1068
    * BS=16m jobs=6 IOPS RW 27/11
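Based on benchmarks like the above, ''recordsize'' can be tuned per dataset. A minimal sketch (the pool/dataset names ''tank/media'' and ''tank/db'' are hypothetical examples, not from this setup):

```shell
# Hypothetical pool/dataset names -- adjust to your layout.
# recordsize must be a power of two, up to the module's
# zfs_max_recordsize limit (16M by default on recent OpenZFS).
zfs set recordsize=1M tank/media   # large files, mostly sequential I/O
zfs set recordsize=16k tank/db     # match the database page size
zfs get -r recordsize tank         # verify the effective values
```

Note that ''recordsize'' only applies to newly written files; existing files keep their old record size until rewritten or copied.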
  
===== zil limit =====
  
<code bash>
# arc_summary -s arc
  
ARC size (current):                                    98.9 %   15.5 GiB
...
</code>
  * ''zfs_arc_max'': Maximum size of the ARC in bytes. If set to 0, the maximum size of the ARC is determined by the amount of installed system memory (50% on Linux).
  * ''zfs_arc_min'': Minimum ARC size limit. When the ARC is asked to shrink, it stops shrinking at ''c_min'', as tuned by ''zfs_arc_min''.
  * ''zfs_arc_meta_balance'': Balance between metadata and data on ghost hits. Values above 100 increase metadata caching by proportionally reducing the effect of ghost data hits on the target data/metadata rate. See [[https://openzfs.github.io/openzfs-docs/man/master/4/zfs.4.html#zfs_arc_meta_balance]].
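These tunables can be changed at runtime via ''/sys/module/zfs/parameters'' and persisted as module options. A sketch, assuming an example limit of 8 GiB (pick a value suited to your workload):

```shell
# Example value only: limit ARC to 8 GiB (8 * 2^30 = 8589934592 bytes)
ARC_MAX=$((8 * 1024 * 1024 * 1024))

# Apply at runtime (needs root, takes effect immediately)
echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max

# Persist across reboots via module options
echo "options zfs zfs_arc_max=$ARC_MAX" > /etc/modprobe.d/zfs.conf
```

On Debian-based systems, run ''update-initramfs -u'' after editing ''/etc/modprobe.d/zfs.conf'' so the option is also applied when the module loads at early boot.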

Proxmox recommends the following [[https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage|rule]]:
<code>