====== Proxmox ZFS ======

Since fall 2015 the default compression algorithm in ZOL is LZ4, and since ''compression=on'' activates compression with the default algorithm, your pools are already using LZ4 -> [[http://open-zfs.org/wiki/Performance_tuning#Compression]]
<code bash>
# Check if LZ4 is active
zpool get feature@lz4_compress rpool
</code>
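
To see which algorithm a dataset actually uses (not just whether the pool feature is enabled), the ''compression'' and ''compressratio'' properties can be queried. A quick check, assuming the default ''rpool'' / ''rpool/data'' layout:

<code bash>
# Effective compression algorithm and where it was set
zfs get compression rpool rpool/data

# Compression ratio achieved so far
zfs get compressratio rpool
</code>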

===== RAM requirements =====
ZFS needs a base of about 4 GB of RAM, plus roughly 1 GB for each TB of used disk space.
This is without dedup or L2ARC.
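If the host is short on memory, the ARC size can be capped with the ''zfs_arc_max'' module parameter. A minimal sketch for Proxmox/Debian; the 8 GiB value is only an example to adapt:

<code bash>
# Cap the ARC at 8 GiB (value in bytes) - example value, tune for your host
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u

# Check current ARC size and limit at runtime
grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats
</code>
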
===== Glossary =====
  * ZPool is the logical unit of the underlying disks that ZFS uses.
  * ZVol is an emulated block device provided by ZFS.
  * ZIL is the ZFS Intent Log, a small block device ZFS uses to speed up writes.
  * SLOG is a Separate Intent Log, i.e. the ZIL placed on a dedicated device.
  * ARC is the Adaptive Replacement Cache, located in RAM; it's the level 1 cache.
  * L2ARC is the Layer 2 Adaptive Replacement Cache and should be on a fast device (like an SSD).
  
===== Resources =====
  * [[https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Install_on_a_high_performance_system|Install on a high performance system]]
  
===== Tuning =====

<code bash>
# Disable access-time updates completely
zfs set atime=off rpool/data

# or: keep atime, but only update it when mtime/ctime change or once a day
zfs set atime=on rpool/data
zfs set relatime=on rpool/data
</code>

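To confirm what is actually in effect on a dataset (a quick check, using the ''rpool/data'' dataset from above):

<code bash>
# Show the current atime/relatime settings and where they come from
zfs get atime,relatime rpool/data
</code>
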
===== Adding SSD cache for HDDs =====
  * SLOG can speed up synchronous writes only
    * The ZIL's purpose is to protect you from data loss. It is necessary because the actual ZFS write cache, which is not the ZIL, is held in system RAM, and RAM is volatile.
    * In a default ZFS setup, asynchronous writes are not handled by the ZIL.
    * The ZIL doesn't need to be very big. Take the transfer speed of the fastest disk in your array and multiply it by 10 seconds; this is roughly how big your ZIL should be (see the worked example after this list).
      * For HDDs, 2 GB of SLOG on SSD is enough. I noticed a maximum usage of 1.5 GB.
    * [[http://nex7.blogspot.com/2013/04/zfs-intent-log.html]]
    * [[https://www.cyberciti.biz/faq/how-to-add-zil-write-and-l2arc-read-cache-ssd-devices-in-freenas/]]
    * [[https://jrs-s.net/2019/05/02/zfs-sync-async-zil-slog/]]
  * L2ARC is a read cache: L1 in memory, L2 on disk.
    * The L2ARC cache requires RAM for its metadata.

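A worked example of the sizing rule above; the 250 MB/s figure is just an assumed disk speed, not a measurement:

<code bash>
# fastest disk ~250 MB/s, ZIL holds roughly 10 s of writes:
# 250 MB/s * 10 s = 2500 MB, so a 2-4 GB SLOG partition is plenty
echo "$((250 * 10)) MB"
</code>
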
<code bash>
# Identify the prepared SSD partitions by PARTUUID
blkid
/dev/nvme0n1p4: PARTLABEL="ZIL" PARTUUID="d6da74cd-32e7-4286-8e78-ace66ab659b2"
/dev/nvme0n1p5: PARTLABEL="L2ARC" PARTUUID="60c563fc-91f8-4ec4-afc0-b7794c63f31c"

# Add them to the pool as L2ARC (cache) and SLOG (log) devices
zpool add rpool cache 60c563fc-91f8-4ec4-afc0-b7794c63f31c
zpool add rpool log d6da74cd-32e7-4286-8e78-ace66ab659b2
zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:11:23 with 0 errors on Sun May 10 00:35:24 2020
config:

 NAME                                    STATE     READ WRITE CKSUM
 rpool                                   ONLINE       0     0     0
   sda                                   ONLINE       0     0     0
 logs
   d6da74cd-32e7-4286-8e78-ace66ab659b2  ONLINE       0     0     0
 cache
   60c563fc-91f8-4ec4-afc0-b7794c63f31c  ONLINE       0     0     0

# Watch per-device I/O to confirm the new devices are actually used
zpool iostat -v 1
</code>

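If the SSD ever has to be taken out again, both vdev types can be detached without harming the pool; a sketch reusing the PARTUUIDs from above:

<code bash>
# Remove the SLOG and L2ARC devices again (the pool keeps working without them)
zpool remove rpool d6da74cd-32e7-4286-8e78-ace66ab659b2
zpool remove rpool 60c563fc-91f8-4ec4-afc0-b7794c63f31c
</code>
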
===== remove storage pool =====
<code bash>
zfs destroy rpool/data
</code>
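
''zfs destroy'' is irreversible, so it can be worth previewing first; ''-n'' is a dry run, ''-v'' verbose and ''-r'' recurses into child datasets and snapshots:

<code bash>
# Dry run: show what would be destroyed, without destroying anything
zfs destroy -nrv rpool/data
</code>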
  
===== create ''local-zfs'' =====
  
<code bash>
# ashift=13 = 8 KiB sectors (use ashift=12 for 4 KiB sector drives)
zpool create -f -o ashift=13 rpool /dev/sdb
zfs set compression=lz4 rpool
zfs create rpool/data
zpool status -v
zfs list
</code>

Datacenter --> Storage --> ''local-zfs''
  * Disable node restriction (so the storage is available on all nodes)

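The same storage can also be registered from the CLI instead of the GUI; a sketch using ''pvesm'' (option values are examples):

<code bash>
# Register rpool/data as a Proxmox "zfspool" storage named local-zfs
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --sparse 1
pvesm status
</code>
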
===== rename zfs pool =====
<code bash>
# Take a checkpoint as a safety net, then export and re-import under the new name
zpool checkpoint pve3-nvm
zpool export pve3-nvm
zpool import pve3-nvm nvmpool
</code>
    * Rename the storage pool and adjust paths in the Proxmox storage configuration.
    * Verify everything still works, then discard the checkpoint:
<code bash>
# Note: after the import the pool is called nvmpool
zpool checkpoint --discard nvmpool
</code>
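
The verification step is not spelled out above; a minimal check could look like this (assuming the new ''nvmpool'' name):

<code bash>
# Pool imported, healthy, and datasets visible under the new name?
zpool status nvmpool
zfs list -r nvmpool
</code>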

===== clean old replication snapshots =====

<code bash>
# Destroy all leftover snapshots created by Proxmox storage replication (@__replicate...)
zfs list -t all | grep @__replicate | cut -f 1 -d ' ' | while read N; do zfs destroy ${N}; done
</code>
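
Before running it for real, a dry run shows what would be removed; a sketch using a cleaner snapshot listing and ''zfs destroy -nv'':

<code bash>
# Preview only: -n = dry run, -v = print what would be destroyed
zfs list -H -t snapshot -o name | grep @__replicate | while read N; do zfs destroy -nv "${N}"; done
</code>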

===== trim free space =====

<code bash>
# Trim with a rate limit of 50M/s
zpool trim -r 50M nvmpool

# And monitor progress:
zpool status -t nvmpool
</code>
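
Instead of manual trims, the pool can also trim freed space continuously via the standard ''autotrim'' pool property; shown here as a sketch:

<code bash>
# Enable background trimming for the pool
zpool set autotrim=on nvmpool
zpool get autotrim nvmpool
</code>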