====== ZFS ======
Since fall 2015 the default compression algorithm in ZOL is LZ4 and since choosing ''
===== Glossary =====
  * ZPool is the logical unit built from the underlying disks; it is what ZFS uses as its storage pool.
  * ZVol is an emulated block device provided by ZFS.
  * ZIL is the ZFS Intent Log, a small block device ZFS uses to speed up synchronous writes.
    * SLOG is a Separate Log device that holds the ZIL.
  * ARC is the Adaptive Replacement Cache; it lives in RAM and is the level 1 cache.
  * L2ARC is the Layer 2 Adaptive Replacement Cache and should be on a fast device (like an SSD).
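As a concrete illustration of the SLOG and L2ARC terms above, this is roughly how such devices are attached to an existing pool (a hedged sketch: the pool name ''tank'' and the device paths are placeholders, not from this page):

<code bash>
# Attach a separate log device (SLOG) to hold the ZIL:
zpool add tank log /dev/disk/by-id/nvme-fast-ssd-part1

# Attach an L2ARC cache device:
zpool add tank cache /dev/disk/by-id/nvme-fast-ssd-part2

# Inspect the resulting layout:
zpool status tank
</code>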
===== Resources =====
  * A SLOG can speed up synchronous writes only.
  * The ZIL's purpose is to protect you from data loss. It is necessary because the actual ZFS write cache, which is not the ZIL, is held in system RAM, and RAM is volatile.
    * In the default setup of ZFS, asynchronous
    * The ZIL doesn'
      * For HDDs, 2GB of SLOG on an SSD is enough; I noticed a maximum usage of 1.5GB.
    * [[https://
  * L2ARC is a read cache: L1 lives in memory, L2 on disk.
  * The L2ARC cache requires RAM for its own metadata.
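To see how much RAM the ARC and the L2ARC headers are actually using, the kernel's statistics can be inspected (a sketch assuming a Linux host with OpenZFS; ''arc_summary'' ships with the ZFS utilities):

<code bash>
# Human-readable summary of ARC / L2ARC usage:
arc_summary

# Or pull the raw counters, including the L2ARC header size held in RAM:
awk '$1 ~ /^(size|l2_size|l2_hdr_size)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
</code>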
<code bash>
zfs destroy rpool/data
</code>
| + | |||
===== create ''rpool/data'' =====
For nodes without ''
<code bash>
zpool create -f -o ashift=13 rpool /dev/sdb
zfs set compression=lz4 rpool
zfs create rpool/data
</code>
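A quick sanity check after creating the pool (not from the original page) is to read back the properties that were just set:

<code bash>
zpool get ashift rpool       # should report 13
zfs get compression rpool    # should report lz4
zfs list -r rpool
</code>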
Datacenter --> Storage --> ''
  * Disable node restriction
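The same Datacenter --> Storage step can also be done from the shell with Proxmox's ''pvesm'' tool; a hedged sketch (the storage id ''local-zfs'' is an assumption, not from this page):

<code bash>
# Register rpool/data as a ZFS storage backend for disk images and containers:
pvesm add zfspool local-zfs -pool rpool/data -content images,rootdir

# List configured storages to confirm:
pvesm status
</code>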
| + | |||
| + | |||
===== rename zfs pool =====
<code bash>
zpool checkpoint pve3-nvm
zpool export pve3-nvm
zpool import pve3-nvm nvmpool
</code>
  * rename the storage pool and paths in the Proxmox storage configuration
  * verify that everything still works, then discard the checkpoint:
<code bash>
# the pool is now named nvmpool, so discard the checkpoint under its new name
zpool checkpoint --discard nvmpool
</code>
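The ''rename storage pool and paths'' step happens in Proxmox's storage definition; a sketch of what the relevant ''/etc/pve/storage.cfg'' entry might look like after the rename (the storage id ''nvm'' and the option lines are assumptions, not from this page):

<code>
zfspool: nvm
        pool nvmpool
        content images,rootdir
        sparse 1
</code>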
| + | |||
| + | ===== clean old replication snapshots ===== | ||
| + | |||
<code bash>
zfs list -t all | grep @__replicate | cut -f 1 -d ' ' | while read N; do zfs destroy "${N}"; done
</code>
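Because ''zfs destroy'' is irreversible, it can be worth previewing the loop first; ''zfs destroy -n -v'' performs a dry run that only prints what would be removed:

<code bash>
# Dry run: show which replication snapshots would be destroyed, without destroying them
zfs list -t all | grep @__replicate | cut -f 1 -d ' ' | while read N; do zfs destroy -n -v "${N}"; done
</code>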
| + | |||
===== trim free space =====

<code bash>
# Trim with a rate limit of 50M/s
zpool trim -r 50M nvmpool

# And monitor progress:
zpool status -t nvmpool
</code>
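As an alternative to manual trims, OpenZFS can trim freed space continuously via the standard ''autotrim'' pool property:

<code bash>
# Enable continuous low-priority trimming of freed space:
zpool set autotrim=on nvmpool

# Confirm the setting:
zpool get autotrim nvmpool
</code>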