  * For HDDs use striped mirrors to multiply IOPS (e.g. for 6 disks: 3 striped mirror vdevs of 2 HDDs each) - see the pool layout sketch after this list
  * GC on HDDs can take days without metadata caching on SSD
  * Use SSD as cache (see the L2ARC sketch after this list):
    * MFU only, or
    * configure it to cache metadata only
  * Use dRAID - where groups of stripes are used by design (see the dRAID sketch after this list).
  * Use at least a separate ZFS dataset for the datastore, then raise its record size: <code bash>zfs set recordsize=16M YourPoolName/DatasetUsedAsDatastore</code>
  * Disable atime (PBS does not rely on this mechanism): <code bash>zfs set atime=off backup2</code>
  * Compression is performed on the client side, so enabling only light compression on the server is enough: <code bash>zfs set compression=lz4 YourPoolName/DatasetUsedAsDatastore</code> (these dataset properties can also be set in one step at creation time - see the example after this list)
  * For small RAM, prefer to cache only metadata in ARC: <code bash>zfs set primarycache=metadata backup</code>
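
A minimal sketch of the striped-mirror layout from the first item, assuming six hypothetical disks ''/dev/sda''..''/dev/sdf'' and a pool named ''backup'' (for real pools prefer stable ''/dev/disk/by-id/'' paths):

<code bash>
# 3 striped mirror vdevs of 2 HDDs each: IOPS scale with the number of
# vdevs (3x here), and each mirror survives the loss of one disk.
zpool create backup \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf
</code>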
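
A sketch of the two SSD cache options, assuming the same pool name and a hypothetical SSD ''/dev/sdg'':

<code bash>
# Add the SSD as an L2ARC cache device:
zpool add backup cache /dev/sdg

# Option A - keep only MFU (most frequently used) buffers in L2ARC;
# this is a module-wide OpenZFS parameter, not a per-pool property:
echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly

# Option B - cache only metadata (no data blocks) in L2ARC:
zfs set secondarycache=metadata backup
</code>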
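
A dRAID sketch, assuming eleven hypothetical disks: ''draid2:4d:11c:1s'' means double parity, 4 data disks per redundancy group, 11 children in total, and 1 distributed spare:

<code bash>
zpool create backup draid2:4d:11c:1s \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk
</code>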
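
The dataset-level settings above can also be applied in one step when the dataset is created - a sketch reusing the example names from the list (note that ''recordsize'' values above 1M may require a recent OpenZFS release or raising the ''zfs_max_recordsize'' module parameter):

<code bash>
zfs create \
  -o recordsize=16M \
  -o atime=off \
  -o compression=lz4 \
  YourPoolName/DatasetUsedAsDatastore
</code>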
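
To check whether the ARC/L2ARC settings behave as expected, the standard OpenZFS reporting tools can be used:

<code bash>
# Summary of ARC and L2ARC sizes and hit rates:
arc_summary

# Live ARC statistics, refreshed every second:
arcstat 1
</code>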