vm:proxmox:pbs:performance, last edited 2026/03/20 by niziak (previous revision 2023/04/25)
====== performance ======

===== FS tuning =====

PBS stores everything in files. Maximum file size is 4MB.
  * Block device backup is in 4MB chunks
  * File backup chunk size varies from 64kB to 4MB

Filesystems:
  * ZFS
    * [[https://
    * 128kB recordsize can cause write amplification for chunks smaller than 128kB
    * larger recordsize = fewer IOPS
    * Create a histogram from existing PBS chunk sizes.

| < | < | ||
| Line 7: | Line 21: | ||
| read/ | read/ | ||
| - | In general HDDs shouldn' | + | In general HDDs shouldn' |
| still do it, it's highly recommended to also use SSDs for storing the metadata. | still do it, it's highly recommended to also use SSDs for storing the metadata. | ||
| </ | </ | ||
| Line 17: | Line 31: | ||
So:
  * RAIDZ on HDDs is very slow (IOPS are limited to those of a single HDD). Throughput is n x HDD, but IOPS equals a single HDD
  * For HDDs use striped mirrors to multiply IOPS (i.e. for 6 disks: 3 stripes of 2 x HDD in a mirror)
  * GC on HDD can take days without metadata caching on SSD
  * Use SSD as cache:
    * MFU only, or
    * configure it to cache metadata only
  * Use DRAID, where groups of stripes are used by design.
  * Use at least a separate ZFS dataset for backup, then set the block size: <code bash>zfs set recordsize=16M YourPoolName/</code>
  * Disable atime: <code bash>zfs set atime=off backup2</code>
  * Compression is performed on the client side, so enable only light compression: <code bash>zfs set compression=lz4 YourPoolName/</code>
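The dataset-level settings above can be combined into one sketch. Pool and dataset names are placeholders; note that ''recordsize=16M'' requires a recent OpenZFS (older releases cap recordsize at 1M), and ''l2arc_mfuonly'' is an OpenZFS module tunable:

<code bash>
# Sketch: dedicated dataset for the PBS datastore (names are placeholders).
zfs create YourPoolName/pbs
zfs set recordsize=16M YourPoolName/pbs    # recent OpenZFS only; older caps at 1M
zfs set atime=off YourPoolName/pbs         # no access-time write on every chunk read
zfs set compression=lz4 YourPoolName/pbs   # cheap; chunks arrive pre-compressed by the client

# SSD read cache (L2ARC) for metadata only, keeping only frequently used entries:
zpool add YourPoolName cache /dev/disk/by-id/your-ssd
zfs set secondarycache=metadata YourPoolName/pbs
echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly
</code>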