====== performance ======

===== FS tuning =====

PBS stores everything in files. The maximum file size is 4MB:
  * block device backups are stored in fixed 4MB chunks
  * file backup chunks vary from 64kB to 4MB

Filesystems:
  * ZFS
    * [[https://
    * the default 128kB recordsize can cause write amplification for chunks smaller than 128kB
    * a larger recordsize = fewer IOPS
    * create a histogram of the chunk sizes in an existing PBS datastore (see the sketch below)

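A minimal sketch of such a histogram, assuming the datastore lives at ''/mnt/datastore/backup2'' (the path is a placeholder) and that chunks are the regular files below its ''.chunks'' directory:

<code bash>
# Bucket PBS chunk sizes by power of two and count chunks per bucket.
# /mnt/datastore/backup2 is a placeholder - point it at your datastore.
find /mnt/datastore/backup2/.chunks -type f -printf '%s\n' \
  | awk '{ b = 2 ^ int(log($1) / log(2)); hist[b]++ }
         END { for (b in hist) printf "%12d bytes: %d chunks\n", b, hist[b] }' \
  | sort -n
</code>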
<code>
PBS needs high IOPS performance. Benefit of ZFS would be that you can accelerate it using
SSDs to store the metadata. But won't help that much with verify tasks (but still a bit as
the HDDs are hit by less IO, because all the metadata that is read/written is offloaded to
the SSDs).
In general HDDs shouldn't be used for PBS. But if you still do it, it's highly recommended
to also use SSDs for storing the metadata.
</code>
How to calculate special VDEV size: [[https://
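A hedged sketch of adding such a special vdev to an existing pool; the pool name ''backup2'' and the device paths are placeholders:

<code bash>
# Mirrored special vdev for metadata. Always mirror it: losing the
# special vdev loses the whole pool. Device paths are placeholders.
zpool add backup2 special mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2

# Optionally also store small data blocks (here <=64kB) on the SSDs:
zfs set special_small_blocks=64K backup2
</code>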
So:
  * RAIDZ on HDDs is very slow for PBS: throughput is n x HDD, but IOPS are limited to those of a single HDD
  * for HDDs, use striped mirrors to multiply IOPS (e.g. for 6 disks: 3 striped mirrors of 2 HDDs each) - see the sketch below
  * or use dRAID, where groups of stripes are used by design
  * use at least a separate ZFS dataset for backups, then set the record size: <code bash>zfs set recordsize=1M YourPoolName/YourDataset</code>
  * disable atime: <code bash>zfs set atime=off backup2</code>
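A sketch of the 6-disk striped-mirror layout combined with the dataset tuning above; pool name, dataset name and device paths are placeholders:

<code bash>
# 3 striped 2-way mirrors = roughly 3x the IOPS of a single HDD.
# All names and device paths below are placeholders.
zpool create backup2 \
  mirror /dev/disk/by-id/hdd-1 /dev/disk/by-id/hdd-2 \
  mirror /dev/disk/by-id/hdd-3 /dev/disk/by-id/hdd-4 \
  mirror /dev/disk/by-id/hdd-5 /dev/disk/by-id/hdd-6

# Dedicated dataset for the PBS datastore, tuned as above.
zfs create backup2/pbs
zfs set recordsize=1M atime=off backup2/pbs
</code>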
+ | |||
+ | |||
+ | |||
+ | |||
Don't use raidz1/2/3: PBS needs high IOPS performance, and IOPS only scale with the number of striped vdevs, not with the number of disks. So 20 disks in a single raidz wouldn't deliver more IOPS than a single disk. And resilvering would also take forever.
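To see how IOPS actually spread across the vdevs, e.g. while a verify or GC task runs (the pool name is a placeholder):

<code bash>
# Per-vdev read/write operations, refreshed every 5 seconds.
zpool iostat -v backup2 5
</code>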