====== performance ======
| + | |||
| + | ===== FS tuning ===== | ||
| + | |||
| + | PBS stores everything in files. Maximum file size is 4MB. | ||
| + | * Block device backup is in 4MB chunks | ||
| + | * File backup chunk varies from 64kB to 4MB | ||
| + | |||
| + | Filesystems: | ||
| + | * ZFS | ||
| + | * [[https:// | ||
| + | * 128kB recordsize can cause write amplification for chinks smaller than 128kB | ||
| + | * larger recordsize = less IOPS | ||
| + | * Create histogram from existing PBS chunks sizes. | ||
| + | |||
| < | < | ||
  * Use dRAID - where groups of stripes are used by design.
  * Use at least a separate ZFS dataset for backup, then set the block size: <code bash>zfs set recordsize=1M YourPoolName/</code>
  * Disable atime: <code bash>zfs set atime=off backup2</code>
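
Putting the dataset settings together, a minimal sketch (the pool name ''backup2'' is taken from the atime example above; the dataset name ''pbs'' is an assumption):

<code bash>
# Dedicated dataset for the PBS datastore
zfs create backup2/pbs
# Larger records mean fewer IOPS for the mostly-large PBS chunks
zfs set recordsize=1M backup2/pbs
# Skip access-time updates on every chunk read
zfs set atime=off backup2/pbs
# Verify
zfs get recordsize,atime backup2/pbs
</code>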
| + | |||
| + | |||
Don't use raidz1/2/3: PBS needs high IOPS, and IOPS only scale with the number of striped vdevs, not with the number of disks. So 20 disks in a single raidz vdev wouldn't deliver more IOPS than a single disk, and resilvering would also take forever.
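
To illustrate: striped mirrors give one disk's worth of IOPS per mirror vdev, while a single raidz vdev delivers roughly single-disk IOPS in total. A sketch with placeholder device names (the dRAID spec shown is just one possible layout):

<code bash>
# 8 disks as 4 striped mirror vdevs: ~4x single-disk IOPS, fast resilver
zpool create backup2 \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh

# Same 8 disks as one raidz2 vdev: ~1x single-disk IOPS, slow resilver
#zpool create backup2 raidz2 /dev/sd[a-h]

# dRAID alternative with a distributed spare for faster rebuilds
#zpool create backup2 draid2:1s /dev/sd[a-h]
</code>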