  * primary PG - original/
  * use relatively more PGs than for a big cluster - better balance, but handling PGs consumes resources (RAM)
    * e.g. for 7 OSDs x 2TB, the PG autoscaler recommends 256 PGs. After changing to 384, IOPS drastically increase and latency drops. Setting 512 PGs wasn't possible because of the limit of 250 PGs/OSD.
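The autoscaler's 256-PG recommendation matches the common sizing rule of thumb: roughly 100 PGs per OSD, divided by the replica count, rounded to a power of two. A minimal sketch, assuming 3 replicas (the replica count is not stated above):

<code bash>
#!/bin/sh
# Rule-of-thumb PG sizing: (OSDs * 100) / replicas, rounded up to a power of two.
# Assumption: 3-way replication; adjust "replicas" for your pool.
osds=7
replicas=3
target=$(( osds * 100 / replicas ))   # 233 for 7 OSDs, 3 replicas

# Round up to the next power of two.
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # 256
</code>

512 would be the next step up, but 512 / 7 OSDs ≈ 73 PGs per OSD per pool is already fine; the 250 PGs/OSD ceiling mentioned above (the ''mon_max_pg_per_osd'' default in recent Ceph releases) counts PGs from all pools together.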
=== balancer ===
==== check cluster balance ====
<code bash>
ceph -s
ceph osd df  # shows standard deviation
</code>
No tools to show primary PG balancing. Tool on https://
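Lacking a dedicated tool, a rough primary-PGs-per-OSD tally can be derived from ''ceph pg dump''. A sketch using a hypothetical, hand-made two-column sample (PG id, acting primary OSD) in place of real cluster output, which has many more columns:

<code bash>
#!/bin/sh
# Hypothetical sample standing in for the PG-id and ACTING_PRIMARY columns
# of "ceph pg dump pgs" output.
sample='1.0 0
1.1 2
1.2 0
1.3 1
1.4 0'

# Tally primary PGs per OSD; a skewed tally means reads concentrate on few OSDs.
echo "$sample" | awk '{n[$2]++} END {for (o in n) print "osd." o, n[o]}' | sort
</code>

On a live cluster the same awk tally would be fed from ''ceph pg dump'' after selecting the acting-primary column.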