===== Performance tips =====

Ceph is built for scale and works great in large clusters. In a small cluster, every node will be heavily loaded.

  * adapt the number of PGs to the number of OSDs to spread traffic evenly
  * use ''
  * enable ''

==== performance ====

  * [[https://
  * the number of PGs should be a power of 2 (or halfway between two powers of 2)
  * same utilization (% full) per device
  * same number of PGs per OSD = same number of requests per device
  * same number of primary PGs per OSD = read operations spread evenly
  * primary PG - original/
  * use relatively more PGs than in a big cluster - better balance, but handling PGs consumes resources (RAM)
  * e.g. for 7 OSDs x 2 TB the PG autoscaler recommends 256 PGs; after changing to 384, IOPS increased drastically and latency dropped. Setting 512 PGs was not possible because of the 250 PG/OSD limit (see the sketch after this list).
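
A minimal sketch of checking and raising the PG count for the example above; the pool name ''rbd'' and the value 384 are taken from this page, adjust them to your cluster:

<code bash>
# show what the autoscaler currently recommends per pool
ceph osd pool autoscale-status

# raise the PG count of the 'rbd' pool to 384 (pgp_num follows automatically on recent releases)
ceph osd pool set rbd pg_num 384

# verify the new value
ceph osd pool get rbd pg_num
</code>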

=== balancer ===

<code bash>
ceph mgr module enable balancer
ceph balancer on
ceph balancer mode upmap
</code>
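
To verify that the balancer is active and how well the PGs are currently distributed, for example:

<code bash>
ceph balancer status   # shows the active flag, mode and any queued plans
ceph balancer eval     # prints a distribution score; lower is better
</code>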

=== CRUSH reweight ===

If possible use ''

Override the default CRUSH weight assignment (by default the CRUSH weight is derived from the device capacity).
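
A minimal sketch of a manual CRUSH reweight; ''osd.3'' and the weight ''1.8'' are hypothetical values for illustration:

<code bash>
ceph osd tree                      # show the current CRUSH weights
ceph osd crush reweight osd.3 1.8  # override the CRUSH weight of osd.3
</code>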

=== PG autoscaler ===

It is better to use it in warn mode, so that it does not put unexpected load on the cluster when the PG number changes.

<code bash>
ceph mgr module enable pg_autoscaler
# ceph osd pool set <pool-name> pg_autoscale_mode <mode>   (modes: off | on | warn)
ceph osd pool set rbd pg_autoscale_mode warn
</code>

It is possible to set desired/
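
The autoscaler can also be given per-pool hints; a sketch of the common ones (the pool name ''rbd'' and the values are hypothetical, and the two target-size options are alternatives, not meant to be set together):

<code bash>
# minimum number of PGs the autoscaler may shrink the pool to
ceph osd pool set rbd pg_num_min 128

# expected size of the pool, either as an absolute value...
ceph osd pool set rbd target_size_bytes 1T
# ...or as a fraction of the total cluster capacity
ceph osd pool set rbd target_size_ratio 0.5
</code>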

==== check cluster balance ====

<code bash>
ceph -s
ceph osd df  # shows standard deviation
</code>

There are no built-in tools to show primary PG balancing. A tool is available at https://
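
A rough sketch of counting primary PGs per OSD from ''ceph pg dump''; it assumes the ''pgs_brief'' output format where the last column is the acting primary OSD:

<code bash>
ceph pg dump pgs_brief 2>/dev/null \
  | awk 'NR > 1 { primaries[$NF]++ } END { for (osd in primaries) print "osd." osd ": " primaries[osd] " primary PGs" }'
</code>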

==== performance on slow HDDs ====