    * primary PG - the original/first PG; the others are replicas. The primary PG is used for reads.
  * use relatively more PGs than a big cluster would - better balance, but handling PGs consumes resources (RAM)
    * e.g. for 7 OSDs x 2 TB the PG autoscaler recommends 256 PGs. After changing to 384, IOPS drastically increase and latency drops.
      Setting 512 PGs wasn't possible because of the 250 PG/OSD limit.
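The change above can be applied per pool. A minimal sketch (the pool name ''mypool'' is a placeholder; check the autoscaler's recommendation first):

<code bash>
# show current autoscaler recommendations per pool
ceph osd pool autoscale-status

# raise pg_num for a pool (pool name is a placeholder)
ceph osd pool set mypool pg_num 384

# the 250 PG/OSD limit is controlled by mon_max_pg_per_osd
# and can be raised if really needed (use with care)
ceph config set mon mon_max_pg_per_osd 300
</code>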
  
=== balancer ===
==== check cluster balance ====
  
<code bash>
ceph -s
ceph osd df    # shows standard deviation
</code>
  
There is no built-in tool to show primary PG balancing. A helper script is available at https://github.com/JoshSalomon/Cephalocon-2019/blob/master/pool_pgs_osd.sh
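As a rough substitute, primary PG distribution can be approximated from ''ceph pg dump'' output. A sketch, assuming the acting primary OSD is the last column of ''pgs_brief'' output (the column layout may differ between Ceph releases):

<code bash>
# count how many PGs each OSD is acting primary for
# (assumes ACTING_PRIMARY is the last column - verify on your release)
ceph pg dump pgs_brief | awk 'NR>1 {print $NF}' | sort -n | uniq -c | sort -rn
</code>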