vm:proxmox:ceph:db:usage (created 2023/05/31 09:29 by niziak; last modified 2024/01/18 16:01 by niziak)
<code bash>
ceph daemon osd.6 bluefs stats
ceph tell osd.\* bluefs stats
</code>
  
On the 2nd machine, there is 2.9 GB placed in the DB, and 502 MB of DB data is placed on the SLOW device!
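A quick way to spot this kind of spillover (a sketch; assumes ''jq'' is installed and the OSD's admin socket is on this node) is to read the BlueFS perf counters: a non-zero ''slow_used_bytes'' means DB data has landed on the slow device.

```shell
# BlueFS usage counters for a local OSD (osd.6 here)
ceph daemon osd.6 perf dump \
  | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'

# Cluster-wide, spillover also surfaces as a BLUEFS_SPILLOVER health warning
ceph health detail | grep -i spillover
```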
  
Compacting helps a bit:
<code bash>
ceph tell osd.<osdid> compact
# or for all OSDs
ceph tell osd.\* compact
</code>

<code>
{
  "db_total_bytes": 4294959104,
  "db_used_bytes": 2881486848
}

Usage matrix:
DEV/LEV     WAL         DB          SLOW        *           *           REAL        FILES
LOG         0 B         180 GiB     15 GiB      0 B         0 B         3.7 MiB
WAL         0 B         90 MiB      24 MiB      0 B         0 B         109 MiB
DB          0 B         2.6 GiB     319 MiB     0 B         0 B         2.9 GiB     55
SLOW        0 B         0 B         0 B         0 B         0 B         0 B
TOTALS      0 B         183 GiB     15 GiB      0 B         0 B         0 B         65
</code>
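As a sanity check, the DB fill ratio can be computed from the two counters in the output above with plain shell arithmetic:

```shell
# DB usage ratio from the counters above (bytes)
db_total=4294959104
db_used=2881486848
echo "DB used: $(( db_used * 100 / db_total ))%"
# → DB used: 67%
```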
 +
<code bash>
ceph osd set noout
# stops every OSD instance on this node
systemctl stop ceph-osd.target
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-6
</code>
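The steps above stop the OSDs but never bring them back; presumably the inverse follows once fsck comes back clean (a sketch, assuming the default systemd units):

```shell
# Reverse of the steps above: restart the OSDs and re-enable rebalancing
systemctl start ceph-osd.target
ceph osd unset noout
```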