Page moved from vm:proxmox:ceph:move_db_disks to vm:proxmox:ceph:db:move_db_disks (2023/05/31); last modified 2025/03/30 by niziak.
====== Move DB to bigger storage ======

E.g. to move from a 4 GB partition to a 30 GB partition in case of spillover:

<code>
1 OSD(s) experiencing BlueFS spillover

osd.3 spilled over 985 MiB metadata from 'db' device to slow device
</code>
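Spillover means the RocksDB metadata has outgrown the DB partition and overflowed onto the (slower) main device. A commonly cited rule of thumb is to give the DB device roughly 1–4% of the data device's size; a quick back-of-the-envelope check (the 1000 GiB data-device size below is only an example, not a value from this cluster):

<code bash>
# Rough DB sizing check - assumes the ~4% rule of thumb; adjust to taste.
data_gib=1000                      # example data-device size in GiB
db_gib=$(( data_gib * 4 / 100 ))   # 4% of the data device
echo "suggested DB size: ${db_gib} GiB"
</code>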

<code>
lvdisplay
  --- Logical volume ---
  LV Path                /...
  LV Name                osd-block-4a9e142e-ed4d-4890-a018-5651c36071a5

  --- Logical volume ---
  LV Path                /...
  LV Name                osd-db-ad0a2108-bcca-4f15-9747-29d86624b911
  VG Name                ceph-ffd9911c-2d6c-43aa-b637-bec8a2092a30
  LV Size                4,00 GiB
  Current LE             1024

vgdisplay
  --- Volume group ---
  VG Name                ...
  VG Size                4,00 GiB
  PE Size                4,00 MiB
  Total PE               1025
  Alloc PE / Size        1024 / 4,00 GiB
  Free PE / Size         1 / 4,00 MiB

pvdisplay
  --- Physical volume ---
  PV Name                /...
  VG Name                ...
  PV Size                <...
  Allocatable            ...
  PE Size                4,00 MiB
  Total PE               1025
  Free PE                1
  Allocated PE           1024
  PV UUID                ...
</code>
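The vgdisplay output above shows why an in-place resize is not possible here: nearly every physical extent is already allocated. Converting the PE counts to sizes (numbers copied from the listing):

<code bash>
# PE accounting from vgdisplay - only 1 free extent of 4 MiB remains.
pe_size_mib=4      # PE Size
alloc_pe=1024      # Alloc PE
free_pe=1          # Free PE
alloc_mib=$(( alloc_pe * pe_size_mib ))
free_mib=$(( free_pe * pe_size_mib ))
echo "allocated: ${alloc_mib} MiB, free: ${free_mib} MiB"
</code>

So the VG holds exactly the 4 GiB LV plus a single spare extent; growing the DB means either extending the underlying PV or creating a new one.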

===== Resize existing DB partition =====

Remove the separate DB and migrate its data back to the main storage:

<code bash>
ceph osd set noout
systemctl stop ceph-osd@3.service

cat /...
lvdisplay

ceph-volume lvm migrate --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from db --target ceph-5582d170-f77e-495c-93c6-791d9310872c/...
--> Migrate to existing, Source: [...
--> Migration successful.
</code>

Create (or resize) a new PV, VG and LV for the DB and attach it:

<code bash>
pvcreate /...

vgcreate ceph-db-8gb /...
# OR:
vgextend ...

lvcreate -n db-8gb -l 100%FREE ceph-db-8gb
# OR:
lvextend -l +100%FREE ceph-db/db

ceph-volume lvm new-db --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --target ceph-db-8gb/...
--> Making new volume at /...
Running command: /...
Running command: /...
--> New volume attached.
</code>

Migrate the DB data back from the main device onto the new DB volume:

<code bash>
ceph-volume lvm migrate --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from data db --target ceph-db-8gb/...
--> Migrate to existing, Source: [...
--> Migration successful.
</code>

<code bash>
systemctl start ceph-osd@3.service
ceph osd unset noout
</code>
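After the OSD is back up, it is worth confirming that the new DB volume is attached and that the spillover warning clears. A minimal check (a sketch; plain cluster-wide commands, nothing specific to this setup):

<code bash>
ceph-volume lvm list    # the OSD should now report a [db] device on the new LV
ceph health detail      # the BlueFS spillover warning should be gone
</code>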

===== OR: create a new one =====

Create a new 41 GB (41984 MB) partition (/...):
<code bash>
pvcreate /...
vgcreate ceph-db-40gb /...
lvcreate -n db-40gb -l 100%FREE ceph-db-40gb
</code>
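A note on the sizes above: the parenthesised 41984 MB is simply 41 GB expressed in binary megabytes (41 × 1024), handy when a partitioning tool wants the size in MiB. Pure arithmetic, nothing cluster-specific:

<code bash>
gib=41
mib=$(( gib * 1024 ))   # 1 GiB = 1024 MiB
echo "${mib} MiB"
</code>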

<code bash>
ceph osd set noout
systemctl stop ceph-osd@3.service

cat /...

ceph-volume lvm migrate --osd-id 3 --osd-fsid 4a9e142e-ed4d-4890-a018-5651c36071a5 --from db --target ceph-db-40gb/...

systemctl start ceph-osd@3.service
ceph osd unset noout
</code>

====== Move DB disks ======

====== draft ======
<code bash>
$ systemctl stop ceph-osd@68
$ ceph-bluestore-tool --path /...
/...
/...
$ systemctl start ceph-osd@68
</code>