====== Move DB to bigger storage ======
E.g. to move from a 4GB partition to a 30GB partition in case of spillover:
<code>
1 OSD(s) experiencing BlueFS spillover

osd.3 spilled over 985 MiB metadata from 'db' device (...) to slow device
</code>
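
Which OSD is affected and by how much can be read from the cluster health details (a quick check, assuming the ''ceph'' CLI is available; ''ceph daemon'' must be run on the node hosting the OSD):

<code bash>
# list OSDs currently reporting BlueFS spillover
ceph health detail | grep -i spillover

# raw BlueFS usage counters (db_used_bytes, slow_used_bytes, ...) for osd.3
ceph daemon osd.3 perf dump bluefs
</code>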

<code>
lvdisplay
  --- Logical volume ---
  ...
</code>
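
The same information in a more compact form than ''lvdisplay'', if preferred:

<code bash>
# one line per LV: name, volume group and size
lvs -o lv_name,vg_name,lv_size
</code>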
===== resize existing DB partition =====

Remove the separate DB and migrate its data back to the main storage:
<code bash>
ceph osd set noout
systemctl stop ceph-osd@3.service

# the OSD fsid is stored in the OSD's data directory
cat /var/lib/ceph/osd/ceph-3/fsid
lvdisplay

# target is the OSD's main block LV (with default ceph-volume naming: <vg>/osd-block-<osd-fsid>)
ceph-volume lvm migrate --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from db --target ceph-5582d170-f77e-495c-93c6-791d9310872c/...
--> Migrate to existing, Source: [...]
--> Migration successful.
</code>
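
Before continuing, it is worth confirming that the OSD no longer references a separate DB device (output format varies between Ceph releases):

<code bash>
# the entry for osd.3 should now show only a [block] device, no [db] device
ceph-volume lvm list
</code>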

Create (or resize) a new PV, VG and LV for the DB and attach it:

<code bash>
pvcreate /dev/...

vgcreate ceph-db-8gb /dev/...
OR
vgextend ...

lvcreate -n db-8gb -l 100%FREE ceph-db-8gb
OR
lvextend -l +100%FREE ceph-db/...

ceph-volume lvm new-db --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --target ceph-db-8gb/db-8gb
--> Making new volume at /...
Running command: /...
Running command: /...
--> New volume attached.
</code>
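
A quick sanity check that the new volume exists and is attached (assuming the VG/LV names used above):

<code bash>
# the new DB LV should appear with its full size
lvs ceph-db-8gb

# the OSD should now list a [db] device again
ceph-volume lvm list
</code>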

<code bash>
# migrate DB data from the main data device (and any old DB) to the new LV
ceph-volume lvm migrate --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from data db --target ceph-db-8gb/db-8gb
--> Migrate to existing, Source: [...]
--> Migration successful.
</code>

<code bash>
systemctl start ceph-osd@3.service
ceph osd unset noout
</code>
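
Once the OSD is back up, the spillover warning should clear (re-checking with the same commands as at the top):

<code bash>
ceph -s
ceph health detail | grep -i spillover
</code>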

===== OR: create new one =====

Create a new 41GB (41984MB) partition:
<code>
pvcreate /dev/...
vgcreate ceph-db-40gb /dev/...
lvcreate -n db-40gb -l 100%FREE ceph-db-40gb
</code>
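
The fresh LV is then attached and the DB migrated exactly as in the previous section, only with the new VG/LV names (a sketch reusing the osd-id/fsid from above):

<code bash>
ceph-volume lvm new-db --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --target ceph-db-40gb/db-40gb
ceph-volume lvm migrate --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from data db --target ceph-db-40gb/db-40gb
</code>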