====== CEPH ======

===== Prepare =====
Read the Proxmox Ceph requirements. Ceph requires at least one spare hard drive on each node. Topic for later.
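To check which disks on a node are still unused, a plain ''lsblk'' listing is enough (nothing Proxmox-specific assumed):
<code bash>
# A spare disk shows up with no partitions and no mountpoint
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
</code>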
===== Installation =====
  - On one of the nodes: ''Datacenter → Ceph → Install Ceph-nautilus''.
  - On the ''Configuration'' tab set the first Ceph monitor to the current node.
    * NOTE: It is not possible to select other nodes yet, because Ceph is not installed on them.
  - Repeat the installation on each node. The existing configuration is detected automatically.
  - On each node add additional monitors: select the node → ''Ceph → Monitor'', press the ''Create'' button in the Monitor section and select the available nodes.
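The same installation can also be done from the shell with the ''pveceph'' tool; a minimal sketch, assuming PVE 6.x and an example cluster network of 10.10.10.0/24:
<code bash>
# Install the Ceph packages (run on every node)
pveceph install

# Initialise the Ceph configuration - run once, on the first node only
pveceph init --network 10.10.10.0/24

# Create a monitor on this node (repeat on each node that should run a monitor)
pveceph mon create
</code>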
===== create OSD =====
Create an Object Storage Daemon (OSD) on every node in the cluster:
  - Select the host node.
  - Go to the menu ''Ceph → OSD'' and press ''Create: OSD''.
  - Select the spare hard disk.
  - Leave the other defaults.
  - Press ''Create''.

If there is no unused disk to choose, erase the content of the disk with:
<code bash>
ceph-volume lvm zap /dev/... --destroy
</code>
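The OSD can also be created from the shell; a sketch assuming the ''pveceph'' tool and an example device name ''/dev/sdb'':
<code bash>
# Create an OSD on the spare disk (run on the node that owns the disk)
pveceph osd create /dev/sdb

# Check that the new OSD appears in the CRUSH tree
ceph osd tree
</code>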
==== block.db and block.wal ====
Recommended size is 10% (DB) / 1% (WAL) of the OSD size, so for a 2 TB OSD:
  * DB size is 200 GB
  * WAL size is 20 GB
The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s internal journal or write-ahead log. It is recommended to use a fast SSD or NVRAM for better performance.
**Important:** Since Ceph has to write all data to the journal (or WAL+DB) before it can acknowledge writes, keeping the performance of this metadata device and of the OSD in balance is really important!
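When a fast SSD/NVMe should hold the DB (and WAL), it can be passed while creating the OSD. A sketch with example device names; the option names and the size unit (GiB) are taken from pveceph 6.x, verify with ''pveceph osd create --help'':
<code bash>
# Data on the spinning disk, block.db (and implicitly block.wal) on the NVMe,
# sized at 10% of a 2 TB OSD
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 200
</code>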
===== create pool =====
  * ''Size'' - number of replicas for the pool
  * ''Min. Size'' - minimum number of replicas that must be available for the pool to accept I/O
  * ''Crush Rule'' - only ''replicated_rule'' can be chosen
  * ''pg_num'' (Placement Groups) - use the Ceph PGs per Pool Calculator to calculate ''pg_num''
    * NOTE: The PG count can be increased, but NEVER decreased without destroying / recreating the pool. However, increasing the PG count of a pool is one of the most impactful events in a Ceph cluster, and should be avoided for production clusters if possible.
    * See also: Placement Groups
  * ''Add as Storage'' - automatically creates a Proxmox RBD storage (Disk Image, Container)
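The pool can also be created from the CLI; a sketch with example values (pool name ''rbd'' and 128 PGs are placeholders, option names as in pveceph 6.x):
<code bash>
# 3 replicas, 2 required for I/O, 128 placement groups,
# and automatically add the Proxmox RBD storage entry
pveceph pool create rbd --size 3 --min_size 2 --pg_num 128 --add_storages
</code>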
==== pool benchmark ====

Benchmark for pool 'rbd' with a duration of 10 seconds:
<code bash>
# Write benchmark
rados -p rbd bench 10 write --no-cleanup
# Read performance
rados -p rbd bench 10 seq
</code>

===== Ceph RADOS Block Devices (RBD) =====

Ceph provides only pools of objects. To use them as block devices for VMs, an additional layer (RBD) is needed. It can be created manually or during Ceph pool creation (option ''Add as Storage'').
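If the storage was not added during pool creation, it can be attached later; a sketch using ''pvesm'' with example names (storage ID ''ceph-rbd'', pool ''rbd''):
<code bash>
# Register an existing Ceph pool as RBD storage for disk images and containers
pvesm add rbd ceph-rbd --pool rbd --content images,rootdir
</code>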
===== Ceph FS =====
CephFS is an implementation of a POSIX-compliant filesystem on top of Ceph pools. It requires one pool for data (block data) and one pool for filesystem information (metadata). Performance strongly depends on the metadata pool, so it is recommended mainly for backup files.
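CephFS needs at least one metadata server (MDS) before the filesystem can be created; a sketch using ''pveceph'' with the default name, option spelling as in pveceph 6.x (check ''pveceph fs create --help''):
<code bash>
# Create a metadata server on this node (repeat on other nodes for standby MDS)
pveceph mds create

# Create the data and metadata pools, the filesystem and the Proxmox storage entry
pveceph fs create --name cephfs --add-storage
</code>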