  * MDS - (Meta Data Server)
  * RBD - (RADOS Block Device) - A Ceph component that provides access to Ceph storage as a thinly provisioned block device. When an application writes to a Block Device, Ceph implements data redundancy and enhances I/O performance by replicating and striping data across the Storage Cluster.

===== Ceph RADOS Block Devices (RBD) =====

Ceph provides only pools of objects. To use them as block devices for VMs, an additional layer (RBD) is needed.
It can be created manually or during Ceph pool creation (option ''Add as Storage'').

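A minimal sketch of creating an RBD image by hand (the pool name ''vm-pool'' and image name ''test-disk'' are examples, not names from this setup):

<code bash>
# Create a 4 GiB thin-provisioned image in an existing pool
rbd create --size 4096 vm-pool/test-disk
# List images in the pool and inspect the new one
rbd ls vm-pool
rbd info vm-pool/test-disk
</code>

The ''Add as Storage'' option in the Proxmox GUI does the equivalent wiring automatically when the pool is created.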
===== Ceph FS =====
CephFS is an implementation of a POSIX-compliant filesystem on top of Ceph pools.
It requires one pool for file data (block data) and one to keep filesystem information (metadata).
Performance strictly depends on the metadata pool, so it is recommended mainly for backup files.
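The two-pool layout described above can be created manually like this (pool names and PG counts are examples; Proxmox can also do this from the GUI):

<code bash>
# One pool for data, one for metadata (PG count of 32 is an example)
ceph osd pool create cephfs_data 32
ceph osd pool create cephfs_metadata 32
# Create the filesystem: metadata pool is named first, then the data pool
ceph fs new cephfs cephfs_metadata cephfs_data
# Check the result
ceph fs status
</code>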
  
Used ports:
</code>
  
===== restart ceph services =====

<code bash>
systemctl stop ceph\*.service ceph\*.target
systemctl start ceph.target
</code>
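After the restart it is worth confirming that all units came back and the cluster is healthy; a sketch:

<code bash>
# List all Ceph-related units and their state
systemctl list-units 'ceph*'
# Cluster-wide health summary (expect HEALTH_OK once recovery settles)
ceph -s
</code>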
  
===== create pool =====
</code>
  