vm:proxmox:ceph:performance [2026/04/01 17:31] (current) niziak
  * net latency <200us (''ping -s 1000 pve'')
  * [[https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/|Ceph: A Journey to 1 TiB/s]]
    * Ceph is incredibly sensitive to latency introduced by CPU C-state transitions. Set ''Max perf'' in BIOS to disable C-States, or boot Linux with ''GRUB_CMDLINE_LINUX="idle=poll intel_idle.max_cstate=0 intel_pstate=disable processor.max_cstate=1"''
    * Disable IOMMU in kernel
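A quick sanity check (not Ceph-specific) that the deep C-states are actually disabled after rebooting with the flags above:

<code bash>
# list the C-states the kernel can enter on CPU0; with idle=poll only
# POLL (and possibly C1) should remain
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name 2>/dev/null

# confirm the boot flags actually reached the kernel
cat /proc/cmdline
</code>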
  
ceph config get osd osd_journal_size
5120
</code>

==== bluestore_min_alloc_size ====

  * Read: [[https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/#minimum-allocation-size]]
  * Restart of OSD needed
  * Impact: a smaller value reduces space waste (space amplification) but increases metadata overhead, while a larger value helps with large sequential writes but wastes space on small objects
  * These settings are generally applied to new or freshly deployed OSDs

<code bash>
# ceph tell 'osd.*' config show | grep bluestore_min_alloc
    "bluestore_min_alloc_size": "0",
    "bluestore_min_alloc_size_hdd": "4096",
    "bluestore_min_alloc_size_ssd": "4096",

# ceph config set global bluestore_min_alloc_size_hdd 16384
</code>
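The space-amplification side of the trade-off is simple arithmetic: every object consumes a whole number of allocation units. An illustrative sketch (the 1 KiB object size is an arbitrary example, not a Ceph default):

<code bash>
# space consumed by a hypothetical 1 KiB object = object size rounded
# up to a multiple of bluestore_min_alloc_size
for alloc in 4096 16384 65536; do
  awk -v a="$alloc" 'BEGIN {
    obj  = 1024
    used = int((obj + a - 1) / a) * a
    printf "min_alloc=%-6d uses %6d B for a %d B object (%.0fx)\n", a, used, obj, used / obj
  }'
done
</code>

With 16 KiB allocation units a 1 KiB object occupies 16 KiB on disk (16x amplification); large sequential writes are unaffected because they fill whole allocation units.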
 +
==== filestore_op_threads ====

  * Only relevant for legacy FileStore OSDs; BlueStore OSDs ignore this option

<code bash>
# ceph tell 'osd.*' config show | grep filestore_op_threads
"filestore_op_threads": "2"

# ceph tell 'osd.*' config set filestore_op_threads 4
</code>
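Note that ''ceph tell ... config set'' only changes the running daemons and is lost on restart. A sketch of making the change persistent, assuming a cluster with the central config store (Octopus or later):

<code bash>
# runtime-only change (lost on OSD restart):
ceph tell 'osd.*' config set filestore_op_threads 4

# persist in the monitors' central config database:
ceph config set osd filestore_op_threads 4

# verify the stored value:
ceph config get osd filestore_op_threads
</code>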