<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://192.168.180.206:8001/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://192.168.180.206:8001/feed.php">
        <title>wiki.niziak.spox.org - vm:proxmox:ceph</title>
        <description></description>
        <link>http://192.168.180.206:8001/</link>
        <image rdf:resource="http://192.168.180.206:8001/_media/wiki:dokuwiki.svg" />
        <dc:date>2026-05-15T12:00:03+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:compression?rev=1761731227&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:db?rev=1685517780&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:guest_performance?rev=1761929556&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:issues?rev=1715930500&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:move_disks?rev=1658734918&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:osd_creation?rev=1638342940&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:performance_monitoring?rev=1617875088&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:performance?rev=1775057479&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:pg?rev=1761929470&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:rebalance_recovery?rev=1690536164&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:remove_node?rev=1610394456&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph:replace_node?rev=1621450475&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://192.168.180.206:8001/_media/wiki:dokuwiki.svg">
        <title>wiki.niziak.spox.org</title>
        <link>http://192.168.180.206:8001/</link>
        <url>http://192.168.180.206:8001/_media/wiki:dokuwiki.svg</url>
    </image>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:compression?rev=1761731227&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-10-29T09:47:07+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>CEPH inline compression</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:compression?rev=1761731227&amp;do=diff</link>
        <description>CEPH inline compression

&lt;https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#inline-compression&gt;

BlueStore compression performance

get compression ratio


ceph df detail

--- POOLS ---
POOL             ID  PGS   STORED   (DATA)  (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
rbd               1  512  2.3 TiB  2.3 TiB  15 KiB  638.96k  6.4 TiB  6.4 TiB   44 KiB  59.17    1.5 TiB            N/A      …</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:db?rev=1685517780&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-05-31T07:23:00+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>DB</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:db?rev=1685517780&amp;do=diff</link>
        <description>DB

block.db and block.wal
The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s internal journal or write-ahead log. 
It is recommended to use a fast SSD or NVRAM for better performance. Important:
Since Ceph has to write all data to the journal (or WAL+DB) before it can ACK writes, 
having this metadata and OSD performance in balance is really important!</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:guest_performance?rev=1761929556&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-10-31T16:52:36+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Guest performance</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:guest_performance?rev=1761929556&amp;do=diff</link>
        <description>Guest performance

Tips

Guests:

	*  writeback cache - it groups small sequential writes into one big request, so CEPH will handle them as a single transaction

The best results for a Windows guest:

	*  HDD (no SSD)
		*  probably CEPH handles a bunch of ordered requests better (as with rotational drives)</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:issues?rev=1715930500&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-05-17T07:21:40+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Issues</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:issues?rev=1715930500&amp;do=diff</link>
        <description>Issues

auth: unable to find a keyring

It is not possible to create a ceph OSD either from the WebUI or from the cmdline: 

pveceph osd create /dev/sdc


Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
 stderr: 2021-01-28T10:21:24.996+0100 7fd1a848f700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2021-01-28T10:2…</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:move_disks?rev=1658734918&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-07-25T07:41:58+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Move OSDs disks between hosts</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:move_disks?rev=1658734918&amp;do=diff</link>
        <description>Move OSDs disks between hosts

reinstall PVE

After reinstallation of PVE, CEPH automatically detects the OSD but keeps it down. To start the OSD, run:
ceph-volume lvm activate --all

1st info


If you used ceph-deploy and/or ceph-disk to set up these OSDs (that is, if 
they are stored on labeled GPT partitions such that upstart is 
automagically starting up the ceph-osd daemons for you without you putting 
anything in /etc/fstab to manually mount the volumes) then all of this 
should be plug …</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:osd_creation?rev=1638342940&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-12-01T07:15:40+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title></title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:osd_creation?rev=1638342940&amp;do=diff</link>
        <description>Header
Proxmox
Virtual Environment 7.1-4
Node &#039;pve2&#039;
No OSD selected
Logs
()
create OSD on /dev/sdc (bluestore)
creating block.db on &#039;/dev/nvme0n1p3&#039;
  Physical volume &quot;/dev/nvme0n1p3&quot; successfully created.
  Volume group &quot;ceph-50bbae14-dbdb-48c0-9fa9-8a50cb991285&quot; successfully created
  Rounding up size to full physical extent 3.99 GiB
  Logical volume &quot;osd-db-7d84ca53-a110-44ba-93f2-f06108526ed9&quot; created.
Warning: The kernel is still using the old partition table.
The new table will be used at…</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:performance_monitoring?rev=1617875088&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-04-08T09:44:48+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>CEPH performance monitoring</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:performance_monitoring?rev=1617875088&amp;do=diff</link>
        <description>CEPH performance monitoring

basic info

ceph


ceph -s
ceph -w
ceph df
ceph osd tree
ceph osd df tree


rados


rados df


Where:

	*  USED COMPR: amount of space allocated for compressed data (i.e. this includes compressed data plus all the allocation, replication and erasure coding overhead).</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:performance?rev=1775057479&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-04-01T15:31:19+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>CEPH performance</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:performance?rev=1775057479&amp;do=diff</link>
        <description>CEPH performance

	*  BlueStore Config Reference: Sizing
	*  &lt;https://yourcmc.ru/wiki/Ceph_performance&gt;
	*  Ceph Performance Tuning Checklist
	*  New to Ceph, HDD pool is extremely slow
	*  Ceph Storage Performance
	*  Ceph: A Journey to 1 TiB/s
	*  &lt;https://www.boniface.me/posts/pvc-ceph-tuning-adventures/&gt;

Performance tips

Ceph is built for scale and works great in large clusters. In a small cluster, every node will be heavily loaded.

	*  adapt the PG count to the number of OSDs to spread traffic evenly
	* …</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:pg?rev=1761929470&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-10-31T16:51:10+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Placement Groups</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:pg?rev=1761929470&amp;do=diff</link>
        <description>Placement Groups

PG

Calculations

	*  &lt;https://old.ceph.com/pgcalc/&gt;
	*  Max PG per OSD is 300
	*  Result must be rounded up to the nearest power of 2.

reweight

Adjust reweight according to current OSD utilisation - to prevent one OSD from filling up

ceph osd reweight-by-utilization</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:rebalance_recovery?rev=1690536164&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-07-28T09:22:44+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Rebalance / recovery speed</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:rebalance_recovery?rev=1690536164&amp;do=diff</link>
        <description>Rebalance / recovery speed

From &lt;https://www.suse.com/support/kb/doc/?id=000019693&gt;:

	*  osd max backfills - this is the maximum number of backfill operations allowed to/from an OSD. The higher the number, the quicker the recovery, which might impact overall cluster performance until recovery finishes.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:remove_node?rev=1610394456&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-01-11T19:47:36+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Remove failed node</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:remove_node?rev=1610394456&amp;do=diff</link>
        <description>Remove failed node


pvecm status

Highest expected: 4



pvecm delnode &lt;hostname&gt;

Killing node 2



pvecm status

Highest expected: 3


And reload the WebUI to refresh the cluster nodes.

Remove CEPH monitor


ceph mon remove &lt;host&gt;


Remove CEPH OSD


ceph osd out osd.2
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2
ceph osd crush rm &lt;host&gt;</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph:replace_node?rev=1621450475&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-05-19T18:54:35+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Replace node</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph:replace_node?rev=1621450475&amp;do=diff</link>
        <description>Replace node

A new Proxmox node was installed under the same hostname pve3.

	*  Remove the old monitor: ceph mon remove pve3
	*  On pve3 node: 
		*  Remove mon.pve3 entry from /etc/pve/ceph.conf
		*  Remove mon_host = X.X.X.X in section global

	*  From GUI</description>
    </item>
</rdf:RDF>
