<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://192.168.180.206:8001/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://192.168.180.206:8001/feed.php">
        <title>wiki.niziak.spox.org - vm:proxmox</title>
        <description></description>
        <link>http://192.168.180.206:8001/</link>
        <image rdf:resource="http://192.168.180.206:8001/_media/wiki:dokuwiki.svg" />
       <dc:date>2026-05-14T14:41:10+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:backup?rev=1730222927&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ceph?rev=1664221241&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:cli?rev=1587542837&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:cluster?rev=1600235565&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:disaster_recovery?rev=1707722791&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:docker?rev=1607284876&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:firewall?rev=1752474550&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:ha?rev=1587734020&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:influx?rev=1621956947&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:issues?rev=1711478394&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:kvm?rev=1672503948&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:lxc_vs_vm?rev=1755930703&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:lxc?rev=1613978297&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:network?rev=1664864237&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:on_debian?rev=1600234781&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:pbs?rev=1743620570&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:sdn?rev=1734724530&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:shutdown?rev=1730281319&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:storage?rev=1587913047&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:use_host_monitor_and_keyboard?rev=1609677318&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/vm:proxmox:zfs?rev=1702475662&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://192.168.180.206:8001/_media/wiki:dokuwiki.svg">
        <title>wiki.niziak.spox.org</title>
        <link>http://192.168.180.206:8001/</link>
        <url>http://192.168.180.206:8001/_media/wiki:dokuwiki.svg</url>
    </image>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:backup?rev=1730222927&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-10-29T17:28:47+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Backup</title>
        <link>http://192.168.180.206:8001/vm:proxmox:backup?rev=1730222927&amp;do=diff</link>
        <description>Backup

VM Backup Fleecing

&lt;https://pve.proxmox.com/wiki/Backup_and_Restore&gt;

VM Backup Fleecing
On a storage that’s not thinly provisioned, e.g. LVM or ZFS without the sparse option, the
full size of the original disk needs to be reserved for the fleecing image up-front. On a
thinly provisioned storage, the fleecing image can grow to the same size as the original
image only if the guest re-writes a whole disk while the backup is busy with another disk.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ceph?rev=1664221241&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-09-26T19:40:41+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>CEPH</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ceph?rev=1664221241&amp;do=diff</link>
        <description>CEPH

	*  OSD - (Object Storage Device) - Storage on a physical device or logical unit (LUN). Typically, data on an OSD is configured as a btrfs file system to take advantage of its snapshot features. However, other file systems such as XFS can also be used.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:cli?rev=1587542837&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-04-22T08:07:17+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Commandline</title>
        <link>http://192.168.180.206:8001/vm:proxmox:cli?rev=1587542837&amp;do=diff</link>
        <description>Commandline

	*  pvecm - Proxmox VE Cluster Manager
	*  pvesm - Proxmox VE Storage Manager</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:cluster?rev=1600235565&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-09-16T05:52:45+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Cluster</title>
        <link>http://192.168.180.206:8001/vm:proxmox:cluster?rev=1600235565&amp;do=diff</link>
        <description>Cluster

Different nodes configuration

The cluster uses ONE shared content of /etc/pve; the content of this directory is synchronized across all cluster nodes.
IMPACT:

	*  storage configuration is cluster-wide. To create node-local storage, the storage must be restricted to its own node.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:disaster_recovery?rev=1707722791&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-12T07:26:31+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Disaster recovery</title>
        <link>http://192.168.180.206:8001/vm:proxmox:disaster_recovery?rev=1707722791&amp;do=diff</link>
        <description>Disaster recovery

replace NVMe device

Only one NVMe slot is available, so the idea is to copy the NVMe contents to an HDD and then restore them onto the new NVMe device.

Stop CEPH:


systemctl stop ceph.target
systemctl stop ceph-osd.target
systemctl stop ceph-mgr.target
systemctl stop ceph-mon.target
systemctl stop ceph-mds.target
systemctl stop ceph-crash.service</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:docker?rev=1607284876&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-12-06T20:01:16+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Dockers under Proxmox</title>
        <link>http://192.168.180.206:8001/vm:proxmox:docker?rev=1607284876&amp;do=diff</link>
        <description>Dockers under Proxmox

Possible solutions:

	*  Docker on the PVE host (affects host resources, hard to control, low security)
	*  Docker in an unprivileged LXC container (compatibility problems, permissions)
	*  Docker in a privileged LXC container (better, but still a security risk)</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:firewall?rev=1752474550&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-07-14T06:29:10+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>PVE Firewall</title>
        <link>http://192.168.180.206:8001/vm:proxmox:firewall?rev=1752474550&amp;do=diff</link>
        <description>PVE Firewall

Important: If you enable the firewall, traffic to all hosts is blocked by default. The only exceptions are the WebGUI (8006) and SSH (22) from your local network.

To use the firewall:

	*  enable it at the Datacenter level (note: the default input policy is REJECT!)</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:ha?rev=1587734020&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-04-24T13:13:40+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>High Availability</title>
        <link>http://192.168.180.206:8001/vm:proxmox:ha?rev=1587734020&amp;do=diff</link>
        <description>High Availability

Needs a minimum of 3 nodes: a problem with one node has to be confirmed by the 2 other nodes (to keep quorum).

cheap 3rd node

It is possible to add a cheap third node to provide a “Quorum Device” (QDevice) for the other nodes (e.g. Raspberry Pi based).</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:influx?rev=1621956947&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-05-25T15:35:47+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Influxdb2 and Grafana</title>
        <link>http://192.168.180.206:8001/vm:proxmox:influx?rev=1621956947&amp;do=diff</link>
        <description>Influxdb2 and Grafana

Proxmox VE sends the data over UDP, so the InfluxDB server has to be configured accordingly. 

influxdb2

Proxmox uses UDP (InfluxDB v1 compatible). 
Install the Telegraf Socket Listener plugin

&lt;https://hub.docker.com/_/telegraf&gt;

echo &quot;my_measurement,my_tag_key=my_tag_value value=1&quot; | nc -u -4 -w 1 docker-host 3522</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:issues?rev=1711478394&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-03-26T18:39:54+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Issues</title>
        <link>http://192.168.180.206:8001/vm:proxmox:issues?rev=1711478394&amp;do=diff</link>
        <description>Issues

run replication first


TASK ERROR: snapshot &#039;up&#039; needed by replication job &#039;122-2&#039; - run replication first


The job was removed previously (months ago) but was stuck in the Removal Scheduled state. 
It looks like jobs must be enabled to delete remote snapshots.
The snapshots were deleted months ago, so only</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:kvm?rev=1672503948&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-12-31T16:25:48+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>KVM</title>
        <link>http://192.168.180.206:8001/vm:proxmox:kvm?rev=1672503948&amp;do=diff</link>
        <description>KVM

CPU model

QEMU / KVM CPU model configuration

AES

Enable AES in the CPU flags. The default KVM64 CPU doesn&#039;t expose the AES flag.
Simple openssl benchmark:


openssl speed -evp aes-128-cbc aes-256-cbc aes-256-ecb

# Without AES
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-256 cbc     185216.65k   190818.37k   191588.35k   193247.23k   193489.58k   193353.05k
aes-128-cbc     220375.57k   245515.09k   249103.70k   254411.43k   255770.62k   255393.79k


…</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:lxc_vs_vm?rev=1755930703&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-08-23T06:31:43+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>LXC vs VM</title>
        <link>http://192.168.180.206:8001/vm:proxmox:lxc_vs_vm?rev=1755930703&amp;do=diff</link>
        <description>LXC vs VM

Nowadays an unprivileged LXC container can run the Docker daemon smoothly. It is the best option for memory-constrained hosts. 

LXC Pros:

	*  light for host
	*  no VM overhead (we are using it as Gitlab runners with docker executors)
	*  device passthrough (possible to run e.g. Jellyfin or Frigate with a Coral accelerator, or pass a GPU through to offload decoding). Setup is a bit complicated but works.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:lxc?rev=1613978297&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-02-22T07:18:17+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>LXC</title>
        <link>http://192.168.180.206:8001/vm:proxmox:lxc?rev=1613978297&amp;do=diff</link>
        <description>LXC

rename CT

pct set &lt;VMID&gt; --hostname &lt;newname&gt;

update CT templates


# pveam - Proxmox VE Appliance Manager
pveam update


Shrink container disk

It is not supported. Command 

pct resize &lt;VMID&gt; rootfs &lt;newsize&gt;

 cannot be used.

Workaround 1:</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:network?rev=1664864237&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-10-04T06:17:17+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Network planning</title>
        <link>http://192.168.180.206:8001/vm:proxmox:network?rev=1664864237&amp;do=diff</link>
        <description>Network planning

	*  CEPH Network Configuration Reference
	*  Ceph network configuration
	*  &lt;https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/ceph_configuration_guide/network-configuration-reference&gt;
	*  &lt;https://ceph-users.ceph.narkive.com/wTDiWx2w/have-2-different-public-networks&gt;

Default IP for each node is defined in: /etc/pve/.members

Preferred network layout

	*  WAN interface (to give VMs access to Internet)
	*  10GbE for CEPH private 
	*  10GbE for Proxmo…</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:on_debian?rev=1600234781&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-09-16T05:39:41+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>On existing Debian</title>
        <link>http://192.168.180.206:8001/vm:proxmox:on_debian?rev=1600234781&amp;do=diff</link>
        <description>On existing Debian

Install Proxmox on an existing, ready Debian system. 
Benefits:

	*  unusual disk setup (e.g. not using ZFS for RAID, only generic MD or LVM RAID)
	*  for lazy people: a running system with remote access only

Setup repositories

Package Repositories
Detailed step-by-step instruction</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:pbs?rev=1743620570&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-04-02T19:02:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Proxmox Backup Server</title>
        <link>http://192.168.180.206:8001/vm:proxmox:pbs?rev=1743620570&amp;do=diff</link>
        <description>Proxmox Backup Server

aka: PBS

change owner


#!/bin/bash -eux

DSTDIR=&quot;${1}&quot;
OWNER=&quot;backup@pbs&quot;

find &quot;${DSTDIR}&quot; -name &quot;owner&quot; -exec sed -i &quot;s/^.*\$/${OWNER}/g&quot; {} \;


Internet access

Only HTTPS on port 8007 is needed. Use a VPN to secure the connection between peers.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:sdn?rev=1734724530&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-12-20T19:55:30+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>SDN</title>
        <link>http://192.168.180.206:8001/vm:proxmox:sdn?rev=1734724530&amp;do=diff</link>
        <description>SDN

Software Defined Network

	*  Zone - upper level:
		*  VMs are assigned to zones.
		*  user permissions are applied to zones
		*  zones are containers of VNets
		*  zone types:
			*  Simple: a simple bridge on a single Proxmox node - no communication across the cluster.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:shutdown?rev=1730281319&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-10-30T09:41:59+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Shutdown</title>
        <link>http://192.168.180.206:8001/vm:proxmox:shutdown?rev=1730281319&amp;do=diff</link>
        <description>Shutdown

Cluster shutdown

Official info:

	*  ceph server: document full cluster shutdown

	*  [pve-devel] [PATCH docs v3] pveceph: document cluster shutdown

previous/own solution

Stop all VMs/CTs.

Disable CEPH rebalance:


ceph osd set noout

# after power restore:
ceph osd unset noout


Stop HA services:


systemctl stop pve-ha-crm.service
systemctl stop pve-ha-lrm.service</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:storage?rev=1587913047&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-04-26T14:57:27+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Storage</title>
        <link>http://192.168.180.206:8001/vm:proxmox:storage?rev=1587913047&amp;do=diff</link>
        <description>Storage

Terms

	*  shared 
		*  do not set local storage as shared, because content on each node is different
		*  One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to VM disk images. There is no need to copy VM image data, so live migration is very fast in that case.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:use_host_monitor_and_keyboard?rev=1609677318&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-01-03T12:35:18+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Use host monitor and keyboard</title>
        <link>http://192.168.180.206:8001/vm:proxmox:use_host_monitor_and_keyboard?rev=1609677318&amp;do=diff</link>
        <description>Use host monitor and keyboard

to work with a guest.

The idea is to get all the benefits of Proxmox VE and also use the Proxmox host machine as a terminal to one of the guest systems.

&lt;https://www.reddit.com/r/Proxmox/comments/a968lh/gpu_passthrough_to_use_a_keyboard_monitor_and/&gt;</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/vm:proxmox:zfs?rev=1702475662&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-12-13T13:54:22+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Proxmox&#039;s ZFS</title>
        <link>http://192.168.180.206:8001/vm:proxmox:zfs?rev=1702475662&amp;do=diff</link>
        <description>Proxmox&#039;s ZFS

Since fall 2015 the default compression algorithm in ZoL has been LZ4, and since compression=on activates compression using the default algorithm, your pools are using LZ4 -&gt; &lt;http://open-zfs.org/wiki/Performance_tuning#Compression&gt;


# Check if LZ4 is active
zpool get feature@lz4_compress rpool</description>
    </item>
</rdf:RDF>
