====== Issues ======

===== run replication first =====

<code>
TASK ERROR: snapshot 'up' needed by replication job '122-2' - run replication first
</code>

The job was removed months ago but got stuck in the ''Removal Scheduled'' state.
It looks like a job must be enabled for it to be able to delete the remote snapshots.
The snapshots were deleted months ago, so only ''force'' will help:

<code bash>pvesr delete 122-1 --force 1</code>

Or remove the jobs from the file ''/etc/pve/replication.cfg''.

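Before force-deleting, the stuck job and its state can be inspected with ''pvesr'' (a minimal sketch; the job ID is the one from the commands above):

<code bash>
# list all replication jobs and their current state on this node
pvesr status
# dump the raw definition of a single job
pvesr read 122-1
</code>
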
===== Permission denied (os error 13) =====
During CT backup:
<code>
INFO: Error: unable to create backup group "/mnt/datastore/backup/ns/home/ns/not_important/ct/114" - Permission denied (os error 13)
</code>

**Solution:** fix the directory permissions on the Proxmox Backup Server so that the given user can create new directories.

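For example, on the Proxmox Backup Server host (a sketch; the path is taken from the error above, and it assumes the datastore is served by the standard ''backup'' user):

<code bash>
# let the PBS backup user create backup group directories under the namespace
chown -R backup:backup /mnt/datastore/backup/ns
</code>
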
===== Full clone feature is not supported for drive 'scsi2' (500) =====
Occurs when trying to clone a machine from a snapshot.
Selecting the snapshot 'current' as the clone source works.

**REASON:**
  * It is not possible to clone a disk from a ZFS snapshot: there is no easy and safe way to mount a ZFS snapshot as a block device and use it as the clone source.
  * [[https://forum.proxmox.com/threads/full-clone-feature-is-not-supported-for-drive-efidisk0-500.71511/|Full clone feature is not supported for drive 'efidisk0' (500)?]]
  * It will work with: LVM, CEPH, QCOW2 images

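On the CLI the same workaround is to clone the current state and simply not pass a snapshot name (a sketch; the VM IDs are examples):

<code bash>
# full clone of the live state; omitting --snapname avoids cloning from a ZFS snapshot
qm clone 122 123 --full 1
</code>
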
===== Replication Job: 151-1 failed =====
<code>command 'zfs snapshot nvmpool/data/vm-151-disk-0@__replicate_151-1_1610641828__' failed: got timeout</code>

''this described problem normally occurs if the pool is under load and the snapshot has a lower priority.''
''Yes, the snapshot will be created because it is in the ZFS low priority queue.''

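To confirm that the pool is under load when the timeout happens (a sketch; the pool name comes from the error above):

<code bash>
# watch per-device I/O statistics of the pool in 5-second intervals
zpool iostat -v nvmpool 5
</code>
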
===== The current guest configuration does not support taking new snapshots =====

Happens with an LXC container created on ''local-zfs'' with a mount point ''/var/lib/docker'' placed on ''local'' storage (RAW file on disk).
It is because ''local'' (DIR type) doesn't support snapshots!
The solution is to move the mount point from ''local'' to ''RBD'' storage.

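A sketch of the move with ''pct'' (the CT ID, the mount point name ''mp0'' and the target storage name ''rbd-storage'' are examples):

<code bash>
# move the mount point volume to snapshot-capable storage
pct move_volume 114 mp0 rbd-storage
</code>
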
===== TASK ERROR: VM is locked (clone) =====
<code bash>qm unlock 105</code>

===== No CIFS storage after a few days =====
See below.

===== TASK ERROR: start failed: org.freedesktop.systemd1.NoSuchUnit: Unit 107.scope not found. =====
    * **Reason**: tmpfs ''/run'' (''/var/run'') is full because smbclient fills it with temporary files
    * Bugs:
        * [[https://www.reddit.com/r/Proxmox/comments/e9hhlu/tmpfs_run_fills_up_and_prevents_services_from/]]
        * Workaround proposal: [[https://bugzilla.proxmox.com/show_bug.cgi?id=2333|Bug 2333 - Samba eating up inodes (msg.lock)]]

<code>
--- /run/samba
                         /..
    2,8 GiB [##########] /msg.lock
  444,0 KiB [          ]  gencache_notrans.tdb
    4,0 KiB [          ]  names.tdb
</code>

The workaround is to clean the ''msg.lock'' directory from cron, e.g. with the Ansible task below.

<file yaml samba-cleanup.yml>
- name: PVE | cron | samba-cleanup
  become: yes
  cron:
    name: "Clean /var/run/samba/ hourly"
    user: root
    special_time: hourly
    job: "find /var/run/samba/msg.lock -type f -delete"
    state: present
</file>

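The same without Ansible, as a plain root crontab entry (a sketch):

<code bash>
# add via 'crontab -e' as root
@hourly find /var/run/samba/msg.lock -type f -delete
</code>
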
  
===== no CT network connectivity after migration =====

Network works only when the CT is running on the node where it was **created**.

  * Check switch MAC security config

==== Config 1 ====

Config generated by the Proxmox 6 Web UI.

The main network 192.168.64.0 and the VLAN 28 network work OK.
No connectivity from VMs/CTs connected to ''vmbr0''.

<file /etc/network/interfaces>
auto lo
iface lo inet loopback

iface enp34s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.65.123/21
        gateway 192.168.64.1
        bridge-ports enp34s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        network 192.168.64.0

auto admin
iface admin inet static
        address 192.168.28.232/22
        vlan-id 28
        vlan-raw-device enp34s0
#ADMIN vlan
</file>

==== Config 2 ====

Config copied from another Proxmox machine where everything works OK.
On this machine only the 192.168.64.0 network works; packets from VLAN 28 are dropped on ''vmbr0''.

<file /etc/network/interfaces>
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/

auto lo
iface lo inet loopback

iface enp34s0 inet manual

auto vmbr0.28
iface vmbr0.28 inet static
        address 192.168.28.232
        netmask 255.255.252.0
#        gateway 192.168.28.1

auto vmbr0
iface vmbr0 inet static
        address 192.168.65.123/21
        network 192.168.64.0
        gateway 192.168.64.1
        bridge-ports enp34s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
</file>

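To see where the tagged frames get lost, the bridge VLAN table and the raw traffic can be inspected (a sketch; interface names are taken from the configs above):

<code bash>
# show which VLAN IDs are allowed on each bridge port
bridge vlan show
# watch for tagged VLAN 28 frames arriving on the physical NIC
tcpdump -e -ni enp34s0 vlan 28
</code>
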
On the first system, where everything works well:
<code>
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
        Subsystem: Micro-Star International Co., Ltd. [MSI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
        Flags: bus master, fast devsel, latency 0, IRQ 17
        I/O ports at e000 [size=256]
        Memory at f7c04000 (64-bit, prefetchable) [size=4K]
        Memory at f7c00000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
        Kernel driver in use: r8169
        Kernel modules: r8169

[    0.962343] r8169 0000:02:00.0 eth0: RTL8168evl/8111evl, d4:3d:7e:4e:f8:de, XID 2c9, IRQ 30
[    0.963002] r8169 0000:02:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
[    0.970877] r8169 0000:02:00.0 enp2s0: renamed from eth0
</code>

On the second system, where there is a problem with the bridge and VLANs:

<code>
22:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
        Subsystem: Micro-Star International Co., Ltd. [MSI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
        Flags: bus master, fast devsel, latency 0, IRQ 35
        I/O ports at f000 [size=256]
        Memory at f7504000 (64-bit, non-prefetchable) [size=4K]
        Memory at f7500000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
        Capabilities: [170] Latency Tolerance Reporting
        Capabilities: [178] L1 PM Substates
        Kernel driver in use: r8169
        Kernel modules: r8169

[    1.172620] r8169 0000:22:00.0 eth0: RTL8168h/8111h, 00:d8:61:a6:46:b0, XID 541, IRQ 92
[    1.173050] r8169 0000:22:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
[    1.184003] r8169 0000:22:00.0 enp34s0: renamed from eth0
</code>