vm:proxmox:issues (last modified 2024/03/26 19:39 by niziak)

====== Issues ======

===== run replication first =====

<code>
TASK ERROR: snapshot 'up' needed by replication job '122-2' - run replication first
</code>

The job was removed previously (months ago) but is stuck in the ''Removal Scheduled'' state.
It looks like jobs must be enabled to be able to delete remote snapshots.
The snapshots were deleted months ago, so only ''force'' will help:

<code bash>pvesr delete 122-1 --force 1</code>

Or remove the job from the file ''/etc/pve/replication.cfg''.

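For reference, a job entry in ''/etc/pve/replication.cfg'' looks roughly like the sketch below (the target node name and schedule here are made-up examples); deleting the whole section removes the job:

<code>
local: 122-1
	target othernode
	schedule */15
	rate 50
</code>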
===== Permission denied (os error 13) =====
During CT backup:
<code>
INFO: Error: unable to create backup group "/mnt/datastore/backup/ns/home/ns/not_important/ct/114" - Permission denied (os error 13)
</code>

**Solution:** fix directory permissions on the Proxmox Backup Server so that the given user can create new directories.

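A possible fix on the PBS side, as a sketch: the datastore path is taken from the error message above, and ''backup'' is assumed to be the user the Proxmox Backup Server services run as (the default).

<code bash>
# Hand the datastore tree to the PBS service user so it can
# create new backup group directories:
chown -R backup:backup /mnt/datastore/backup
</code>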
===== Full clone feature is not supported for drive 'scsi2' (500) =====
Happens when trying to clone a machine from a snapshot.
Selecting the 'current' snapshot as the clone source works.

**REASON:**
  * It is not possible to clone a disk from a ZFS snapshot: there is no easy and safe way to mount a ZFS snapshot as a block device to use it as the clone source.
  * [[https://forum.proxmox.com/threads/full-clone-feature-is-not-supported-for-drive-efidisk0-500.71511/|Full clone feature is not supported for drive 'efidisk0' (500)?]]
  * It will work with: LVM, CEPH, QCOW2 images

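Workaround sketch (the VM id, new id and storage name below are examples, not from the original report): either full-clone the current state instead of the snapshot, or first move the disk to a storage type that supports cloning from snapshots.

<code bash>
# Clone the current state (works even when disks are on ZFS):
qm clone 105 205 --full 1

# Or move the offending disk to a storage that supports snapshot
# clones (e.g. LVM-thin, Ceph, or qcow2 on a directory store):
qm move_disk 105 scsi2 local-lvm
</code>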
===== Replication Job: 151-1 failed =====
<code>command 'zfs snapshot nvmpool/data/vm-151-disk-0@__replicate_151-1_1610641828__' failed: got timeout</code>

''This problem normally occurs if the pool is under load and the snapshot has a lower priority.''
''Yes, the snapshot will be created because it is in the ZFS low-priority queue.''

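To confirm the pool was simply busy, it can help to watch pool load and to check that the replication snapshot did appear later (pool name taken from the error above):

<code bash>
# Watch I/O load on the pool in 5-second intervals:
zpool iostat -v nvmpool 5

# List snapshots and look for the replication snapshot:
zfs list -t snapshot -o name | grep __replicate
</code>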
===== The current guest configuration does not support taking new snapshots =====

An LXC container was created on ''local-zfs'' with a mount point ''/var/lib/docker'' on ''local'' storage (RAW file on disk).
It happens because ''local'' (DIR type) doesn't support snapshots!
The solution is to move the mount point from ''local'' to ''RBD'' storage.

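A sketch of the fix (the container id, mount point id and target storage name below are examples):

<code bash>
# Move the container's mount point from 'local' (DIR, no snapshot
# support for raw files) to a snapshot-capable storage such as RBD:
pct move_volume 101 mp0 rbd-storage
</code>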
===== TASK ERROR: VM is locked (clone) =====
<code bash>qm unlock 105</code>

===== No CIFS storage after a few days =====
See below.

===== TASK ERROR: start failed: org.freedesktop.systemd1.NoSuchUnit: Unit 107.scope not found. =====
    * **Reason**: tmpfs ''/run'' is full because smbclient fills it with temporary files
    * Bugs:
        * [[https://www.reddit.com/r/Proxmox/comments/e9hhlu/tmpfs_run_fills_up_and_prevents_services_from/]]
        * Workaround proposal: [[https://bugzilla.proxmox.com/show_bug.cgi?id=2333|Bug 2333 - Samba eating up inodes (msg.lock)]]

<code>
--- /run/samba
                         /..
    2,8 GiB [##########] /msg.lock
  444,0 KiB [          ]  gencache_notrans.tdb
    4,0 KiB [          ]  names.tdb
</code>

The workaround is to clean the ''msg.lock'' directory from cron.

<file yaml samba-cleanup.yml>
- name: PVE | cron | samba-cleanup
  become: yes
  cron:
    name: "Clean /var/run/samba/ hourly"
    user: root
    special_time: hourly
    job: "find /var/run/samba/msg.lock -type f -delete"
    state: present
</file>
 +
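Before putting the job in cron, the cleanup command can be tried against a scratch directory; in the sketch below a ''mktemp'' directory stands in for ''/run/samba/msg.lock''.

<code bash>
# Create a scratch dir with some fake lock files:
dir=$(mktemp -d)
touch "$dir/msg-1.lck" "$dir/msg-2.lck"

# Same command the cron job runs, pointed at the scratch dir:
# it deletes the files but keeps the directory itself.
find "$dir" -type f -delete

ls -A "$dir"    # prints nothing: the directory is empty now
</code>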
  
===== no CT network connectivity after migration =====
 Kernel driver in use: r8169
 Kernel modules: r8169

[    0.962343] r8169 0000:02:00.0 eth0: RTL8168evl/8111evl, d4:3d:7e:4e:f8:de, XID 2c9, IRQ 30
[    0.963002] r8169 0000:02:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
[    0.970877] r8169 0000:02:00.0 enp2s0: renamed from eth0

</code>
  
 Kernel driver in use: r8169
 Kernel modules: r8169

[    1.172620] r8169 0000:22:00.0 eth0: RTL8168h/8111h, 00:d8:61:a6:46:b0, XID 541, IRQ 92
[    1.173050] r8169 0000:22:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
[    1.184003] r8169 0000:22:00.0 enp34s0: renamed from eth0

</code>