====== Issues ======


===== run replication first =====

<code>
TASK ERROR: snapshot 'up' needed by replication job '122-2' - run replication first
</code>

The job was removed months ago but got stuck in the ''Removal Scheduled'' state.
It looks like a job must be enabled for its remote snapshots to be deleted.
The snapshots themselves were deleted months ago, so only ''--force'' will help:

<code bash>pvesr delete 122-1 --force 1</code>

Alternatively, remove the job from ''/etc/pve/replication.cfg''.
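
The list of jobs and their state can be inspected with the stock ''pvesr'' CLI (a quick check, useful before forcing anything):

<code bash>
# list replication jobs and show their current status
pvesr list
pvesr status
</code>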

===== Permission denied (os error 13) =====
During CT backup:
<code>
INFO: Error: unable to create backup group "/mnt/datastore/backup/ns/home/ns/not_important/ct/114" - Permission denied (os error 13)
</code>

**Solution:** fix the directory permissions on the Proxmox Backup Server so that the given user can create new directories.
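
For example (a sketch; the datastore path is taken from the error above, and PBS services normally run as the ''backup'' user):

<code bash>
# let the PBS 'backup' user own the datastore tree
chown -R backup:backup /mnt/datastore/backup
</code>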

===== Full clone feature is not supported for drive 'scsi2' (500) =====
Happens when trying to clone a machine from a snapshot.
Selecting ''current'' as the clone source works.

**REASON:**
  * It is not possible to clone a disk from a ZFS snapshot: a ZFS snapshot cannot be easily and safely mounted as a block device to serve as the clone source (see the workaround after this list).
  * [[https://forum.proxmox.com/threads/full-clone-feature-is-not-supported-for-drive-efidisk0-500.71511/|Full clone feature is not supported for drive 'efidisk0' (500)?]]
  * It works with LVM, Ceph, and QCOW2 images.
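
The workaround is to clone from the current state instead of a snapshot (a sketch; the VMIDs are placeholders):

<code bash>
# full clone of VM 122 into new VM 200 from its current state (no --snapname)
qm clone 122 200 --full 1
</code>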


===== Replication Job: 151-1 failed =====
<code>command 'zfs snapshot nvmpool/data/vm-151-disk-0@__replicate_151-1_1610641828__' failed: got timeout</code>

''The described problem normally occurs when the pool is under load and the snapshot has a lower priority.''
''Yes, the snapshot will be created, because it is in the ZFS low-priority queue.''
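
Once the load drops, the job can be retriggered manually with the stock ''pvesr'' CLI:

<code bash>
# re-run the failed replication job immediately instead of waiting for the schedule
pvesr schedule-now 151-1
</code>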

===== The current guest configuration does not support taking new snapshots =====

Happens with an LXC container created on ''local-zfs'' but with a mount point ''/var/lib/docker'' on ''local'' storage (a RAW file on disk).
It is because ''local'' (DIR type) doesn't support snapshots!
The solution is to move the mount point from ''local'' to ''RBD'' storage.
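
For example (a sketch; the container ID, mount-point name, and target storage name are placeholders):

<code bash>
# move mount point mp0 of CT 114 to RBD-backed storage and drop the old volume
pct move-volume 114 mp0 rbd-storage --delete 1
</code>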


===== TASK ERROR: VM is locked (clone) =====
<code bash>qm unlock 105</code>

===== No CIFS storage after a few days =====
See below.
  
===== TASK ERROR: start failed: org.freedesktop.systemd1.NoSuchUnit: Unit 107.scope not found. =====
    * **Reason:** tmpfs ''/var/run'' fills up because smbclient creates masses of temporary ''msg.lock'' files
    * Bugs:
        * [[https://www.reddit.com/r/Proxmox/comments/e9hhlu/tmpfs_run_fills_up_and_prevents_services_from/]]
        * Workaround proposal: [[https://bugzilla.proxmox.com/show_bug.cgi?id=2333|Bug 2333 - Samba eating up inodes (msg.lock)]]
  
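The inode exhaustion can be confirmed with a quick check (illustrative commands, not from the original log):

<code bash>
# tmpfs usually runs out of inodes long before it runs out of space
df -i /run
# count the stale samba lock files
find /var/run/samba/msg.lock -type f | wc -l
</code>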
  
The workaround is to clean the ''msg.lock'' directory from cron.

<file yaml samba-cleanup.yml>
- name: PVE cron | samba-cleanup
  become: yes
  cron:
    name: "Clean /var/run/samba/ hourly"
    user: root
    special_time: hourly
    job: "find /var/run/samba/msg.lock -type f -delete"
    state: present
</file>
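
Without Ansible, an equivalent entry can be placed in ''/etc/cron.d/'' (the file name is arbitrary):

<code bash>
# /etc/cron.d/samba-cleanup
0 * * * * root find /var/run/samba/msg.lock -type f -delete
</code>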
  
 ===== no CT network connectivity after migration ===== ===== no CT network connectivity after migration =====