====== Storage ======

===== Terms =====

  * **shared** - marks a storage as available on all cluster nodes with the same content.
    * Do not set local storage as shared, because its content is different on each node.
    * One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to VM disk images. There is no need to copy VM image data, so live migration is very fast in that case.
  * **thin-provisioning** - allocates blocks only when they are written.
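
As an illustration of the live-migration benefit (a sketch - VM ID ''100'' and target ''node2'' are example values):

<code bash>
# with shared storage only the RAM state is transferred, no disk data is copied
qm migrate 100 node2 --online
</code>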
+ | |||
+ | ===== Content types ===== | ||
+ | |||
+ | ^ content types | ||
+ | | Disk image | ||
+ | | ISO image | ||
+ | | Container template | vztmpl | ||
+ | | VZDump backup file | backup | ||
+ | | Container | ||
+ | | | none | prevent using block device directly for VMs (to create LVM on top) | | | ||
+ | | Snippets | ||
+ | |||
+ | '' | ||
+ | |||
==== File level storage dir layout ====

  * images - (VM images) - ''images/<VMID>/''
    * raw, qcow2, vmdk
  * iso - (ISO images) - ''template/iso/''
  * vztmpl - (Container templates) - ''template/cache/''
  * backup - (Backup files) - ''dump/''
  * snippets - (Snippets) - ''snippets/''

===== Default storage for ZFS =====

  * **local**: file-level storage - you can upload ISO images and place backups there.
  * **local-zfs**: block-level (zvol) storage for VM disk images and container volumes.

Note: both reside on the same ZFS pool (''rpool'').

<file conf /etc/pve/storage.cfg>
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse 1
        content images,rootdir
</file>

===== Storage types =====
  * File level storage
    * pool types:
      * **directory** shared: **NO**
      * **glusterFS** shared: **YES**
      * **nfs** shared: **YES**
      * **cifs** shared: **YES**
    * features:
      * any POSIX compatible filesystem pointed to by path
      * no snapshots at the FS level: VMs with qcow2 images are still capable of snapshots
      * any content type:
        * virtual disk images, containers, templates, ISO images, backup files
  * Block level storage
    * ''iscsidirect''
      * content types: images
      * format: raw, shared: YES, no snapshots, no clones
    * ''iscsi''
      * content types: images, none
      * format: raw, shared: YES, no snapshots, no clones
    * ''lvm''
      * possible to create on top of an iSCSI LUN to get manageable disk space (see the sketch after this list)
      * content types: images, rootdir
      * format: raw, shared: YES (iSCSI), no snapshots, no clones
    * ''lvmthin''
      * new thin volume type on top of an existing LVM VG
      * thin-provisioning
      * content types: images, rootdir
      * format: raw, shared: NO, snapshots, clones
    * ''zfs'' (ZFS over iSCSI)
      * benefits of ZFS:
        * for VMs: zfs volume per VM, live snapshots, cloning
        * thin provisioning
    * ''zfspool''
      * local node ZFS
      * content types: images, rootdir
      * format: raw, subvol; shared: NO, snapshots YES, clones YES
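
A minimal sketch of the ''lvm''-on-iSCSI combination (portal address taken from this page's NAS326 example; storage and VG names are placeholders):

<code bash>
# log in to the iSCSI target (find the IQN via discovery)
iscsiadm -m discovery -t sendtargets -p 192.168.28.150
iscsiadm -m node --login

# create an LVM volume group on the exported LUN
# (/dev/sdX is a placeholder - look for the new disk in /proc/partitions)
pvcreate /dev/sdX
vgcreate vg-nas /dev/sdX

# register it in Proxmox, marked shared so HA and live migration work
pvesm add lvm lvm-nas --vgname vg-nas --content images,rootdir --shared 1
</code>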
+ | |||
+ | |||
+ | ===== poll type ===== | ||
  * Network storage
    * LVM Group on iSCSI
  * ZFS

===== Local storage =====

==== storage for ISOs ====

By default ISO images are stored under ''/var/lib/vz/template/iso'' (the ''local'' storage).

===== iSCSI =====

Proxmox doc recommends:

iSCSI is a block level type storage, and provides no management interface. So it is usually best to export one big LUN, and setup LVM on top of that LUN. You can then use the LVM plugin to manage the storage on that iSCSI LUN.

iSCSI target/portal storage supports the content types:
  * images
  * none

Use LUNs directly - use a LUN directly as a VM disk without putting an LVM volume on it.

==== NAS326 CHAP issue ====

NAS326 requires CHAP authentication and an initiator user name.
There are 2 options to use NAS326:
  * disable CHAP on NAS326
  * enable CHAP on Proxmox

The Proxmox initiator name can be found in the file ''/etc/iscsi/initiatorname.iscsi''.

=== disable CHAP on NAS326 ===

  * create the LUN(s) and target via the webgui
  * login to your zyxel via ssh as root
  * targetcli (this will open a shell where you can manage iscsi, use tab completion to get around in it)
  * ls (to get an overview)
  * cd /iscsi/iqn... (use tab completion to reach the target's tpg1)
  * set attribute authentication=0
  * set attribute generate_node_acls=1
  * set attribute demo_mode_write_protect=0
  * I also deleted the ACLs by doing: cd acls, delete iqn... (use tab completion)
  * exit
  * targetcli saveconfig (normally if you exit targetcli, it will autosave, so this is just in case)

=== use CHAP on Proxmox ===

Logout and remove all failed trials to connect to NAS326.
Especially if IPv6 was enabled on NAS326, proxmox detects two send_targets: one via IPv4 and one via IPv6.
After disabling IPv6 on NAS326, please delete the IPv6 target portal:
<code bash>
targetcli ls
# remove the IPv6 portal from the target portal group
# (the /iscsi/<target-iqn>/tpg1 path is a placeholder - use tab completion)
targetcli "/iscsi/<target-iqn>/tpg1/portals" ls
targetcli "/iscsi/<target-iqn>/tpg1/portals" delete ::0 3260
targetcli saveconfig
</code>
- | |||
- | <code bash> | ||
- | ls / | ||
- | |||
- | # logout | ||
- | iscsiadm -m node -u -T " | ||
- | iscsiadm -m node -u -T " | ||
- | # remove | ||
- | iscsiadm -m node -o delete -T " | ||
- | iscsiadm -m node -o delete -T " | ||
- | </ | ||
- | |||
Uncomment and set the following config lines:
<file conf /etc/iscsi/iscsid.conf>
node.session.auth.authmethod = CHAP
# get initiator name from /etc/iscsi/initiatorname.iscsi
node.session.auth.username = iqn.1993-08.org.debian:01:...
node.session.auth.password = my_chap_password_for_NAS326
</file>
Now discovery should return only one IPv4 target:
<code bash>
# iscsiadm -m discovery -t sendtargets -p 192.168.28.150
192.168.28.150:3260,1 iqn.2020-04.com.zyxel:...

# list config options
iscsiadm -m node -o show

# login
# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2020-04.com.zyxel:..., portal: 192.168.28.150,3260]
Login to [iface: default, target: iqn.2020-04.com.zyxel:..., portal: 192.168.28.150,3260] successful.

# check new block device
cat /proc/partitions
</code>

iSCSI+LVM supports HA and Live Migration of VMs --> mark LVM storage as shared
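
For example (storage name ''lvm-nas'' is a placeholder):

<code bash>
pvesm set lvm-nas --shared 1
</code>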