<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://192.168.180.206:8001/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://192.168.180.206:8001/feed.php">
        <title>wiki.niziak.spox.org - linux:fs:zfs</title>
        <description></description>
        <link>http://192.168.180.206:8001/</link>
        <image rdf:resource="http://192.168.180.206:8001/_media/wiki:dokuwiki.svg" />
        <dc:date>2026-05-12T23:30:39+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:auto_snapshots?rev=1701788606&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:compression?rev=1770802151&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:copies?rev=1744193391&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:create?rev=1773210144&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:dedup?rev=1625482731&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:draid?rev=1754199560&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:encryption?rev=1637218661&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:monitoring?rev=1615878020&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:mountpoints?rev=1743966503&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:move_to_zfs?rev=1621001212&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:pool_features?rev=1773987847&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:raidz?rev=1683024835&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:replace?rev=1675066215&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:shrink?rev=1708066157&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:swap?rev=1634027428&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:tuning?rev=1776195783&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:zpool_rename?rev=1736246085&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:zpool_upgrade_features?rev=1755012872&amp;do=diff"/>
                <rdf:li rdf:resource="http://192.168.180.206:8001/linux:fs:zfs:zpool_upgrade?rev=1618752409&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://192.168.180.206:8001/_media/wiki:dokuwiki.svg">
        <title>wiki.niziak.spox.org</title>
        <link>http://192.168.180.206:8001/</link>
        <url>http://192.168.180.206:8001/_media/wiki:dokuwiki.svg</url>
    </image>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:auto_snapshots?rev=1701788606&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-12-05T15:03:26+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS snapshots</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:auto_snapshots?rev=1701788606&amp;do=diff</link>
        <description>ZFS snapshots


zfs list -t snapshot


get space used by snapshots


zfs list -o space


destroy snapshots

Destroy all snapshots whose names contain 2022:


zfs list -H -t snapshot -o name | grep 2022 | xargs -n1 zfs destroy
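The pipeline above destroys snapshots immediately. A safer sketch first prints the commands for review (select_snapshots is a hypothetical helper name, not part of ZFS):

```shell
# select_snapshots: filter a newline-separated snapshot list (stdin) by a grep
# pattern, printing one "zfs destroy" command per match for review before running.
select_snapshots() {
  grep -- "$1" | xargs -r -n1 echo zfs destroy
}
# Usage: zfs list -H -t snapshot -o name | select_snapshots 2022
# Review the output, then pipe it to sh to actually destroy.
```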


Tools

zfs-auto-snapshot

Automatically create, rotate, and destroy periodic ZFS snapshots.</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:compression?rev=1770802151&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-02-11T09:29:11+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS compression</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:compression?rev=1770802151&amp;do=diff</link>
        <description>ZFS compression


zfs get compression
zfs get compression /rpool

zfs get compressratio 
zfs get compressratio /rpool

zfs get compression,compressratio,used,logicalused,referenced,logicalreferenced rpool/data/subvol-118-disk-0
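As a small illustration of what compressratio reports, a sketch converting the ratio into percent space saved (savings_pct is a hypothetical helper, not a ZFS command):

```shell
# savings_pct: turn a ZFS compressratio value such as "1.85x" into the percent
# of space saved, i.e. (1 - 1/ratio) * 100.
savings_pct() {
  ratio="${1%x}"   # strip the trailing "x" that zfs get prints
  awk -v r="$ratio" 'BEGIN { printf "%.0f", (1 - 1 / r) * 100 }'
}
# Usage: savings_pct "$(zfs get -H -o value compressratio rpool)"
```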


switch to ZSTD

From OpenZFS 2.0.0 there is support for ZSTD:</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:copies?rev=1744193391&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-04-09T10:09:51+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>copies=2</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:copies?rev=1744193391&amp;do=diff</link>
        <description>copies=2

On a single HDD it is possible to write each block of data twice.
It can protect against:

	*  bitrot
	*  bad sector


zfs set copies=2 hddpool
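The cost is capacity: every block is stored twice. A sketch of the rough arithmetic (usable_gb is a hypothetical helper; it ignores metadata overhead):

```shell
# usable_gb: with copies=N every block is stored N times, so usable capacity is
# roughly the raw capacity divided by N (metadata overhead not counted).
usable_gb() {
  awk -v raw="$1" -v copies="$2" 'BEGIN { printf "%.0f", raw / copies }'
}
# Usage: usable_gb 1000 2   # 1000 GB raw disk with copies=2
```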


Storing Multiple Copies of ZFS User Data</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:create?rev=1773210144&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-03-11T06:22:24+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS create</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:create?rev=1773210144&amp;do=diff</link>
        <description>ZFS create

&lt;https://wiki.archlinux.org/title/ZFS&gt;


zpool create hddpool /dev/disk/by-id/ata-...

zfs set aclinherit=passthrough hddpool
zfs set acltype=posixacl hddpool
zfs set xattr=sa hddpool


zfs set canmount=off hddpool

zfs create hddpool/backup

zfs set relatime=on hddpool/backup
zfs set mountpoint=/mnt/backup hddpool/backup

zfs create -o canmount=noauto -o mountpoint=/mnt/backup hddpool/backup</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:dedup?rev=1625482731&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-07-05T10:58:51+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS Deduplication</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:dedup?rev=1625482731&amp;do=diff</link>
        <description>ZFS Deduplication

For deduplication, it is recommended to have an L2ARC cache of 2-5 GB per 1 TB of disk.

	*  For every TB of pool data, you should expect 5 GB of dedup table data, assuming an average block size of 64K.
	*  This means you should plan for at least 20GB of system RAM per TB of pool data, if you want to keep the dedup table in RAM, plus any extra memory for other metadata, plus an extra</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:draid?rev=1754199560&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-08-03T05:39:20+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>dRAID</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:draid?rev=1754199560&amp;do=diff</link>
        <description>dRAID

declustered RAID

From ZFS RAID Level Considerations



In a ZFS dRAID (declustered RAID) the hot spare drive(s) participate in the RAID. Their spare capacity is reserved and used for 
rebuilding when one drive fails. This provides, depending on the configuration, faster rebuilding compared to a RAIDZ in 
case of drive failure. More information can be found in the official OpenZFS documentation. [1]
Note: dRAID is intended for more than 10-15 disks in a dRAID. A RAIDZ setup should be bett…</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:encryption?rev=1637218661&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-11-18T06:57:41+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS Encryption</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:encryption?rev=1637218661&amp;do=diff</link>
        <description>ZFS Encryption

OpenZFS can encrypt a single dataset. All of its children will share the same encryption settings.


zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase rpool/data_enc


Unlock:


zfs load-key -r rpool/data_enc</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:monitoring?rev=1615878020&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-03-16T07:00:20+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS monitoring</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:monitoring?rev=1615878020&amp;do=diff</link>
        <description>ZFS monitoring

Munin plugins

&lt;https://gallery.munin-monitoring.org/keywords/zfs/&gt;

	*  zfs_list
		*  Author: Adam Michel (elfurbe@furbism.com)
		*  It generates lots of charts, one per dataset. It shows:
			*  used by dataset
			*  used by snapshots
			*  used by children
			*  used by ref reservation</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:mountpoints?rev=1743966503&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-04-06T19:08:23+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Mountpoints</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:mountpoints?rev=1743966503&amp;do=diff</link>
        <description>Mountpoints

change mountpoint


zfs set mountpoint=none backupool
zfs set mountpoint=none backupool/BACKUP
zfs set mountpoint=/BACKUP backupool/BACKUP

# disable auto mount
zfs set canmount=noauto pool/dataset
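Before changing mountpoints, the value can be sanity-checked; a minimal sketch, assuming the property rules from zfsprops (valid_mountpoint is a hypothetical helper):

```shell
# valid_mountpoint: the mountpoint property accepts an absolute path, "none",
# or "legacy"; this guards a value before passing it to zfs set.
valid_mountpoint() {
  case "$1" in
    none|legacy|/*) return 0 ;;
    *) return 1 ;;
  esac
}
# Usage: valid_mountpoint /BACKUP
```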



allow user to mount datasets


sudo zfs allow -u USER mount,mountpoint pool/path/to/prj</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:move_to_zfs?rev=1621001212&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-05-14T14:06:52+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Move root to ZFS</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:move_to_zfs?rev=1621001212&amp;do=diff</link>
        <description>Move root to ZFS

Based on  Debian Buster Root on ZFS

Additional references:

	*  &lt;https://www.tomica.net/blog/2019/02/moving-ubuntu-to-root-on-zfs/&gt;
	*  &lt;https://blog.heckel.io/2016/12/31/move-existing-linux-install-zfs-root/&gt;
	*  ZFS - Debian Wiki

Prepare system


apt install --yes zfs-initramfs zfs-dkms


Prepare new disk


DISK=/dev/disk/by-id/scsi-SATA_disk1
sgdisk --zap-all $DISK
sgdisk -a8 -n1:24K:+1000K -t1:EF02 $DISK  # grub bios
sgdisk     -n2:1M:+256M   -t2:EF00 $DISK  # EFI
sgdisk …</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:pool_features?rev=1773987847&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-03-20T06:24:07+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>zpool features</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:pool_features?rev=1773987847&amp;do=diff</link>
        <description>zpool features

zpool-features.7

	*  blake3 - This feature enables the use of the BLAKE3 hash algorithm for checksum and dedup
	*  block_cloning - Block cloning allows creating multiple references to a single block. Needed by cp --reflink and other commands requiring bclone support.
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:raidz?rev=1683024835&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-05-02T10:53:55+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>RAIDZ</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:raidz?rev=1683024835&amp;do=diff</link>
        <description>RAIDZ

	*  RAIDZ-1 A variation on RAID-5, single parity. Requires at least 3 disks.
	*  RAIDZ-2 A variation on RAID-5, double parity. Requires at least 4 disks.
	*  RAIDZ-3 A variation on RAID-5, triple parity. Requires at least 5 disks. 

Space efficiency

Adding more HDDs</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:replace?rev=1675066215&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-01-30T08:10:15+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS disk replace</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:replace?rev=1675066215&amp;do=diff</link>
        <description>ZFS disk replace

faulty special device

Consider hddpool with special device as mirrors on 2 SSD partitions:


NAME                                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hddpool                                          1.83T   294G  1.55T        -         -    22%    15%  1.00x  DEGRADED  -
  mirror-0                                       1.81T   291G  1.53T        -         -    22%  15.7%      -    ONLINE
    ata-TOSHIBA_HDW…</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:shrink?rev=1708066157&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T06:49:17+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS: resize zpool</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:shrink?rev=1708066157&amp;do=diff</link>
        <description>ZFS: resize zpool

extend pool

Get device name used in pool:


zpool status nvmpool



# resize /dev/nvme0n1p3
parted /dev/nvme0n1



resizepart 3
End ? [X.XGB]?
quit



zpool online -e nvmpool nvme0n1p3


ZFS: shrink zpool

Shrinking a zpool is not possible, but a trick with a 2nd device (or even a file) works:
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:swap?rev=1634027428&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-10-12T08:30:28+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>swap on ZFS</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:swap?rev=1634027428&amp;do=diff</link>
        <description>swap on ZFS


zfs create -V 8G -b $(getconf PAGESIZE) -o compression=zle \
      -o logbias=throughput -o sync=always \
      -o primarycache=metadata -o secondarycache=none \
      -o com.sun:auto-snapshot=false rpool/swap

mkswap -f /dev/zvol/rpool/swap</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:tuning?rev=1776195783&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-04-14T19:43:03+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ZFS performance tuning tips</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:tuning?rev=1776195783&amp;do=diff</link>
        <description>ZFS performance tuning tips

Copy-paste snippet:


zfs set recordsize=1M rpool
zfs set recordsize=16M hddpool
zfs set recordsize=1M nvmpool
zfs set compression=zstd rpool
zfs set compression=zstd hddpool
zfs set compression=zstd nvmpool
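A recordsize value can be sanity-checked before setting it; a minimal sketch (valid_recordsize is a hypothetical helper; 16M records also need large-block support in the running ZFS):

```shell
# valid_recordsize: recordsize must be a power of two from 512 bytes up to 16M.
valid_recordsize() {
  awk -v n="$1" 'BEGIN {
    if (int(n / 512) == 0) exit 1            # smaller than 512 bytes
    if (int((n - 1) / 16777216) != 0) exit 1 # larger than 16M
    while (n % 2 == 0) n /= 2                # strip factors of two
    if (n == 1) exit 0                       # a power of two reduces to 1
    exit 1
  }'
}
# Usage: valid_recordsize 1048576   # 1M, as set on rpool above
```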


Note: zstd means</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:zpool_rename?rev=1736246085&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-01-07T10:34:45+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>zpool rename</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:zpool_rename?rev=1736246085&amp;do=diff</link>
        <description>zpool rename

Not possible directly, but:


zpool export nvmpool
zpool import nvmpool oldnvmpool -N -R /oldnvmpool</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:zpool_upgrade_features?rev=1755012872&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-08-12T15:34:32+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>zpool upgrade features</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:zpool_upgrade_features?rev=1755012872&amp;do=diff</link>
        <description>zpool upgrade features

ZFS 2.3.3:

	*  redaction_list_spill
	*  raidz_expansion
	*  fast_dedup (still 1.25GiB RAM per 1TiB)
	*  longname
	*  large_microzap</description>
    </item>
    <item rdf:about="http://192.168.180.206:8001/linux:fs:zfs:zpool_upgrade?rev=1618752409&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-04-18T13:26:49+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>zpool upgrade</title>
        <link>http://192.168.180.206:8001/linux:fs:zfs:zpool_upgrade?rev=1618752409&amp;do=diff</link>
        <description>zpool upgrade

Do not blindly upgrade the boot pool when GRUB is used. GRUB supports only a limited set of zpool features!
Alternatively, use systemd-boot and an EFI partition.
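On OpenZFS 2.1 and later, the pool compatibility property can restrict which features zpool upgrade enables; a sketch for reviewing the GRUB-safe set first (grub2_features and the bpool name are assumptions, not from this page):

```shell
# grub2_features: print the feature names from an OpenZFS compatibility list
# (OpenZFS 2.1+ ships lists under /usr/share/zfs/compatibility.d/), skipping
# comments and blank lines, so the allowed set can be reviewed.
grub2_features() {
  grep -v '^#' "$1" | grep -v '^$'
}
# Usage:
#   grub2_features /usr/share/zfs/compatibility.d/grub2
#   zpool set compatibility=grub2 bpool   # hypothetical boot pool name
#   zpool upgrade bpool                   # enables only GRUB-readable features
```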


~# zpool status nvmpool

status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using &#039;zpool upgrade&#039;. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.</description>
    </item>
</rdf:RDF>
