Date:      Tue, 6 Apr 2021 11:53:00 -0700
From:      David Christensen <dpchrist@holgerdanske.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: advise or best practices with ZFS snapshot
Message-ID:  <52857d52-152e-335f-cb9a-97f342dc006b@holgerdanske.com>
In-Reply-To: <MX_GFO7--B-2@keemail.me>
References:  <MX_GFO7--B-2@keemail.me>

On 4/5/21 8:41 PM, sesquivels--- via freebsd-questions wrote:

> Geom name: ada1
> Providers:
> 1. Name: ada1
>     Mediasize: 128035676160 (119G)
>     Sectorsize: 512
>     Mode: r1w1e2
>     descr: MTFDDAK128MAM-1J1
>     lunid: 500a0751039d7440
>     ident: 1405039D7440
>     rotationrate: 0
>     fwsectors: 63
>     fwheads: 16


What is this drive used for?


> zpool status
>    pool: zroot
> state: ONLINE
>    scan: scrub repaired 0 in 0 days 01:43:28 with 0 errors on Thu Mar 18 01:57:21 2021
> config:
> 
>          NAME        STATE     READ WRITE CKSUM
>          zroot       ONLINE       0     0     0
>            raidz1-0  ONLINE       0     0     0
>              ada4p3  ONLINE       0     0     0
>              ada5p3  ONLINE       0     0     0
>              ada6p3  ONLINE       0     0     0
>              ada7p3  ONLINE       0     0     0
>            raidz1-2  ONLINE       0     0     0
>              ada0    ONLINE       0     0     0
>              ada2    ONLINE       0     0     0
>              ada3    ONLINE       0     0     0
>              ada8    ONLINE       0     0     0
>          logs
>            nvd0p2    ONLINE       0     0     0
>          cache
>            nvd0p1    ONLINE       0     0     0
> 
> errors: No known data errors


So, your pool is composed of two raidz1 vdevs of four 1 TB drives each, 
plus an NVMe partition for the log and an NVMe partition for the cache.


OS pools are named 'zroot' by the FreeBSD installer.  Data pools are 
conventionally named 'tank' (I use the letter 'p' followed by a unique 
number).  This article has some good ideas:

https://b3n.org/zfs-hierarchy/


Is the above your OS pool?


If your log device fails, you may lose data.  The recommended practice 
is to use a vdev with redundancy for the log (e.g. a 2 drive mirror; 
note that raidz is not supported for log vdevs).

https://docs.oracle.com/cd/E19253-01/819-5461/gazgw/index.html
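

For example, adding a mirrored log to an existing pool would look 
something like this (da1 and da2 are hypothetical spare devices):

# zpool add zroot log mirror da1 da2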


Using a partition on your NVMe device for the cache should help performance.
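

For reference, that cache vdev would have been added with a command 
along these lines (device name taken from your zpool status output):

# zpool add zroot cache nvd0p1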


 > About zfs administration, and snapshots, where can I find more
 > information?


I like the books by Lucas:

https://mwl.io/nonfiction/os#af3e

https://mwl.io/nonfiction/os#fmzfs

https://mwl.io/nonfiction/os#fmaz


 > https://docs.oracle.com/cd/E19253-01/819-5461/gbiqe/index.html

 > zfs list -t snapshot
 > NAME                 USED  AVAIL  REFER  MOUNTPOINT
 > zroot/Dom@03252021  32.6G      -   353G  -


My pools were created with early versions of FreeBSD 11-RELEASE and 
12-RELEASE, and do not have the 'listsnapshots' property.
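

On pools that do have it, setting the property makes a plain 'zfs list' 
include snapshots (using your pool name):

# zpool set listsnapshots=on zroot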


This is how I look at snapshots of a given filesystem:

2021-04-06 09:48:42 toor@f3 ~
# zfs list -t all -d 1 p3/ds2 | grep @ | tail
p3/ds2@manual-20210217-1553                  0      -    88K  -
p3/ds2@manual-20210226-2334                  0      -    88K  -
p3/ds2@manual-20210228-1807                  0      -    88K  -
p3/ds2@manual-20210301-1524                  0      -    88K  -
p3/ds2@manual-20210306-0009                  0      -    88K  -
p3/ds2@manual-20210314-1201                  0      -    88K  -
p3/ds2@manual-20210321-1355                  0      -    88K  -
p3/ds2@manual-20210328-1520                  0      -    88K  -
p3/ds2@manual-20210328-2306                  0      -    88K  -
p3/ds2@manual-20210404-1338                  0      -    88K  -


 > https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html

 > Doing a zfs send to a file like this, may I delete [the snapshot]
 > from my zpool and then restore it using zfs receive?

 > zfs send zroot/Dom@03252021 | gzip > /mnt/bkp/march2021.gz


Do you understand that the above command will not back up any child 
filesystems of zroot/Dom?  Nor any other snapshots?
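

For the record, the restore side reverses the pipeline (a sketch; the 
target dataset name is hypothetical and must not already exist):

# gzcat /mnt/bkp/march2021.gz | zfs receive zroot/Dom-restored

If you want the child filesystems and their snapshots included, send a 
replication stream instead:

# zfs send -R zroot/Dom@03252021 | gzip > /mnt/bkp/march2021-full.gz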


If and when you restore that filesystem, you may want to validate it. 
mtree(8) is one option:

https://www.techrepublic.com/blog/it-security/use-mtree-for-filesystem-integrity-auditing/
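

Here is a sketch of one round trip with mtree(8): record checksums 
before the backup, then verify after the restore (both mount points are 
hypothetical):

# mtree -c -K sha256digest -p /Dom > /mnt/bkp/Dom.mtree
# mtree -f /mnt/bkp/Dom.mtree -p /Dom-restored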


Once you figure out how to properly back up, restore, and validate your 
filesystem(s), you will need to think about backup redundancy and aging 
policies.  I would retain snapshots on the live system until it needed 
more space.  I would retain backup files on the backup device or system 
until it needed more space.
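

Pruning is then just a matter of destroying snapshots you no longer 
need, e.g. (a name taken from the listing above):

# zfs destroy p3/ds2@manual-20210217-1553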


Are you aware of zfs-auto-snapshot?

# pkg install zfstools


zfs-auto-snapshot is configured via crontab(1):

2021-04-06 10:30:36 toor@f3 ~
# crontab -l | grep ds2
  3 *  *  * * /usr/local/sbin/zfs-auto-snapshot -P p3/ds2  -k h 24
15 3  *  * 0 /usr/local/sbin/zfs-auto-snapshot -P p3/ds2  -k w  4
21 3  1  * * /usr/local/sbin/zfs-auto-snapshot -P p3/ds2  -k m 24
27 3  1  1 * /usr/local/sbin/zfs-auto-snapshot -P p3/ds2  -k y 99

and via ZFS properties:

2021-04-06 10:33:05 toor@f3 ~
# zfs get all p3 p3/ds2 | grep auto-snapshot
p3      com.sun:auto-snapshot  true                   local
p3/ds2  com.sun:auto-snapshot  true                   inherited from p3


I write scripts for repetitive operations to ensure accuracy and 
consistency.  My collection of scripts has grown and evolved as I have 
learned.
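

As a minimal example, here is a sketch of a script that takes a 
timestamped snapshot in my 'manual-' naming scheme (the dataset name is 
whatever you pass in):

#!/bin/sh
# snap.sh -- take a manual-YYYYMMDD-HHMM snapshot of the given dataset
set -eu
ds="${1:?usage: snap.sh <dataset>}"
snap="${ds}@manual-$(date +%Y%m%d-%H%M)"
zfs snapshot "${snap}"
echo "created ${snap}"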


ZFS may appear simple going in, but is non-trivial in practice.  I had 
more than a few painful accidents early on.  I suggest that you build a 
virtual machine, some pools, and some filesystems to learn and practice 
with.  Go through the various use-cases and make sure that you 
understand what is happening before you attempt them on live data.
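

You do not even need a VM for the basics; file-backed vdevs make a 
disposable practice pool (all names here are throwaway):

# truncate -s 1G /tmp/d0 /tmp/d1 /tmp/d2
# zpool create ptest raidz1 /tmp/d0 /tmp/d1 /tmp/d2
# zfs create ptest/fs1
# zpool destroy ptest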


David


