Date:      Tue, 7 Sep 2021 18:28:28 -0700
From:      David Christensen <dpchrist@holgerdanske.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: zfs newbie
Message-ID:  <8c1c61d2-2b55-ae46-3304-9bfdcd6bd2d1@holgerdanske.com>
In-Reply-To: <alpine.BSF.2.00.2109071816090.65542@bucksport.safeport.com>
References:  <alpine.BSF.2.00.2109071816090.65542@bucksport.safeport.com>

On 9/7/21 3:17 PM, Doug Denault wrote:
> 
> Following the default 12.2 zfs install I got one pool (zroot) and a 
> dataset for each of the traditional mount points. So zfs list shows:
> 
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> zroot                279G  6.75T    88K  /zroot
> zroot/ROOT          1.74G  6.75T    88K  none
> zroot/ROOT/default  1.74G  6.75T  1.74G  /
> zroot/tmp            176K  6.75T   176K  /tmp
> zroot/usr            277G  6.75T    88K  /usr
> zroot/usr/home       276G  6.75T   276G  /usr/home
> zroot/usr/ports       88K  6.75T    88K  /usr/ports
> zroot/usr/src        670M  6.75T   670M  /usr/src
> zroot/var           47.5M  6.75T    88K  /var
> zroot/var/audit       88K  6.75T    88K  /var/audit
> zroot/var/crash       88K  6.75T    88K  /var/crash
> zroot/var/log        820K  6.75T   820K  /var/log
> zroot/var/mail      46.3M  6.75T  46.3M  /var/mail
> zroot/var/tmp         88K  6.75T    88K  /var/tmp
> 
> I had a consultant configure another server for us. He set up the disk 
> array with one dataset, so zfs list on this system gives:
> 
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> zroot  2.65G  13.2T  2.62G  legacy
> 
> From a sysadmin view I rather like the multiple datasets. Are there 
> advantages to one over the other?


I have a SOHO LAN with one primary FreeBSD 12.2 server (CVS and Samba) 
and various Windows, macOS, iOS, and Debian clients.


As another reader mentioned, you can set ZFS properties differently on 
different datasets.
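
For instance, a quick sketch using standard ZFS properties and the 
dataset names from your listing:

     # Compress the ports tree; skip atime updates on sources:
     zfs set compression=lz4 zroot/usr/ports
     zfs set atime=off zroot/usr/src

Children inherit properties unless they set local overrides, so a 
parent dataset can serve as a policy point for its whole subtree.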


You can also apply different disaster preparedness/recovery policies to 
different datasets -- e.g. snapshots and replication.
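
As a concrete sketch (the host "backuphost" and the target pool 
"backup" are placeholders; the snapshot name is arbitrary):

     # Snapshot a dataset, then replicate it to another pool:
     zfs snapshot zroot/usr/home@2021-09-07
     zfs send zroot/usr/home@2021-09-07 | \
         ssh backuphost zfs receive -u backup/home

The -u flag tells zfs receive not to mount the replicated dataset on 
the destination.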


However, more datasets means more work and more complexity.  Not all of 
the standard ZFS CLI tools work recursively on nested datasets.  For 
example, how do you make a tree of 10 nested datasets read-only with 
one shell command?  Or make them read-write?  Or replicate them to 
another pool?  Or do today's backup replication job when datasets have 
been added, removed, and/or renamed since yesterday's?  Or selectively 
destroy old snapshots?  Performing these use cases by hand is tedious 
and error-prone.  Automating them is non-trivial.  I would estimate the 
system administration complexity of nested ZFS datasets as O(N*log(N)).
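
A rough sketch of the brute-force answer to the read-only question 
(property inheritance covers much of this, but not children that carry 
local overrides; dataset name from your listing):

     # Walk a dataset tree and mark every dataset read-only:
     zfs list -r -H -o name zroot/usr | while read ds; do
         zfs set readonly=on "$ds"
     done

Every dataset added, removed, or renamed since the script was written 
is another case it has to handle correctly.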


But, my primary comment on your ZFS listings is that you put root on a 
6.75T pool (!) and your consultant put root on a 13.2T pool (!).  It is 
my practice to keep my OS instances small enough to fit onto a single 
"16 GB" device, and to put my data on RAID in a file server.  This 
allows me to quickly, easily, and reliably take and restore raw binary 
images of the OS devices.  How are you going to back up and restore 
your OS images?
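
For what it is worth, my approach amounts to something like this (the 
device name da0 and the destination path are assumptions; boot from 
rescue media first so the image is consistent):

     # Take a raw binary image of a small OS device:
     dd if=/dev/da0 of=/backup/os-image.dd bs=1m conv=sync,noerror
     # Restoring is the same command with if= and of= swapped.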


David


