Date:      Sun, 3 Oct 2021 10:20:37 +0100
From:      Steve O'Hara-Smith <steve@sohara.org>
To:        David Christensen <dpchrist@holgerdanske.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: zfs q regarding backup strategy
Message-ID:  <20211003102037.7da16f00d478fc1e8d15c443@sohara.org>
In-Reply-To: <f5bd4ea5-1bcd-9dc7-6898-9c6f236679ae@holgerdanske.com>
References:  <YVZM1HnPuwIUQpah@ceres.zyxst.net> <ba54a415-da45-e662-73fe-65702c4131e2@holgerdanske.com> <YVcXsF5NFq2abE+7@ceres.zyxst.net> <20211001222816.a36e9acbd4e8829aed3afb68@sohara.org> <809e4f3b-9e59-eb53-5b7d-0bcf7e401cd5@holgerdanske.com> <20211002115440.85c4342a49fe6e4573c37dd0@sohara.org> <daf9ba49-82a3-670c-f59c-745e0c315f18@holgerdanske.com> <20211002205504.9d81ee94caa231ee9b008d6a@sohara.org> <69954781-9e3c-ba96-5f1e-9b4043ecf56c@holgerdanske.com> <20211003063353.7415d12917f0e514d433ae1c@sohara.org> <f5bd4ea5-1bcd-9dc7-6898-9c6f236679ae@holgerdanske.com>

On Sun, 3 Oct 2021 01:36:51 -0700
David Christensen <dpchrist@holgerdanske.com> wrote:

> The idea was to do redundancy (mirror, raidzN), caching, etc, once at 
> the bottom level, rather than multiple times (once for each 
> archive-source pool).  But if it is not possible to build second-level 
> ZFS pools on top of ZFS volumes on top of a first-level ZFS pool, then 
> GPT partitions and doing it the hard way should work.  But first, I 
> would want to research GEOM and see if it can do RAID (I suspect the 
> answer is yes).

	GEOM can do RAID, but I'd rather leave redundancy to ZFS; it does a
better job of it. I think the easy option is to pick a partition size, a
stripe width and a RAIDZ policy (this is an archive, so favour space
efficiency over performance) and prepare a script that just adds a standard
vdev to the pool from the stock of unused partitions.
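
	A minimal sketch of such a script, assuming six-wide raidz2 vdevs,
a pool called "archive" and spare partitions labelled arc-* under /dev/gpt
(the pool name, width and label scheme are all made up for illustration):

    #!/bin/sh
    # add_vdev.sh - grow the archive pool by one standard raidz2 vdev
    # built from spare GPT partitions (names here are hypothetical).
    POOL=archive
    WIDTH=6

    # Partitions already in a pool show up in "zpool status"; skip those.
    unused=""
    count=0
    for p in /dev/gpt/arc-*; do
        [ -e "$p" ] || continue
        dev="gpt/${p##*/}"
        zpool status | grep -q "$dev" && continue
        unused="$unused $dev"
        count=$((count + 1))
        [ "$count" -eq "$WIDTH" ] && break
    done

    if [ "$count" -lt "$WIDTH" ]; then
        echo "only $count unused partitions, need $WIDTH" >&2
        exit 1
    fi

    # Add one standard raidz2 vdev to the pool.
    zpool add "$POOL" raidz2 $unused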

> Yes.  Figuring out where to put this, and the other settings/ data/ 
> logs/ whatever, will be important to usability and to failure survival/ 
> recovery.

	The easy option is to have the archive server boot from UFS so that
it can do things before ZFS starts up. A pair of small SSDs in a mirror
should do the job well enough.
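
	A rough sketch of that mirror, assuming the SSDs are ada0 and ada1
and each already carries a GPT scheme with the UFS root as partition 2
(device names and layout are purely illustrative):

    # Mirror the root partitions of the two SSDs with gmirror.
    gmirror label -v root ada0p2 ada1p2
    echo 'geom_mirror_load="YES"' >> /boot/loader.conf
    newfs -U /dev/mirror/root
    echo '/dev/mirror/root / ufs rw 1 1' >> /etc/fstab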

> I suppose the 'zfs receive -u' is overkill if 'altroot' is set properly 
> on the pool, but I am not averse to another layer of safety when doing 
> sysadmin scripting.  I also prefer having explicit control over if/when 
> the replica is mounted.

	Fair enough.
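
	For what it's worth, a sketch of that combination (pool, dataset
and host names are illustrative): import the archive pool under an altroot
and receive without mounting:

    # Import the backup pool with an alternate root so nothing lands in /.
    zpool import -R /archive backuppool

    # Pull a full replication stream from the source; -u leaves the
    # received datasets unmounted until the admin decides otherwise.
    ssh source zfs send -R tank/data@2021-10-03 | \
        zfs receive -u -d backuppool/replicas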

> Most of the prior ideas are for the first full replication job of each 
> dataset.  More research/ testing/ thinking is needed for ongoing 
> incremental replication jobs.

	A minimal condition (one that is skipped all too often): *verify*
with the destination that the last increment was properly handled.
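
	Something along these lines, perhaps (dataset and host names are
illustrative): check that the destination's newest snapshot is the one the
next increment starts from, before sending it:

    # FROM/TO are the bounds of the increment about to be sent.
    FROM=tank/data@2021-10-02
    TO=tank/data@2021-10-03

    # Newest snapshot on the destination dataset.
    last=$(ssh archive zfs list -H -t snapshot -o name -S creation \
           -d 1 backuppool/replicas/data | head -n 1)

    case "$last" in
        *@"${FROM#*@}") ;;   # destination is where we expect it to be
        *) echo "destination out of step: $last" >&2; exit 1 ;;
    esac

    zfs send -i "$FROM" "$TO" | \
        ssh archive zfs receive -u backuppool/replicas/data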

	Yeah, there are interesting failure modes - even before you try to
allow for ham-fisted sysadmins doing stupid things.

> Yes -- that and probably a dozen more use-cases/ features to get to a 
> minimal, fully-automatic implementation.

	Yep, not least being "stop dead and shout loudly when it's too
screwed to proceed". Many manage the first bit and fail the second, leaving
the lazy admin in blissful ignorance until they need the data.

> Do you have any idea if and what hooks are available during system boot 
> and ZFS setup?

	The easiest thing is to use the rc script dependency mechanism to
do things before zfs gets going.
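
	A skeletal example of that, assuming a script dropped into
/etc/rc.d (so rcorder sees it in the same pass as the base scripts); the
name and what it actually does are placeholders:

    #!/bin/sh
    # PROVIDE: archiveprep
    # REQUIRE: mountcritlocal
    # BEFORE: zfs

    . /etc/rc.subr

    name="archiveprep"
    rcvar="archiveprep_enable"
    start_cmd="archiveprep_start"
    stop_cmd=":"

    archiveprep_start()
    {
        # Whatever needs doing before the pools come up, e.g. checking
        # that the expected devices are present.
        :
    }

    load_rc_config $name
    run_rc_command "$1"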

	Another possible place would be the sender, but that's getting very
fiddly and creates nasty boot-time dependencies between systems (I still
have bad memories of a site that could never be fully shut down because
of the rat's nest of NFS mounts that contained many cycles - I hope they
sorted it out before the first power failure).

	KISS and UFS[1] I think.

[1] Don't say that to anyone who doesn't know what it means.

-- 
Steve O'Hara-Smith <steve@sohara.org>


