Date:      Mon, 8 Feb 2021 14:07:32 -0800
From:      joe mcguckin <joe@via.net>
To:        freebsd-fs@freebsd.org
Subject:   zfs send, recv questions
Message-ID:  <1D71C028-33C7-4950-B3D5-6811A2C47ECE@via.net>
In-Reply-To: <CAOjFWZ6CM1ke1mZYh3065+N6NWw7Usr4wmXus-Hd8ZRTLs+Qng@mail.gmail.com>
References:  <8BF84A0F-E66D-423A-AB99-0D19A9BB37EE@via.net> <CAOjFWZ6CM1ke1mZYh3065+N6NWw7Usr4wmXus-Hd8ZRTLs+Qng@mail.gmail.com>



I'm using zfs send to populate the test box with some sample throwaway
files. zfs recv wants the name of a non-existent dataset/mountpoint that
it will create with all the new files. Is there a way to have zfs add
the files to an existing directory? I tried simply 'mv'ing the files to
another directory on the same pool (trying to add the files to an
existing directory). On UFS this is usually very quick, just a change to
the directory entry, but on ZFS it recopies all the files; yet another
30-minute wait...
I guess since the move crosses a mount point (a dataset boundary),
FreeBSD has to make a copy.

Is there a better way to achieve this?
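One common workaround (a sketch only; the pool and dataset names below are hypothetical) is to receive into a new child dataset whose mountpoint sits under the existing directory tree, so the received files appear below it without a cross-dataset copy:

```shell
# Snapshot the source and send it into a child dataset on the test box.
# "srcpool/data", "testbox", and "testpool/work" are placeholder names.
zfs snapshot srcpool/data@xfer
zfs send srcpool/data@xfer | ssh testbox zfs recv testpool/work/sample
# By default testpool/work/sample mounts at /testpool/work/sample,
# i.e. beneath the existing /testpool/work directory.
```

Note that 'mv' is only a cheap rename within a single dataset; moving files between two datasets, even in the same pool, really does copy the data, which is the 30-minute wait observed above.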

I'm cheating by doing all of this as root. How do I do zfs recv as
non-root? The Lucas book did a lot of hand-waving without a concrete
example.
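For the non-root question, ZFS permission delegation via "zfs allow" is the usual answer. A minimal sketch, assuming a user "joe" and a target dataset "testpool/incoming" (both hypothetical names):

```shell
# As root, delegate the permissions zfs recv needs (one-time setup):
zfs allow joe create,mount,receive testpool/incoming
# On FreeBSD, mounting by a non-root user additionally requires:
sysctl vfs.usermount=1
# After that, joe can receive streams himself:
zfs recv testpool/incoming/sample < stream.zfs
```

See zfs-allow(8) for the full permission list; running "zfs allow testpool/incoming" with no other arguments prints what has been delegated.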

Thanks,

Joe


Joe McGuckin
ViaNet Communications

joe@via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax



> On Feb 8, 2021, at 1:06 PM, Freddie Cash <fjwcash@gmail.com> wrote:
>
> On Mon., Feb. 8, 2021, 12:27 p.m., joe mcguckin <joe@via.net> wrote:
> df -h reports 66T available
>
> zpool list says 102T
>
> Why the discrepancy?
>
> This is on a system with 7 16 TB drives configured as raidz2.
>
> Thanks,
>
> Joe
>
>
> Joe McGuckin
> ViaNet Communications
>
> joe@via.net
> 650-207-0372 cell
> 650-213-1302 office
> 650-969-2124 fax
> "zpool list" shows the raw storage available on the pool, across all
> the disks in the pool, minus some internal reserved storage.
>
> "zfs list" shows the usable storage space after all the parity drives
> are removed from the calculation.
>
> "df" output can be misleading as it doesn't take into account
> compression, reservations, and things like that. It can give you an
> approximation of available space, but it won't be as accurate as
> "zfs list".
>
> For example, if you have 6x 2 TB drives configured as a single raidz2
> vdev, then:
>
> zpool list: around 12 TB (6 drives x 2 TB)
> zfs list: around 8 TB (4 data drives x 2 TB)
> df: should be around 8 TB
>
> Cheers,
> Freddie
>
> Typos due to smartphone keyboard.
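Freddie's arithmetic, applied to the original 7x 16 TB raidz2 pool, also explains the exact numbers Joe saw: drives are sold in decimal TB, while zpool/zfs/df report binary TiB (shown as "T"). A quick back-of-the-envelope check, plain arithmetic rather than any ZFS API:

```python
TB = 10**12   # decimal terabyte, as used on drive labels
TiB = 2**40   # binary tebibyte, as reported by zpool/zfs/df ("T")

def raidz2_capacity_tib(n_drives: int, drive_tb: float):
    """Rough raidz2 capacity: raw (what zpool list shows) and usable
    (upper bound for zfs list / df), ignoring metadata, reserved slop
    space, and raidz allocation overhead."""
    raw = n_drives * drive_tb * TB / TiB
    usable = (n_drives - 2) * drive_tb * TB / TiB  # raidz2: 2 parity drives
    return raw, usable

raw, usable = raidz2_capacity_tib(7, 16)
print(f"raw = {raw:.0f}T, usable = {usable:.0f}T")
# raw comes out near the 102T from "zpool list"; usable is an upper
# bound around 73T, and df's 66T is lower still because of raidz
# overhead and the pool's reserved slop space.
```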



