Date:      Fri, 19 May 2017 07:34:36 +0000
From:      kc atgb <kisscoolandthegangbang@hotmail.fr>
To:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: Different size after zfs send receive
Message-ID:  <AMSPR05MB148026492ACA46AD8D2A15BA0E50@AMSPR05MB148.eurprd05.prod.outlook.com>
In-Reply-To: <58A6B47B-2992-4BB8-A80E-44F74EAE93B2@longcount.org>
References:  <DBXPR05MB157C1956B267EA6BE59F570A0E40@DBXPR05MB157.eurprd05.prod.outlook.com> <58A6B47B-2992-4BB8-A80E-44F74EAE93B2@longcount.org>

On Thu, 18 May 2017 21:53:23 +0000,
Mark Saad <nonesuch@longcount.org> wrote:

Hi,

I see what you are talking about, I think. You refer to "raid" splitting,
right? In that case it is something in the "internals" of the raid system.
Isn't zfs list supposed to report raw data sizes (without metadata,
checksums, ...)?
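
A minimal way to check that, assuming the property names from zfs(8): the
"used" numbers include pool-layout overhead, while the "logical*" properties
ignore compression and the raidz/mirror layout, so they should be much closer
across pools holding the same data.

  # Compare allocated vs. logical sizes for the same dataset on two pools
  # (-p prints exact byte counts).
  zfs get -p used,logicalused,referenced,logicalreferenced storage/usrobj
  zfs get -p used,logicalused,referenced,logicalreferenced b1/usrobj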

I don't really think it is related to what I'm referring to.

Look, for the same pool configuration (one 4-disk raidz1 vdev) with the same
disks and the same data, it reports for storage/usrobj
5819085888 before backup and
5820359616 after restore to the recreated pool.

Even for pools with a single-disk vdev (again same disks, same configuration,
same data as above...) for the same dataset:
5675081728 on the backup1 disk and
5675188224 on backup2.

The difference isn't huge, but the numbers differ and I would have expected
them to be the same.
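
One way to compare the two backup disks independently of the pool layout, as
a minimal sketch (flags as in zfs(8); the snapshot name is taken from the
listings below): a dry-run send of the same snapshot reports an estimated
stream size that depends on the data rather than on how the pool stores it,
so the two estimates should be very close if the contents are identical.

  # Dry-run full sends; compare the "total estimated size" lines.
  zfs send -nv b1/usrobj@backup_sync
  zfs send -nv b2/usrobj@backup_sync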

K.

> Hi kc,
>   This has to do with how data blocks are replicated when stored on a
> raidzN. Moving them to a mirror removes the replicated blocks. This is way
> oversimplified, but imagine you store a file of 10 GB on a raidz1. The
> system splits the file into smaller chunks, of say 1 MB, and stores one
> extra chunk for each chunk that is striped around the raidz1. Storing on a
> mirror just writes the chunk once on each disk. However, with a mirror,
> since you only see 1/2 the number of disks, you never see the extra chunks
> in the used field.
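> 
> A rough back-of-the-envelope along those lines (my own sketch, assuming the
> 1 MB chunks above and ignoring padding and allocation granularity): on a
> 4-disk raidz1 each stripe of 3 data chunks carries 1 parity chunk, so the
> used column is roughly the logical size times 4/3, while a mirror only
> shows one copy's worth.
> 
>   # ~13.3 GB charged to "used" for a 10 GB file on a 4-disk raidz1
>   echo "scale=1; 10 * 4 / 3" | bc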
> 
> Hope this helps.
> 
> ---
> Mark Saad | nonesuch@longcount.org
> 
> > On May 18, 2017, at 3:36 PM, kc atgb <kisscoolandthegangbang@hotmail.fr> wrote:
> > 
> > Hi,
> > 
> > Some days ago I had a need to back up my current pool and restore it
> > after a pool destroy and create.
> > 
> > The pool in my home server is a raidz1 with 4 disks. To back up this pool
> > I grabbed two 4TB disks (single-disk pools) to have a double backup (I
> > have just one SATA port left I can use to plug in a disk).
> > 
> > The whole process of backup and restore went well as far as I can tell.
> > But looking at the sizes reported by zfs list makes me a little bit
> > curious.
> > 
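> > For reference, the transfers were roughly of the following shape (a
> > minimal sketch; the exact flags are my assumption, inferred from the
> > @backup_send/@backup_sync snapshot names below):
> > 
> >   zfs snapshot -r storage@backup_send
> >   zfs send -R storage@backup_send | zfs receive -duF b1
> >   # ...later, a final incremental to catch any last changes:
> >   zfs snapshot -r storage@backup_sync
> >   zfs send -R -i @backup_send storage@backup_sync | zfs receive -duF b1
> > 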
> > NAME                           USED         AVAIL            REFER        MOUNTPOINT
> > storage/datas/ISO              35420869824  381747995136     35420726976  /datas/ISO
> > storage/datas/ISO@backup_send  142848       -                35420726976  -
> > storage/datas/ISO@backup_sync  0            -                35420726976  -
> > 
> > b1/datas/ISO                   35439308800  2176300351488    35439210496  /datas/ISO
> > b1/datas/ISO@backup_send       98304        -                35439210496  -
> > b1/datas/ISO@backup_sync       0            -                35439210496  -
> > 
> > b2/datas/ISO                   35439308800  2176298991616    35439210496  /datas/ISO
> > b2/datas/ISO@backup_send       98304        -                35439210496  -
> > b2/datas/ISO@backup_sync       0            -                35439210496  -
> > 
> > storage/datas/ISO              35421024576  381303470016     35420715072  /datas/ISO
> > storage/datas/ISO@backup_send  142848       -                35420715072  -
> > storage/datas/ISO@backup_sync  11904        -                35420715072  -
> > 
> > 
> > storage/usrobj                 5819085888   381747995136     5816276544   legacy
> > storage/usrobj@create          166656       -                214272       -
> > storage/usrobj@backup_send     2642688      -                5816228928   -
> > storage/usrobj@backup_sync     0            -                5816276544   -
> > 
> > b1/usrobj                      5675081728   2176300351488    5673222144   legacy
> > b1/usrobj@create               114688       -                147456       -
> > b1/usrobj@backup_send          1744896      -                5673222144   -
> > b1/usrobj@backup_sync          0            -                5673222144   -
> > 
> > b2/usrobj                      5675188224   2176298991616    5673328640   legacy
> > b2/usrobj@create               114688       -                147456       -
> > b2/usrobj@backup_send          1744896      -                5673328640   -
> > b2/usrobj@backup_sync          0            -                5673328640   -
> > 
> > storage/usrobj                 5820359616   381303470016     5815098048   legacy
> > storage/usrobj@create          166656       -                214272       -
> > storage/usrobj@backup_send     2535552      -                5815098048   -
> > storage/usrobj@backup_sync     11904        -                5815098048   -
> > 
> > As you can see, the numbers are different for each pool (the initial
> > raidz1, the backup1 disk, the backup2 disk and the new raidz1), I mean in
> > the USED column. I have nearly all my datasets in the same situation
> > (those with fixed data that has not changed between the beginning of the
> > process and now). backup1 and backup2 are identical disks with exactly
> > the same configuration, yet they show different numbers. I used the same
> > commands for all my transfers except for the name of the destination
> > pool.
> > 
> > So, I wonder what can cause these differences? Is it something I have to
> > worry about? Can I consider this normal behavior?
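> > 
> > (If it helps to narrow this down, a per-dataset space breakdown shows
> > whether the differing bytes are charged to the dataset itself or to its
> > snapshots; a minimal sketch, columns as documented for "zfs list -o
> > space":)
> > 
> >   zfs list -o space storage/usrobj b1/usrobj b2/usrobj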
> > 
> > Thanks for your enlightenments,
> > K.
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> 



