Date:      Fri, 10 Apr 2015 10:05:02 +0200
From:      "Ronald Klop" <ronald-lists@klop.ws>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS 10.1 send single snapshot - space 'used' irregularity
Message-ID:  <op.xwu92ocqkndu52@ronaldradial.radialsg.local>
In-Reply-To: <20150409163900.Horde.ZLVwr91i2UaonmJT1bC-Pw1@www.vfemail.net>
References:  <20150409163900.Horde.ZLVwr91i2UaonmJT1bC-Pw1@www.vfemail.net>

How about the disk types? Do they use the same sector size? A different sector
size could give a different overhead.
What is the layout of your pools? RAIDZ1, 2 or 3, or mirror?
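
If you want to paste the details, something along these lines should show the
vdev layout and the ashift each pool was created with, plus the physical
sector size of the disks (pool and disk names are placeholders; zdb output
varies a bit between versions, so treat this as a rough sketch):

  zpool status <pool>
  zdb -C <pool> | grep ashift        # or: zdb <pool> | grep ashift
  diskinfo -v <disk> | grep -E 'sector|stripe'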

Regards,
Ronald.


On Thu, 09 Apr 2015 23:39:00 +0200, Rick Romero <rick@havokmon.com> wrote:

> I have 3 servers, A, B, C.  I'm building C to replace A, and replicating
> the data to C from backup B.  A is offsite in relation to B and C.
> All servers are FreeBSD 10.1, except A - which is 9.2.
>
> I'm confused about the disk usage. Not so much a GB here or there, but 250GB
> is 'unaccounted for' on C.  C and A should be a pretty close match.
>
> A - looks correct
>
> sysvolssd2/home  used                495G                   -
> sysvolssd2/home  usedbysnapshots     37.9G                  -
> sysvolssd2/home  usedbydataset       456G                   -
> sysvolssd2/home  usedbychildren      669M                   -
> sysvolssd2/home  usedbyrefreservation  0                    -
> sysvolssd2/home  logicalused         585G                   -
>
> NAME         SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
> sysvolssd2  1.39T   744G  680G  52%  1.00x  ONLINE  -
>
> B - looks correct (backup of A, holds more snapshots and other crap than A)
> sysvol/primessd_home  used                777G                   -
> sysvol/primessd_home  usedbysnapshots     240G                   -
> sysvol/primessd_home  usedbydataset       537G                   -
> sysvol/primessd_home  usedbychildren      0                      -
> sysvol/primessd_home  usedbyrefreservation  0                     -
> sysvol/primessd_home  logicalused         754G                   -
>
> NAME     SIZE  ALLOC   FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
> sysvol  4.53T  2.43T  2.10T   20%         -  53%  1.00x  ONLINE  -
>
> C - missing what appears to be the multiple-snapshot data.  Only the latest
> snapshot was sent, not the entire dataset, so C's 531G of logicalused is
> close enough to the 537G of B's dataset.
> sysvol_enc/home  used                758G                   -
> sysvol_enc/home  usedbysnapshots     3.00M                  -
> sysvol_enc/home  usedbydataset       752G                   -
> sysvol_enc/home  usedbychildren      5.84G                  -
> sysvol_enc/home  usedbyrefreservation  0                     -
> sysvol_enc/home  logicalused         531G                   -
> NAME         SIZE  ALLOC  FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
> sysvol_enc  1.39T  1.12T  277G   49%         -  80%  1.00x  ONLINE  -
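
Comparing what actually landed on C with what B has might narrow this down,
e.g. the snapshot lists and the dataset properties on both sides. Something
like this (dataset names taken from the output above; run the first pair on B,
the second on C):

  zfs list -t snapshot -r sysvol/primessd_home
  zfs get compression,compressratio,copies,recordsize sysvol/primessd_home

  zfs list -t snapshot -r sysvol_enc/home
  zfs get compression,compressratio,copies,recordsize sysvol_enc/home

A compression or copies mismatch, or a larger ashift plus RAIDZ padding on C,
could explain 'used' ending up well above 'logicalused' there.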
>
> C is geli-encrypted and B is not.
>
> Unfortunately, when I check another server that's geli-encrypted, it looks
> fine:
>
> E -
> nlsysvol/home  used                13.8G                  -
> nlsysvol/home  usedbysnapshots     5.58G                  -
> nlsysvol/home  usedbydataset       7.78G                  -
> nlsysvol/home  usedbychildren      483M                   -
> nlsysvol/home  usedbyrefreservation  0                     -
> nlsysvol/home  logicalused         12.0G                  -
> NAME       SIZE  ALLOC   FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
> nlsysvol   115G  42.8G  72.2G     -         -  37%  1.00x  ONLINE  -
>
> So the difference shouldn't be related to the encryption.  It's almost as if
> the send from B to C included all the incremental snapshots, but didn't
> actually account for them.  Am I reading this wrong, or is something else
> not right?
> Should I delete that dataset, re-send the entire original dataset, then
> delete the incremental snapshots?
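
If you do decide to start over, a dry run will tell you how big each kind of
stream is before you commit to anything. Something like this, run on B (the
snapshot name is made up; pick a real one from 'zfs list -t snapshot', and
'serverC' is a placeholder for the receiving host):

  zfs send -n -v sysvol/primessd_home@somesnap      # estimate: single snapshot, as sent before
  zfs send -n -v -R sysvol/primessd_home@somesnap   # estimate: full replication stream with all snapshots
  zfs send -R sysvol/primessd_home@somesnap | ssh serverC zfs receive -F sysvol_enc/home

-R sends the dataset together with all of its snapshots, and -F on the
receiving side should let it overwrite the existing sysvol_enc/home rather
than you having to destroy it by hand first.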
>
> It makes me a little concerned that deleting a snapshot might delete the data
> which was written at that time, even though it was not deleted in follow-up
> snapshots...
> And I assume FRAG is fragmentation.  50% is a bit strange for a brand-new
> receive, isn't it?
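
On the snapshot worry: destroying a snapshot only frees blocks that nothing
else references, so data still present in later snapshots or in the live
filesystem stays put. If your zfs has the dry-run flags, you can check what a
destroy would reclaim before doing it (snapshot name made up again):

  zfs destroy -n -v sysvol_enc/home@oldsnap

And FRAG in 'zpool list' measures fragmentation of the pool's free space, not
of your files, so it says something about how chopped up the remaining free
space is rather than about the received data itself.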
>
> help.  :)
>
> Rick


