Date:      Sun, 14 Jul 2013 03:55:14 -0400
From:      Zaphod Beeblebrox <zbeeble@gmail.com>
To:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Efficiency of ZFS ZVOLs.
Message-ID:  <CACpH0MfRr_SzjXbTSs72NJdcDzOp+wyzgi5ipidjDVy+oA2Hng@mail.gmail.com>

I have a ZFS pool that consists of nine 1.5T drives (RAID-Z1) and eight 2T
drives (RAID-Z1).  I know this is not exactly a recommended layout, but this
is more a home machine that provides some backup space than a production
machine --- and thus it gets what it gets.

Anyways... a typical filesystem looks like:

[1:7:307]root@virtual:~> zfs list vr2/tmp
NAME      USED  AVAIL  REFER  MOUNTPOINT
vr2/tmp  74.3G  7.31T  74.3G  /vr2/tmp

... that is, "tmp" uses 74.3G and the pool as a whole has 7.31T available.
If tmp had children, "USED" could be larger than "REFER", because "USED"
also accounts for the children's space.
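To make the accounting concrete, here is a toy sketch (not ZFS code, just the arithmetic): USED covers a dataset plus everything beneath it, while REFER counts only the data the dataset itself references; the 10G child below is hypothetical.

```python
G = 1024**3  # bytes per gibibyte

def used(refer_bytes, child_used_bytes):
    """USED >= REFER: children (and snapshots) make up the difference."""
    return refer_bytes + sum(child_used_bytes)

# vr2/tmp has no children, so USED equals REFER (74.3G in the listing above):
no_children = used(74.3 * G, [])

# A hypothetical child consuming 10G would push USED to 84.3G:
with_child = used(74.3 * G, [10 * G])
```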

Now... consider:

[1:3:303]root@virtual:~> zfs list -rt all vr2/Steam
NAME                      USED  AVAIL  REFER  MOUNTPOINT
vr2/Steam                3.25T  9.27T  1.18T  -
vr2/Steam@20130528-0029   255M      -  1.18T  -
vr2/Steam@20130529-0221   172M      -  1.18T  -

vr2/Steam is a ZVOL exported by iSCSI to my desktop and it contains an NTFS
filesystem which is mounted into C:\Program Files (x86)\Steam.  Windows
sees this drive as a 1.99T drive of which 1.02T is used.

Now... the value of "REFER" seems about right: 1.18T vs. 1.02T is pretty
close... but the value of "USED" seems _way_ out.  3.25T... even allowing
that more of the disk may have been "touched" (i.e., written at some point,
from the ZVOL's point of view) than is currently in use, that seems too
large.  Nor is it 1.18T + 255M + 172M.
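The arithmetic in that last sentence is easy to check with the sizes reported by zfs list above:

```python
T = 1024**4  # bytes per tebibyte
M = 1024**2  # bytes per mebibyte

refer = 1.18 * T
snapshots = 255 * M + 172 * M   # unique space held by the two snapshots

# Volume data plus snapshot-unique data comes to roughly 1.18T,
# nowhere near the 3.25T that "USED" reports:
total = (refer + snapshots) / T
```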

Now... I understand that the smallest effective "block" is 7x512 or 8x512
bytes (depending on which vdev the data lands on) --- but does that really
account for it?  A quick Google search says that NTFS uses a default
cluster size of 4096 bytes (or larger).  Is there a fundamental
inefficiency in the way ZVOLs are stored on wide (or wide-ish) RAID-Z
stripes?
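A back-of-the-envelope sketch of that question, under loudly stated assumptions: 512-byte sectors, a 9-disk RAID-Z1 vdev, and a simplified version of the RAID-Z1 allocation rule (one parity sector per row of data sectors, with the allocation padded up to a multiple of parity+1 = 2 sectors).  This is not the actual on-disk layout code, just the per-block overhead arithmetic it implies.

```python
import math

def raidz1_sectors(block_bytes, ndisks, sector=512):
    """Sectors a simplified RAID-Z1 model allocates for one block:
    data sectors, plus one parity sector per row of (ndisks - 1)
    data sectors, padded so the total is a multiple of 2."""
    data = math.ceil(block_bytes / sector)
    rows = math.ceil(data / (ndisks - 1))     # rows of data across the vdev
    total = data + rows                       # one parity sector per row
    return total + (total % 2)                # pad to a multiple of p+1 = 2

# Overhead for a few block sizes on a 9-disk RAID-Z1 (512-byte sectors):
for bs in (4096, 8192, 131072):
    alloc = raidz1_sectors(bs, ndisks=9) * 512
    print(f"{bs:6d}-byte block -> {alloc} bytes allocated "
          f"({alloc / bs:.1%} of logical size)")
```

Under these assumptions the parity-plus-padding overhead for small blocks is on the order of 12-25%, not the ~175% gap between 1.18T and 3.25T --- which is why the question above asks whether something more fundamental is going on.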


