Date: Wed, 13 May 2015 11:33:37 -0500
From: Rick Romero
To: freebsd-fs@freebsd.org
Subject: Re: 10.1 + ZFS snapshot eating diskspace

Ronald was on the right track.
https://lists.freebsd.org/pipermail/freebsd-fs/2015-April/021144.html

The drives on the new system, while identical to the old ones, use native
4k sectors.  On FreeBSD 10.1, ZFS automatically used ashift=12; the old
9.2 system used an ashift of 9.  Since my data is mostly small files
(< 16k), the space is used VERY inefficiently with an ashift of 12.

Setting vfs.zfs.max_auto_ashift=9 prior to creating the new pool resulted
in a properly sized volume after the zfs receive.  zpool status complains
about performance degradation, but I'm deliberately choosing space and
cost over performance here, and at least zpool status -x shows all good.
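For anyone hitting the same thing, the fix boils down to capping the
automatic ashift before the pool exists.  A minimal sketch (the device
names da0-da2 are just placeholders, not my actual layout):

# sysctl vfs.zfs.max_auto_ashift=9
# zpool create sysvol raidz1 da0 da1 da2
# zdb -C sysvol | grep ashift
            ashift: 9

The sysctl only matters at vdev creation time; once a vdev exists its
ashift is fixed, so an existing ashift=12 pool can only be "fixed" by
recreating it and re-receiving the data.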
Quoting Rick Romero:

> Try number two.  I built another new system, no encryption this time.
>
> I replicated ONE snapshot that is about 562GB of data.
> (I just found Ronald's reply in my Spam folder, sorry!)
> This new 10.1 system has the exact same 3 drives in RAIDZ1 as the
> original source (9.2).  What's confusing is that the original RAIDZ1 is
> replicated correctly to a 10-drive RAIDZ2 (10.1), but the RAIDZ2 source
> cannot replicate data correctly to a new 3-drive RAIDZ1.
> So not only is this a problem with the new system, it also concerns me
> that if there were a problem with the old system, a full restore from
> backup would eat all the disk space.
>
> Source:
> # zfs get all sysvol/primessd_home | grep -i used
> sysvol/primessd_home  used                  822G  -
> sysvol/primessd_home  usedbysnapshots       260G  -
> sysvol/primessd_home  usedbydataset         562G  -
> sysvol/primessd_home  usedbychildren        0     -
> sysvol/primessd_home  usedbyrefreservation  0     -
> sysvol/primessd_home  logicalused           811G  -
>
> Right?  562G is the 'current' amount of space used?
>
> So I send it to a new box, and this is the result:
>
> # zfs list -t all
> NAME                        USED  AVAIL  REFER  MOUNTPOINT
> sysvol                      919G      0  12.5G  /sysvol
> sysvol/home                 906G      0   898G  /sysvol/home
> sysvol/home@remrep-Week16  8.53G      -   898G  -
>
> I can see a possible sector size difference or recordsize affecting a
> few bytes, but 400G is a bit excessive.  The fact that it more closely
> matches the full dataset+snapshots is, IMHO, much more telling.
>
> # zfs get all sysvol/home | grep used
> sysvol/home  used                  906G   -
> sysvol/home  usedbysnapshots       8.53G  -
> sysvol/home  usedbydataset         898G   -
> sysvol/home  usedbychildren        0      -
> sysvol/home  usedbyrefreservation  0      -
> sysvol/home  logicalused           574G   -
>
> logicalused is the actual amount used, correct?  Why is it the 'full'
> amount, when only one snapshot was replicated?
>
> So I thought maybe it's not reporting correctly:
>
> # zfs list
> NAME          USED  AVAIL  REFER  MOUNTPOINT
> sysvol        907G  12.3G   256M  /sysvol
> sysvol/home   906G  12.3G   898G  /sysvol/home
>
> # dd bs=1M count=12560 if=/dev/zero of=test2
> dd: test2: No space left on device
> 12558+0 records in
> 12557+1 records out
> 13167886336 bytes transferred in 33.499157 secs (393081126 bytes/sec)
> # zfs list
> NAME          USED  AVAIL  REFER  MOUNTPOINT
> sysvol        919G      0  12.5G  /sysvol
> sysvol/home   906G      0   898G  /sysvol/home
> # dd bs=1M count=12560 if=/dev/zero of=test3
> dd: test3: No space left on device
>
> So what's going on?  Is this a known issue?
> I suppose I can take the new server down to the colo and replicate from
> the original, but that doesn't resolve the 'restore from backup' issue
> that I see happening...
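Looking back at those numbers with the ashift difference in mind, the
overhead shows up straight in the used/logicalused ratio (exact byte
counts come from zfs get -p; the ratios below are back-of-the-envelope
from the rounded figures above, not something ZFS reports directly):

  new 3-disk RAIDZ1 (ashift=12):  used / logicalused = 906G / 574G ~ 1.58
  old pool          (ashift=9):   used / logicalused = 822G / 811G ~ 1.01

On a 3-disk RAIDZ1 with 4k allocation units, every small block is rounded
up to whole 4k sectors and carries proportionally more parity and padding
than large blocks do, so a dataset full of sub-16k files can easily be
charged half again its logical size.  The apparent match with the "full"
dataset+snapshots figure looks like a coincidence; the extra ~330G is
allocation overhead, not extra snapshot data.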