From: Rick Romero
Date: Mon, 27 Apr 2015 11:08:32 -0500
To: freebsd-fs
Subject: 10.1 + ZFS snapshot eating diskspace

Try number two.  I built another new system, no encryption this time.
I replicated ONE snapshot that is about 562GB of data.  (I just found
Ronald's reply in my Spam folder, sorry!)

This new 10.1 system has the exact same 3 drives in RAIDZ1 as the
original source (9.2).  What's confusing is that the original RAIDZ1
replicates correctly to a 10-drive RAIDZ2 (10.1), but the RAIDZ2 source
cannot replicate data correctly to a new 3-drive RAIDZ1.  So not only is
this a problem with the new system, it also worries me that if the old
system ever had a problem, a full restore from backup would eat all the
disk space.

Source:

# zfs get all sysvol/primessd_home | grep -i used
sysvol/primessd_home  used                  822G  -
sysvol/primessd_home  usedbysnapshots       260G  -
sysvol/primessd_home  usedbydataset         562G  -
sysvol/primessd_home  usedbychildren        0     -
sysvol/primessd_home  usedbyrefreservation  0     -
sysvol/primessd_home  logicalused           811G  -

Right?  562G is the 'current' amount of space used?  So I send it to a
new box, and this is the result:

# zfs list -t all
NAME                        USED  AVAIL  REFER  MOUNTPOINT
sysvol                      919G      0  12.5G  /sysvol
sysvol/home                 906G      0   898G  /sysvol/home
sysvol/home@remrep-Week16  8.53G      -   898G  -

I can see a possible sector-size or recordsize difference accounting for
a few bytes, but 400G is a bit excessive.  The fact that it more closely
matches the full dataset plus snapshots is, IMHO, much more telling.
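Roughly, the replication was just a single full send of that one
snapshot, along these lines (paraphrasing from memory, 'newbox' stands
in for the new system's hostname, and the dry-run estimate is only
something I plan to compare against, not output I have in front of me):

# estimate the stream size first (dry run)
zfs send -nv sysvol/primessd_home@remrep-Week16

# then the actual replication to the new 3-drive RAIDZ1
zfs send sysvol/primessd_home@remrep-Week16 | ssh newbox zfs receive sysvol/home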
# zfs get all sysvol/home | grep used
sysvol/home  used                  906G   -
sysvol/home  usedbysnapshots       8.53G  -
sysvol/home  usedbydataset         898G   -
sysvol/home  usedbychildren        0      -
sysvol/home  usedbyrefreservation  0      -
sysvol/home  logicalused           574G   -

logicalused is the actual amount of data, correct?  Why is 'used' the
'full' amount, when only one snapshot was replicated?

So I thought maybe it's just not reporting correctly:

# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
sysvol        907G  12.3G   256M  /sysvol
sysvol/home   906G  12.3G   898G  /sysvol/home

# dd bs=1M count=12560 if=/dev/zero of=test2
dd: test2: No space left on device
12558+0 records in
12557+1 records out
13167886336 bytes transferred in 33.499157 secs (393081126 bytes/sec)

# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
sysvol        919G      0  12.5G  /sysvol
sysvol/home   906G      0   898G  /sysvol/home

# dd bs=1M count=12560 if=/dev/zero of=test3
dd: test3: No space left on device

So what's going on?  Is this a known issue?  I suppose I can take the
new server down to the colo and replicate from the original, but that
doesn't resolve the 'restore from backup' issue that I see happening...
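Before I haul it to the colo, this is what I plan to compare between the
old and new pools, since a sector-size (ashift), recordsize, compression,
or copies difference is about all I can think of that would make 574G of
logical data take 898G on disk (the exact property list is just my guess
at what's relevant; substitute sysvol/primessd_home on the source):

# run on both the source and the new box
zfs get compression,compressratio,copies,recordsize,refreservation sysvol/home
zfs list -o space -r sysvol
zdb -C sysvol | grep ashift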