From: Adam Nowacki <nowakpl@platinum.linux.pl>
Date: Sun, 27 Jan 2013 10:02:57 +0100
To: freebsd-fs@freebsd.org
Subject: Re: ZFS slackspace, grepping it for data

On 2013-01-27 09:36, grarpamp wrote:
> Say there's a 100GB zpool over a single vdev (one drive).
> It's got a few datasets carved out of it.
> How best to stroll through only the 10GB of slackspace
> (aka: df 'Avail') that is present?
> I tried making a zvol out of it but only got 10mb of zeros,
> which makes sense because zfs isn't managing anything
> written there in that empty zvol yet.
> I could troll the entire drive, but that's 10x the data and
> I don't really want the current 90gb of data in the results.
> There is zdb -R, but I don't know the offsets of the slack,
> unless they are somehow tied to the pathname hierarchy.
> Any ideas?

Try `zdb -mmm pool_name` to dump the metaslab space maps; the FREE segments listed there are the slack space. For the on-disk offset, add 0x400000, if I remember correctly.
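A minimal sketch of that offset arithmetic, assuming a hypothetical pool on /dev/ada0 and a made-up segment offset and size taken from `zdb -mmm` output (the 0x400000 shift skips the 4 MiB of front vdev labels and boot block that precede the allocatable region):

```shell
#!/bin/sh
# Hypothetical values copied from a FREE segment in `zdb -mmm tank` output.
ZDB_OFFSET=0x1a000000   # segment offset, relative to allocatable space
SEG_SIZE=0x20000        # segment length in bytes

# Absolute byte offset on the raw device: zdb offset + 4 MiB label area.
ABS_OFFSET=$(( ZDB_OFFSET + 0x400000 ))
echo $ABS_OFFSET   # -> 440401920

# Then read the segment with dd and grep it, e.g. (not run here):
#   dd if=/dev/ada0 bs=512 skip=$(( ABS_OFFSET / 512 )) \
#      count=$(( SEG_SIZE / 512 )) | strings
```

This is only a sketch of the idea in the reply above, not a tested recipe; segment offsets, the device name, and the exact 0x400000 shift should be verified against the actual zdb output for the pool.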