From owner-freebsd-fs@FreeBSD.ORG Sun Jul 14 07:55:15 2013
From: Zaphod Beeblebrox <zbeeble@gmail.com>
To: freebsd-fs <freebsd-fs@freebsd.org>
Date: Sun, 14 Jul 2013 03:55:14 -0400
Subject: Efficiency of ZFS ZVOLs.
List-Id: Filesystems

I have a ZFS pool that consists of 9 1.5T drives (Z1) and 8 2T drives (Z1). I know this is not exactly recommended, but this is more a home machine that provides some backup space than a production machine, so it gets what it gets.

Anyways... a typical filesystem looks like:

[1:7:307]root@virtual:~> zfs list vr2/tmp
NAME      USED  AVAIL  REFER  MOUNTPOINT
vr2/tmp  74.3G  7.31T  74.3G  /vr2/tmp

... that is, "tmp" uses 74.3G and the whole mess has 7.31T available. If tmp had children, "USED" could be larger than "REFER", because the children would account for the rest.

Now... consider:

[1:3:303]root@virtual:~> zfs list -rt all vr2/Steam
NAME                      USED  AVAIL  REFER  MOUNTPOINT
vr2/Steam                3.25T  9.27T  1.18T  -
vr2/Steam@20130528-0029   255M      -  1.18T  -
vr2/Steam@20130529-0221   172M      -  1.18T  -

vr2/Steam is a ZVOL exported via iSCSI to my desktop; it holds an NTFS filesystem mounted at C:\Program Files (x86)\Steam. Windows sees this drive as a 1.99T disk of which 1.02T is used.

Now... the value of "REFER" seems about right: 1.18T vs. 1.02T is pretty close. But the value of "USED" seems _way_ out: 3.25T. Even allowing that more of the volume may have been "touched" at some point (i.e. written, from the ZVOL's point of view) than is currently in use, that seems too large. Nor is it simply 1.18T + 255M + 172M.

Now... I understand that the smallest effective "block" is 7x512 or 8x512 bytes (depending on which vdev is in play), but does that really account for it? A quick Google check says that NTFS uses a default cluster size of 4096 bytes (or larger). Is there a fundamental inefficiency in the way ZVOLs are stored on wide (or wide-ish) RAID-Z stripes?
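
For reference, here is the breakdown I plan to look at next to see where the extra space is actually being charged. These should be the standard zfs(8) space-accounting properties and a zdb config dump (I'm writing the commands from memory, so treat them as a sketch rather than pasted output):

# how USED splits between the dataset itself, snapshots,
# refreservation, and children
zfs list -o space vr2/Steam

# volume size, block size, and any (ref)reservation on the zvol
zfs get volsize,volblocksize,refreservation,reservation,compressratio vr2/Steam

# the sector size the vdevs were created with (ashift), per vdev
zdb -C vr2 | grep ashift

If most of the difference shows up under the refreservation column, then I suppose it's just the zvol being thick-provisioned (if I remember right, a zvol created without -s gets a refreservation for the whole volsize) rather than anything to do with the raidz layout.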
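
And here is the back-of-the-envelope I've been doing for the per-block raidz overhead, as a plain sh sketch. The allocation rule (parity per stripe, plus rounding the total up to a multiple of nparity+1 sectors) and the 4K-sector guess for the 2T drives are my own assumptions, so this is illustrative only:

#!/bin/sh
# Rough raidz1 on-disk size for a single volblock.
# Assumed rule: data sectors, plus one parity sector per
# (ndisks - 1) data sectors, rounded up to a multiple of 2 sectors.
raidz1_asize() {
    volblock=$1 sectsize=$2 ndisks=$3
    data=$(( volblock / sectsize ))
    parity=$(( (data + ndisks - 2) / (ndisks - 1) ))  # ceil(data / (ndisks - 1))
    total=$(( data + parity ))
    total=$(( (total + 1) / 2 * 2 ))                  # round up to even
    echo "${volblock}B of data -> $(( total * sectsize ))B on disk"
}

raidz1_asize 8192 512  9    # 9 x 1.5T raidz1, 512-byte sectors (ashift=9)
raidz1_asize 8192 4096 8    # 8 x 2T raidz1, if those are 4K-sector (ashift=12)

With the (I believe default) 8K volblocksize, that works out to roughly 9K on disk per block on the 9-disk vdev and 16K per block on a 4K-sector 8-disk vdev. So the per-block overhead is real, but on its own it doesn't obviously get from 1.18T to 3.25T, which is why I suspect something else (the refreservation? snapshots of space that was only ever "touched"?) is also in play.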