From: Daniel Mayfield
Date: Thu, 1 Sep 2011 12:46:12 -0500
To: freebsd-fs@freebsd.org
Subject: Re: gptzfsboot and 4k sector raidz
In-Reply-To: <4E5FBE3E.7020706@snakebite.org>
Message-Id: <553883C7-B97D-429F-AF4A-E208B6051B62@3geeks.org>

>> I noticed that the free data space was also bigger. I tried it with
>> raidz on the 512B sectors and it claimed to have only 5.3T of space.
>> With 4KB sectors, it claimed to have 7.25T of space. Seems like
>> something is wonky in the space calculations?
>
> Hmmmm. It didn't occur to me that the space calculations might be wonky.
> That could explain why I was seeing disk usage much higher on 4K than
> 512-byte sectors for all my zfs datasets. Here's my zpool/zfs output with
> 512-byte sectors (4-disk raidz):
>
> [root@flanker/ttypts/0(~)#] zpool list tank
> NAME   SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
> tank  7.12T   698G  6.44T    9%  1.16x  ONLINE  -
> [root@flanker/ttypts/0(~)#] zfs list tank
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> tank   604G  4.74T  46.4K  legacy
>
> It's a raidz1-0 of four 2TB disks, so the space available should be
> (4-1=3)*2TB=6TB? Although I presume that's 6 marketing terabytes, which
> translates to 6000000000000/(1024^4) = roughly 5.46T. And I've got 64k
> boot, 8G swap, 16G scratch on each drive *before* the tank, so eh, I
> guess 4.74T sounds about right.
>
> The 7.12T reported by zpool doesn't seem to be taking into account the
> reduced space from the raidz parity. *shrug*
>
> Enough about sizes; what's your read/write performance like between
> 512-byte/4K? I didn't think to test performance in the 4K configuration;
> I really wish I had, now.

I didn't test performance. I'm doing all the work running from the mfsBSD
boot disc. I'm not sure a simple 'dd' is a good test, but if you have
suggestions, I'm open.

daniel
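
[Editor's note: a rough sanity check of the capacity math quoted above. For
raidz, `zpool list` reports the raw size of all member disks (parity
included), while `zfs list` reports usable space after parity, which is why
7.12T and 4.74T can both be "right". The figures below assume the layout
described in the thread (raidz1 of four 2TB marketing-size drives, with
~24G of boot/swap/scratch partitions carved off each drive first); the
pool name "tank" is taken from the output above.]

    # usable data space: 3 data drives * 2*10^12 bytes, expressed in TiB
    echo "scale=2; 3 * 2 * 10^12 / 1024^4" | bc    # ~5.45

    # raw size as zpool list shows it: all 4 members, parity included
    echo "scale=2; 4 * 2 * 10^12 / 1024^4" | bc    # ~7.27

    # the sector-size choice the pool actually recorded
    # (ashift 9 = 512-byte, ashift 12 = 4K)
    zdb -C tank | grep ashift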
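
[Editor's note: on the 'dd' question, a plain sequential dd against a file
on the pool is a reasonable first-pass comparison between the 512-byte and
4K builds, provided the file is much larger than RAM or the pool is
exported and re-imported before the read-back so the ARC cannot serve it
from cache. A minimal sketch, assuming a dataset from the pool is mounted
at /tank (the /tank/ddtest path is just a placeholder); FreeBSD dd prints
bytes/sec when it finishes:]

    # sequential write: 16 GiB of zeroes (ZFS compression is off by
    # default, so zero blocks are written out like any other data)
    dd if=/dev/zero of=/tank/ddtest bs=1m count=16384

    # flush the cached copy, then read it back sequentially
    zpool export tank && zpool import tank
    dd if=/tank/ddtest of=/dev/null bs=1m
    rm /tank/ddtest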