From: Chris
To: freebsd-fs@freebsd.org
Date: Sat, 5 May 2012 23:21:43 -0700
Subject: ZFS 4K drive overhead

Hi all,

I'm planning to build a raidz2 pool from six 2 TB drives - all with 4K physical sectors, all reporting 512-byte sectors. I've been reading some disturbing things about ZFS when used on 4K drives.
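Since the drives report 512-byte sectors, zpool will default to ashift=9 at creation time. A hedged sketch of the usual FreeBSD workaround of that era - device names (ada0..ada5) and pool name are hypothetical - using a gnop(8) overlay that advertises 4K sectors so the pool is created with ashift=12:

```shell
# Check what logical sector size a drive reports (FreeBSD; ada0 is hypothetical)
diskinfo -v /dev/ada0 | grep sectorsize

# Create a transparent gnop(8) provider that reports 4096-byte sectors.
# ZFS derives a vdev's ashift from the largest sector size among its
# members, so one overlaid member is enough for the whole raidz2 vdev.
gnop create -S 4096 /dev/ada0
zpool create tank raidz2 ada0.nop ada1 ada2 ada3 ada4 ada5

# The overlay is only needed at creation; ashift is fixed in the pool from then on
zpool export tank
gnop destroy ada0.nop
zpool import tank

# Verify the resulting ashift
zdb -C tank | grep ashift
```

This is a sketch of the commonly cited procedure, not a tested recipe; double-check against the gnop(8) and zpool(8) man pages for your FreeBSD release before running it on real disks.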
In this discussion (http://mail.opensolaris.org/pipermail/zfs-discuss/2011-October/049959.html), Jim Klimov pointed out that when ZFS is used with ashift=12, the metadata overhead for a filesystem holding many small files can reach 100% (http://mail.opensolaris.org/pipermail/zfs-discuss/2011-October/049960.html)! That seems pretty bad to me. My questions are:

1. Does anyone on this list have experience running ZFS on 4K drives with ashift=12?

2. Is the overhead per file, so that a relatively large average file size (say, 19 MB) would make it insignificant? Or is the overhead large regardless of file size?

3. What is the speed penalty for using ashift=9 on such an array?

4. Is the safety of the data an issue with ashift=9? These drives can't actually write a single 512-byte sector (they do a read-modify-write of the containing 4K sector), but ZFS is coded on the assumption that they can, so writes are no longer strictly copy-on-write.

5. Does anyone have experience with ashift=9 arrays on 4K drives?

Thanks in advance.
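To make the file-size question concrete, here is a back-of-envelope sketch in plain Python. It models only one component of the overhead - each file's tail being rounded up to the 2^ashift-byte allocation unit - and not ZFS's actual allocator, which also rounds metadata blocks and raidz parity/padding up to the sector size (the effect Klimov describes). Still, it shows why the relative waste shrinks as average file size grows:

```python
def tail_waste(file_size: int, ashift: int) -> int:
    """Bytes lost rounding a file's tail up to the 2**ashift allocation unit."""
    unit = 2 ** ashift
    return (-file_size) % unit

# A 512-byte file on an ashift=12 pool occupies a full 4 KiB block:
# 3584 bytes wasted, i.e. 700% overhead. A 19 MB file wastes at most
# 4095 bytes, well under 0.1%.
for size in (512, 19 * 10**6):
    waste = tail_waste(size, 12)
    print(f"{size:>10} B file: {waste} B wasted ({100 * waste / size:.3f}%)")
```

Under this simplified model, a pool averaging 19 MB per file would see negligible tail-rounding waste even at ashift=12; the worry in the linked thread is the per-block metadata, which this sketch does not capture.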