From: Rotate 13
To: freebsd-fs@freebsd.org
Date: Sat, 24 Sep 2011 15:22:10 -0400
Subject: [ZFS] Mixed 512 and 4096 byte physical sector size

Has anyone had decent performance running a mix of 512 and 4096 byte ("advanced format") physical sector size drives in the same vdev, with ashift=12 and correct alignment? I'm looking at a mirror, maybe raidz. From what I can tell, the worst that can happen is I/O amplification and cache pressure against the drive with the smaller sector size.
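For context, a sketch of the usual FreeBSD procedure for forcing ashift=12 at pool creation (device names ada0/ada1 and pool name tank are hypothetical): diskinfo(8) reports what each drive claims as its sector size, and a temporary gnop(8) provider with a 4096-byte sector size makes zpool create pick ashift=12 for the vdev.

```shell
# Check what each drive reports (512e "advanced format" drives often
# report a 512-byte logical sector despite 4096-byte physical sectors).
diskinfo -v /dev/ada0 | grep -E 'sectorsize|stripesize'
diskinfo -v /dev/ada1 | grep -E 'sectorsize|stripesize'

# Create a temporary GEOM NOP provider that advertises 4096-byte sectors,
# so zpool create detects 4K and sets ashift=12 for the whole vdev.
gnop create -S 4096 /dev/ada0

# Build the mirror through the .nop device; ashift is fixed at vdev
# creation time and cannot be changed afterwards.
zpool create tank mirror /dev/ada0.nop /dev/ada1

# The gnop layer is only needed at creation; drop it and re-import.
zpool export tank
gnop destroy /dev/ada0.nop
zpool import tank
```

This is only a sketch; the commands need root and real (empty) disks, so run them against scratch hardware first.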
Searching shows people have had problems with regular RAID in such a configuration; I think ZFS is probably smart enough that the only problem is more I/O. But I am not a ZFS expert, so I ask. I know it's not ideal, but sometimes I must work with what I've got. And it seems better to set ashift=12 from the start on a live zpool, since added or replacement drives in the future will probably have 4096-byte sectors.
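To confirm which ashift an existing pool's vdevs actually got (pool name tank is hypothetical), the cached pool configuration can be inspected with zdb:

```shell
# ashift: 12 means 4096-byte allocation units; ashift: 9 means 512-byte.
# Each top-level vdev carries its own ashift value.
zdb -C tank | grep ashift
```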