From owner-freebsd-fs@FreeBSD.ORG Sun Jan 22 16:13:15 2012
Message-ID: <4F1C3597.4040009@digiware.nl>
Date: Sun, 22 Jan 2012 17:13:11 +0100
From: Willem Jan Withagen
To: Peter Maloney
Cc: freebsd-fs@freebsd.org
In-Reply-To: <4F1BC493.10304@brockmann-consult.de>
Subject: Re: Question about ZFS with log and cache on SSD with GPT

On 22-1-2012 9:10, Peter Maloney wrote:
> On 21.01.2012 23:06, Alexander Leidinger wrote:
>>> Corsair reports:
>>>> Max Random 4k Write (using IOMeter 08): 50k IOPS (4k aligned)
>>>> So I guess that suggests 4k aligned is required.
>> Sounds like it is.
>>
> I'm not an SSD expert, but I read as much as I can, and found that many
> say that the sector size is not the only thing that matters on an SSD,
> but also the *erase block boundary*. The size of the erase block varies,
> but 2 MiB is a common multiple (or 1 MiB for 99% of them), so you can
> use that for all of them.
>
> The theory I read about is that when the SSD wants to write something,
> it must erase the whole erase block first. If it needs to erase one
> whole erase block to write 512 bytes, that is just the normal case. But
> if you are misaligned, it often needs to erase two erase blocks.
>
> Here is an example from our FreeBSD forum:
> http://forums.freebsd.org/showthread.php?t=19093

Thanks for this thread; there is a lot of useful info in there.
The pity is having to blow ~66 MB on alignment, but then again on 40 or
120 GB SSDs that is only marginal. (I guess that stems from the time
when HDs were 5 MB. :))

I'm still not really sure that this is needed if the BIOS has nothing to
do with these disks, as in our case: the SSDs are only used as log and
cache devices under ZFS.

Especially the testing methods are useful, and they are of course valid
for any type of partitioning... So getting the alignment right at this
level is the first requirement.
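To make that concrete, a minimal sketch of what I have in mind with
gpart; the device name (ada0), the labels, and the sizes are made up,
and it assumes 512-byte sectors, so a 1 MiB boundary falls on every
2048th sector:

  # Hypothetical SSD ada0 with 512-byte sectors; 1 MiB = 2048 sectors.
  gpart create -s gpt ada0
  # Start the first partition at sector 2048, i.e. on a 1 MiB boundary.
  gpart add -t freebsd-zfs -b 2048 -s 8G -l log0 ada0
  # 8 GiB is a multiple of 1 MiB, so the next partition starts aligned too.
  gpart add -t freebsd-zfs -l cache0 ada0
  # Attach both to the pool as log and cache vdevs.
  zpool add zfsdata log gpt/log0
  zpool add zfsdata cache gpt/cache0

The only real point is that every partition offset stays a multiple of
2048 sectors.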
>> I create the first partition at the usual 63-sector offset from the
>> start of the disk (track 1), which is /unaligned/ with the SSD erase
>> block. The second partition is set to start at sector 21030912
>> (10767826944 bytes), which is /aligned/ with the SSD erase block.
>
>> SSD erase block boundaries vary from manufacturer to manufacturer, but
>> a safe number to assume should be 1 MiB (1048576 bytes).

I'd consider using 1 MiB as the boundary, and compare that to the ~66 MB
boundary suggested by aragon.

> In my testing, it made no difference. But as Daniel mentioned:
>
>> With ZFS, the 'alignment' is per-vdev -- therefore you will need to
>> recreate the mirror vdevs again using gnop to make them 4k aligned.
>
> But I just resilvered to add my aligned disks and remove the old ones.
> If that applies to erase boundaries, then it might have hurt my test.

I'm not really fluent in ZFS lingo, but the vdev is what makes up my
zfsdata pool? And does the alignment in there carry over to the caches
underneath? So what is the consequence if ashift = 9 while the
partitions are nicely aligned, even on the erase boundary? Daniel's gnop
trick, as I understand it, is sketched below.
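Something like this, I believe; the device names are made up, and it
assumes the pool can be destroyed and recreated, since ashift is fixed
per vdev at creation time:

  # Expose the disks as 4k-sector providers so zpool picks ashift=12.
  gnop create -S 4096 ada1 ada2
  zpool create zfsdata mirror ada1.nop ada2.nop
  # ashift is recorded in the vdev labels, so the shims can be removed.
  zpool export zfsdata
  gnop destroy ada1.nop ada2.nop
  zpool import zfsdata
  # Verify: this should now report ashift: 12.
  zdb -C zfsdata | grep ashift

--WjW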