From: Kaya Saman
Date: Mon, 27 Jan 2014 18:15:28 +0000
To: krad
Cc: freebsd-questions, Trond Endrestøl
Subject: Re: ZFS confusion
Message-ID: <52E6A240.8010404@gmail.com>

Many thanks, I really appreciate the advice :-)

Best Regards,

Kaya

On 01/27/2014 04:52 PM, krad wrote:
> Look into under-provisioning the SSD drives as well; this can preserve
> write performance in the long term and reduce write wear. Looking at
> the number of drives and the general spec of what you are putting
> together, I would try to stretch to 256 GB SSDs but only provision them
> to use, say, 128-160 GB of the capacity.
>
> I'm not 100% sure this is all still necessary now that TRIM support is
> much better under ZFS, but here is how I did my SSD drives under
> Linux. You may well be able to do it under FreeBSD, but I haven't
> figured out how.
>
> root@ubuntu-10-10:~# hdparm -N /dev/sdb
>
> /dev/sdb:
>  max sectors   = 312581808/312581808, HPA is disabled
>
> root@ubuntu-10-10:~# hdparm -Np281323627 /dev/sdb
>
> /dev/sdb:
>  setting max visible sectors to 281323627 (permanent)
> Use of -Nnnnnn is VERY DANGEROUS.
> You have requested reducing the apparent size of the drive.
> This is a BAD idea, and can easily destroy all of the drive's contents.
> Please supply the --yes-i-know-what-i-am-doing flag if you really want this.
> Program aborted.
>
> root@ubuntu-10-10:~# hdparm -Np281323627 --yes-i-know-what-i-am-doing /dev/sdb
>
> /dev/sdb:
>  setting max visible sectors to 281323627 (permanent)
>  max sectors   = 281323627/312581808, HPA is enabled
>
> root@ubuntu-10-10:~#
>
>
> On 27 January 2014 13:56, Kaya Saman wrote:
>
>     Many thanks for the explanation :-)
>
>     On 01/27/2014 01:13 PM, krad wrote:
>
>         Neither of these setups is ideal. The best practice for your
>         vdev is to use 2^n data drives plus your parity drives.
>         This means in your case, with raidz3, you would do something
>         like:
>
>         2 + 3
>         4 + 3
>         8 + 3
>
>         The first two are far from ideal as the data-to-parity ratios
>         are low, so 8 + 3, i.e. 11 drives per raidz3 vdev, would be
>         optimal. This would fit nicely with your 26-drive enclosure:
>         you would use 2x 11-drive raidz3 vdevs, 2 hot spares, and two
>         devices left for L2ARC/ZIL. It is probably best to chop up the
>         SSDs, mirror the ZIL, and stripe the L2ARC, assuming you don't
>         want to go down the route of using generic SSDs rather than
>         write/read-optimized ones.
>
>     Yep, I was going to use your suggestion for L2ARC/ZIL on 2x 128 GB
>     Corsair Force Series GS 2.5" drives, which have quite good
>     read/write speeds - I also use these on other servers and they
>     tend to be quite good and reliable.
>
>     I think the way to create a mirrored ZIL and striped L2ARC would
>     be to use GPT to partition the drives, then use the ZFS features
>     across the partitions.
>
>     Hmm... so it also looks like I'm going to have to wait a while for
>     some more drives in order to create an 11-disk raidz3 pool.
>
>     But at least things will be done properly and in a good manner
>     rather than going down a path of "no return".
>
>     Regards,
>
>     Kaya
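[Editor's note on the hdparm transcript above: the target sector count in the
under-provisioning example works out to roughly 90% of the drive's native
capacity, i.e. about 10% of the flash is held back as spare area. A quick
sketch of that arithmetic follows; the 10% reserve and the helper name are
assumptions inferred from the numbers in the transcript, not a stated rule.]

```python
# Compute an under-provisioned sector limit for hdparm -Np by reserving
# a fraction of the drive as spare area the OS never touches.

def underprovision_sectors(max_sectors: int, reserve_fraction: float = 0.10) -> int:
    """Return the visible sector count after reserving `reserve_fraction`."""
    return int(max_sectors * (1 - reserve_fraction))

native = 312581808                      # "max sectors" reported by hdparm -N
print(underprovision_sectors(native))   # 281323627, the value used above
```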
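[Editor's note on the vdev sizing advice above: the "2^n data disks + parity"
rule of thumb is easy to enumerate. A small sketch; the helper name is mine,
and the rule itself is a heuristic for even stripe alignment, not a hard ZFS
requirement.]

```python
# Candidate vdev widths following the "2^n data disks + parity" heuristic.
# raidz1 carries 1 parity disk, raidz2 carries 2, raidz3 carries 3.

def raidz_widths(parity: int, max_n: int = 3) -> list[int]:
    """Total vdev sizes (data + parity disks) for n = 1 .. max_n."""
    return [2 ** n + parity for n in range(1, max_n + 1)]

print(raidz_widths(3))   # [5, 7, 11] -> the 2+3, 4+3 and 8+3 layouts above
```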
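[Editor's note on the mirrored-ZIL / striped-L2ARC idea above: one way to
realize it on FreeBSD is to GPT-label a slice of each SSD and hand the labels
to zpool, as the message suggests. This is a hedged sketch only: the pool name
`tank`, the device names `ada1`/`ada2`, the labels, and the 16G slog size are
all placeholder assumptions; check them against your own hardware before
running anything.]

```shell
# Partition both SSDs identically: a small slog slice, rest for L2ARC.
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 16G -l zil0   ada1
gpart add -t freebsd-zfs        -l l2arc0 ada1

gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 16G -l zil1   ada2
gpart add -t freebsd-zfs        -l l2arc1 ada2

# Mirror the intent log (ZIL) across both SSDs...
zpool add tank log mirror gpt/zil0 gpt/zil1
# ...and add both L2ARC slices as cache devices. Cache vdevs are always
# striped and cannot be mirrored, which gives the striped L2ARC for free.
zpool add tank cache gpt/l2arc0 gpt/l2arc1
```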