From owner-freebsd-stable@FreeBSD.ORG Tue Feb 25 19:16:36 2014
Date: Tue, 25 Feb 2014 11:16:23 -0800
From: John-Mark Gurney
To: Dmitry Sivachenko
Subject: Re: fsck dumps core
Message-ID: <20140225191623.GR92037@funkthat.com>
References: <417919B7-C4D7-4003-9A71-64C4C9E73678@gmail.com> <530BC062.8070800@delphij.net> <206E2401-F263-4D50-9E99-F7603828E206@gmail.com>
In-Reply-To: <206E2401-F263-4D50-9E99-F7603828E206@gmail.com>
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.4.2.3i
Cc: stable@freebsd.org, d@delphij.net

Dmitry Sivachenko wrote this message on Tue, Feb 25, 2014 at 15:13 +0400:
> It is always the same story: I was looking for a software replacement
> for a DELL PERC RAID controller, so I tested different variants of raidz.
> With low load, it is OK.
> Under heavy write load, after it eats all free RAM for ARC, the writing
> process gets stuck in the zio->i state, write performance drops to a few
> MB/sec (with 15-20 disks in raidz), and it takes dozens of seconds even
> to spawn a login shell.

Well, if you mean a single raidz w/ 15-20 disks, then of course your
performance would be bad, but I assume that you're doing 3-4 sets of
5-disk raidz, or maybe even 5-7 sets of 3-disk raidz...

I'm sure you found this and know this already, but... I can't find the
link right now, but vdevs effectively become "one disk": each vdev will
only be as fast as its slowest disk, and you then only have x vdevs'
worth of "disks"... So, if you are using 7200RPM SATA drives w/ ~150
IOPS each, and only use one or two vdevs, your perf will suck compared
to the same RAID5 system, which has 3-5x the IOPS...

Also, depending upon the sync workload (NFS), adding an SSD ZIL can be
a big improvement...

> These ZFS problems are heavily documented in mailing lists; time goes
> by and nothing changes.

ZFS's raidz should be compared w/ RAID3, not RAID5, if you want to do a
more realistic comparison between FSes...

> avg@ states "Empirical/anecdotal safe limit on pool utilization is
> said to be about 70-80%." -- isn't it too much of a price for an
> fsck-less FS?

:)

> http://markmail.org/message/mtws224umcy5afsa#query:+page:1+mid:xkcr53ll3ovcme5f+state:results

Even Solaris's ZFS guide says that:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pool_Performance_Considerations

> (My problems arise regardless of pool usage, even on an almost empty
> partition.)
>
> So either I can't cook it (yes, I spent a lot of time reading
> FreeBSD's ZFS wiki and trying different settings), or ZFS is suitable
> only for low-load scenarios like root/var/home on zfs.

I know others are running high IOPS on ZFS... so, not sure what to say..

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
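The vdev IOPS reasoning in the message above can be sketched as a quick back-of-the-envelope calculation. This is only a rough sketch of the rule of thumb described there (each raidz vdev delivers roughly one member disk's worth of random IOPS); the ~150 IOPS per 7200RPM drive figure and the example layouts are taken from the message, not measured:

```python
# Back-of-the-envelope random-IOPS estimate for raidz pool layouts.
# Rule of thumb from the discussion above: a raidz vdev performs roughly
# like its single slowest member disk for random I/O, so the pool
# delivers about (number of vdevs) x (per-disk IOPS).

DISK_IOPS = 150  # rough figure for a 7200RPM SATA drive, per the message

def pool_random_iops(num_vdevs, per_disk_iops=DISK_IOPS):
    """Estimate pool random IOPS: one disk's worth per vdev."""
    return num_vdevs * per_disk_iops

# Same 15 disks arranged three ways:
for vdevs, disks_per in [(1, 15), (3, 5), (5, 3)]:
    print(f"{vdevs} x {disks_per}-disk raidz: ~{pool_random_iops(vdevs)} IOPS")
```

A single wide 15-disk raidz gets roughly one disk's worth of random IOPS, while splitting the same disks into three or five smaller raidz vdevs multiplies that accordingly, which matches the 3-5x gap vs. RAID5 mentioned above.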