From owner-freebsd-stable@FreeBSD.ORG Fri Jul 25 09:18:41 2008
Date: Fri, 25 Jul 2008 11:18:41 +0200
From: Kris Kennaway <kris@FreeBSD.org>
To: Claus Guttesen
Cc: FreeBSD Stable <freebsd-stable@freebsd.org>
Subject: Re: zfs, raidz, spare and jbod
Message-ID: <48899A71.4040508@FreeBSD.org>

Claus Guttesen wrote:
> Hi.
>
> I installed FreeBSD 7 a few days ago and upgraded to the latest stable
> release using the GENERIC kernel. I also added these entries to
> /boot/loader.conf:
>
> vm.kmem_size="1536M"
> vm.kmem_size_max="1536M"
> vfs.zfs.prefetch_disable=1
>
> Initially prefetch was enabled and I would experience hangs, but after
> disabling prefetch, copying large amounts of data went along without
> problems. To see whether FreeBSD 8 (current) had better copy
> performance I upgraded to current as of yesterday. After upgrading and
> rebooting, the server responded fine.
>
> The server is a Supermicro with a quad-core Harpertown E5405, two
> internal SATA drives and 8 GB of RAM. I installed an Areca ARC-1680
> SAS controller and configured it in JBOD mode. I attached an external
> SAS cabinet with 16 SAS disks of 1 TB (931 binary GB) each.
>
> I created a raidz2 pool with 10 disks and added one spare. I copied
> approx. 1 TB of small files (each approx. 1 MB), and during the copy I
> simulated a disk crash by pulling one of the disks out of the cabinet.
> ZFS did not activate the spare, and the copying stalled until I
> rebooted after 5-10 minutes. When I performed a 'zpool status' the
> command would not complete. I did not see any messages in
> /var/log/messages. The state in top showed 'ufs-'.

That means that it was UFS that hung, not ZFS.  What was the process
backtrace, and what role does UFS play on this system?

Kris

> A similar test on Solaris Express Developer Edition b79 activated the
> spare after ZFS had tried to write to the missing disk enough times
> and then marked it as faulted. Has anyone else tried to simulate a
> disk crash in raidz(2) and succeeded?
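
For reference, a pool laid out as in the quoted message would typically be
created along the following lines. The pool name "tank" and the da(4) device
numbers are only illustrative; the actual numbering depends on how the
ARC-1680 presents the JBOD disks to the host:

  # 10-disk raidz2 vdev, then a hot spare (device names are placeholders)
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
  zpool add tank spare da10
  zpool status tank

If a spare is not taken into service on its own after a disk is pulled, it
can be attached by hand in place of the failed device, again with
placeholder names:

  zpool replace tank da3 da10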
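
As for the backtrace Kris asks about: on a reasonably recent 8-current, one
way to capture the kernel stack of the stuck process, assuming the machine is
still responsive enough to run commands, is procstat; the PID is whatever top
or ps reports for the hung copy process:

  # dump the kernel thread stacks of the stuck process (PID is a placeholder)
  procstat -kk <pid>

If the box is too far gone for that and the kernel has DDB compiled in, the
same information can be had by breaking into the debugger and running
'trace <pid>' there.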