From: Boris Kochergin <spawk@acm.poly.edu>
Date: Wed, 20 Oct 2010 13:37:22 -0400
To: Sean Thomas Caron
Cc: Lawrence Farr, freebsd-stable@freebsd.org
Subject: Re: Spurious reboot in 8.1-RELEASE when reading from ZFS pool with > 9 disks
Message-ID: <4CBF28D2.4080103@acm.poly.edu>
In-Reply-To: <20101020132627.20874pa6yfreu6io@web.mail.umich.edu>

Ahoy. I just thought I'd add a data point to the mix. I have an 11-disk v13 pool made up of 400 GB disks on an 8.1 amd64 system, and the machine behaves just fine with it:

# zpool status
  pool: archive
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: resilver completed after 0h0m with 0 errors on Fri Oct  8 17:56:52 2010
config:

        NAME        STATE     READ WRITE CKSUM
        archive     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4     ONLINE       0     0     0  133K resilvered
            ad6     ONLINE       0     0     0  84K resilvered
            ad8     ONLINE       0     0     0  85.5K resilvered
            ad10    ONLINE       0     0     0  84.5K resilvered
            ad12    ONLINE       0     0     0  88K resilvered
            ad14    ONLINE       0     0     0  83.5K resilvered
            ad16    ONLINE       0     0     0  83K resilvered
            ad18    ONLINE       0     0     0  84.5K resilvered
            ad20    ONLINE       0     0     0  85.5K resilvered
            ad22    ONLINE       0     0     0  84K resilvered
            ad24    ONLINE       0     0     0  86.5K resilvered

errors: No known data errors

-Boris

On 10/20/10 13:26, Sean Thomas Caron wrote:
> Hi Lawrence,
>
> Interesting; have you tried this with raidz2 as well?
>
> I just created a raidz2 pool with 5 disks and then added another
> 5-disk raidz2 to it, for a total of 10 disks in the pool (though this
> is ultimately a losing strategy unless the number of disks is >> 9,
> because two drives are lost to parity in each sub-raid in the pool).
>
> It seemed just slightly more stable than creating a single raidz2
> pool with > 9 disks, but it still crashes.
>
> I guess this does let me say it's more an issue of the number of
> devices in the pool than the capacity of the pool, because with the
> parity drives taken out, the pool with two 5-disk raidz2s has less
> total capacity than a pool with a single 9-disk raidz2.
>
> Just out of idle curiosity, I also tried it with raidz1 on my system.
> Again, I created a 5-disk pool, raidz1 this time, then added another
> 5-disk raidz1 to the pool for, again, a total of 10 disks.
>
> Again, a bit of a losing strategy versus creating one great big raidz
> unless the number of disks is >> 9, because a disk in each
> sub-raidz1 in the pool is lost to parity, though less so, of course,
> than with raidz2.
>
> This seemed to crash too, with the same behavior.
>
> Are you using 8.1-RELEASE or STABLE or ...?
>
> Best,
>
> -Sean
>
>>
>> I have a 16-disk pool. If you create it with
>>
>> zpool create poolname raidz disk1 disk2 disk3 etc
>>
>> then
>>
>> zpool add poolname raidz disk8 disk9 disk10 etc
>>
>> you get the full-size pool and no issues.
>>
>>   pool: tank
>>  state: ONLINE
>>  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Oct 20 14:54:08 2010
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         tank        ONLINE       0     0     0
>>           raidz1-0  ONLINE       0     0     0
>>             da0     ONLINE       0     0     0
>>             da1     ONLINE       0     0     0
>>             da2     ONLINE       0     0     0
>>             da3     ONLINE       0     0     0
>>             da4     ONLINE       0     0     0
>>             da5     ONLINE       0     0     0
>>             da6     ONLINE       0     0     0
>>             da7     ONLINE       0     0     0
>>           raidz1-1  ONLINE       0     0     0
>>             da8     ONLINE       0     0     0
>>             da9     ONLINE       0     0     0
>>             da10    ONLINE       0     0     0
>>             da11    ONLINE       0     0     0
>>             da12    ONLINE       0     0     0
>>             da13    ONLINE       0     0     0
>>             da14    ONLINE       0     0     0
>>             da15    ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
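
[Editor's note: the workaround discussed in the thread, splitting a wide pool into two smaller raidz2 vdevs, can be sketched as below. Device names da0..da9 and the pool name "tank" are illustrative assumptions, not taken from the thread; the arithmetic reproduces Sean's capacity reasoning.]

```shell
#!/bin/sh
# Sketch of the two-vdev workaround (device names are assumptions):
#   zpool create tank raidz2 da0 da1 da2 da3 da4
#   zpool add    tank raidz2 da5 da6 da7 da8 da9

# Sean's capacity point: each raidz2 vdev loses 2 disks to parity,
# so two 5-disk raidz2 vdevs give (5-2)+(5-2) = 6 data disks,
# while one 9-disk raidz2 gives 9-2 = 7 data disks.
echo "two 5-disk raidz2 vdevs: $(( (5 - 2) + (5 - 2) )) data disks"
echo "one 9-disk raidz2:       $(( 9 - 2 )) data disks"
```

So the split layout trades a disk's worth of capacity for (per the thread, only slightly) better stability on pools with more than 9 devices.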