From owner-freebsd-stable@FreeBSD.ORG Sat Jan 26 01:00:55 2008
Return-Path:
Delivered-To: freebsd-stable@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1E04D16A417 for ; Sat, 26 Jan 2008 01:00:55 +0000 (UTC) (envelope-from jdc@parodius.com)
Received: from mx01.sc1.parodius.com (mx01.sc1.parodius.com [72.20.106.3]) by mx1.freebsd.org (Postfix) with ESMTP id 051C313C469 for ; Sat, 26 Jan 2008 01:00:54 +0000 (UTC) (envelope-from jdc@parodius.com)
Received: by mx01.sc1.parodius.com (Postfix, from userid 1000) id E680F1CC079; Fri, 25 Jan 2008 17:00:54 -0800 (PST)
Date: Fri, 25 Jan 2008 17:00:54 -0800
From: Jeremy Chadwick
To: Joe Peterson
Message-ID: <20080126010054.GA52891@eos.sc1.parodius.com>
References: <479A0731.6020405@skyrush.com> <20080125162940.GA38494@eos.sc1.parodius.com> <479A3764.6050800@skyrush.com> <3803988D-8D18-4E89-92EA-19BF62FD2395@mac.com> <479A4CB0.5080206@skyrush.com> <20080126003845.GA52183@eos.sc1.parodius.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20080126003845.GA52183@eos.sc1.parodius.com>
User-Agent: Mutt/1.5.16 (2007-06-09)
Cc: freebsd-stable@freebsd.org
Subject: Re: "ad0: TIMEOUT - WRITE_DMA" type errors with 7.0-RC1
X-BeenThere: freebsd-stable@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Production branch of FreeBSD source code
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 26 Jan 2008 01:00:55 -0000

On Fri, Jan 25, 2008 at 04:38:46PM -0800, Jeremy Chadwick wrote:
> I'll have to poke at SMART stats later to see what showed up.

So the box did indeed panic. The backtrace contained about 1.5 screens of function calls from the stack, which makes taking a photo of the screen a bit worthless. The functions shown were predominantly I/O related, and since a disk had locked up (or something like that), this didn't surprise me.
SMART stats showed absolutely nothing wrong with ad6, or with any of the other drives on the system.

Worse: my ZFS pool appears *completely* gone -- that's about 170GB of data. I don't even know how that happened, because there were absolutely no issues reported on either of the disks in the ZFS pool. It's like the situation somehow caused ZFS to go crazy and lose all of its metadata.

icarus# zfs list
no datasets available

This doesn't bode well, and doesn't make me happy. At all.

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                   Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |