From: Jean-Yves Avenard <jyavenard@gmail.com>
To: jhell
Cc: freebsd-stable@freebsd.org
Date: Wed, 29 Dec 2010 03:15:45 +1100
Subject: Re: New ZFSv28 patchset for 8-STABLE: Kernel Panic
List-Id: Production branch of FreeBSD source code
Hi,

On 27 December 2010 16:04, jhell wrote:
> 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
> 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
> 3) ( mount -w / ) to make sure you can remove and also write a new
> zpool.cache as needed.
> 4) Remove /boot/zfs/zpool.cache
> 5) kldload both zfs and opensolaris, i.e. ( kldload zfs ) should do the trick
> 6) Verify that vfs.zfs.recover=1 is set, then ( zpool import pool )
> 7) Monitor activity while it runs using Ctrl+T.

Ok... I've got into the same situation again, no idea why this time.

I've followed your instructions, and sure enough I could import my pool again.

However, I wanted to find out what was going on, so I did:

zpool export pool

followed by

zpool import

And guess what... zpool hung again. I can't Ctrl-C it; I have to reboot.

So here we go again. Rebooted as above.

zpool import pool -> ok

This time, I decided that maybe what was screwing things up was the cache.
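For anyone following along, jhell's recovery procedure condenses to roughly the sequence below. This is only a sketch of the steps as I understood them (pool name "pool" as in my case); I haven't re-run it in this exact form, and it obviously needs root in single-user mode:

```shell
# At the loader prompt, before booting:
#   OK set vfs.zfs.recover=1
# Then boot to single-user mode with zfs.ko/opensolaris.ko NOT loaded.

mount -w /                  # remount root read-write so files can be changed
rm /boot/zfs/zpool.cache    # drop the (possibly stale) pool cache file
kldload zfs                 # loads opensolaris.ko as a dependency
sysctl vfs.zfs.recover      # verify it still reads 1
zpool import pool           # re-import; Ctrl+T shows progress while it runs
```

Removing zpool.cache forces the import to taste the devices afresh instead of trusting the cached configuration.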
zpool remove pool ada1s2 -> ok

zpool status:

# zpool status
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 18h20m with 0 errors on Tue Dec 28 10:28:05 2010
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
        logs
          ada1s1    ONLINE       0     0     0

errors: No known data errors

# zpool export pool -> ok
# zpool import pool -> ok
# zpool add pool cache /dev/ada1s2 -> ok
# zpool status
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 18h20m with 0 errors on Tue Dec 28 10:28:05 2010
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
        logs
          ada1s1    ONLINE       0     0     0
        cache
          ada1s2    ONLINE       0     0     0

errors: No known data errors

# zpool export pool -> ok
# zpool import
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 15.11r 0.00u 0.03s 0% 2556k
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 15.94r 0.00u 0.03s 0% 2556k
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 16.57r 0.00u 0.03s 0% 2556k
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 16.95r 0.00u 0.03s 0% 2556k
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 32.19r 0.00u 0.03s 0% 2556k
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 32.72r 0.00u 0.03s 0% 2556k
load: 0.00  cmd: zpool 405 [spa_namespace_lock] 40.13r 0.00u 0.03s 0% 2556k

Aha! It's not the separate log device that makes zpool hang, it's the cache device! Having the cache device attached prevents the pool from being imported again...

Rebooting: same deal... can't access the pool any longer!

Hopefully this is enough of a hint for someone to track down the bug...
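In case it helps whoever triages this, the failure boils down to the short sequence below, based purely on what I observed above (device names are from my box, so treat them as placeholders):

```shell
# Reproduction as observed on the ZFSv28 patchset for 8-STABLE:
zpool add pool cache /dev/ada1s2   # attach an L2ARC cache device -> ok
zpool export pool                  # export succeeds
zpool import                       # hangs in [spa_namespace_lock], unkillable

# What worked for me as a workaround: detach the cache device first.
zpool remove pool ada1s2           # drop the cache vdev before exporting
zpool export pool
zpool import pool                  # imports fine without the cache device
```

With vfs.zfs.recover=1 and no zpool.cache (the procedure earlier in the thread) the pool can still be recovered after hitting the hang, but avoiding the cache device sidesteps it entirely.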