From owner-freebsd-fs@FreeBSD.ORG Sat Feb 12 09:54:45 2011
Date: Sat, 12 Feb 2011 09:54:43 +0000
From: Nik A Azam <freebsd-list@nikazam.com>
To: freebsd-fs@freebsd.org, mm@freebsd.org
In-Reply-To: <4D46D0CF.8090103@chreo.net>
References: <4D0A09AF.3040005@FreeBSD.org> <4D46D0CF.8090103@chreo.net>
Subject: Re: New ZFSv28 patchset for 8-STABLE

Hi Martin, all,

I'm testing ZFS v28 on FreeBSD 8-STABLE (r218583M, with the ZFS patch from
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20110208-nopython.patch.xz)
and I've been getting this panic every time I issue any zfs/zpool command.
It is 100% reproducible.

panic: _sx_xlock_hard: recursed on non-recursive sx GEOM topology @ /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:380
cpuid = 1
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
kdb_backtrace() at kdb_backtrace+0x37
panic() at panic+0x182
_sx_xlock_hard() at _sx_xlock_hard
_sx_xlock() at _sx_xlock+0xa9
vdev_geom_open_by_path() at vdev_geom_open_by_path+0x45
vdev_geom_open() at vdev_geom_open+0x100
vdev_open() at vdev_open+0xc9
vdev_open_children() at vdev_open_children+0x39
vdev_raidz_open() at vdev_raidz_open+0x4f
vdev_open() at vdev_open+0xc9
vdev_open_children() at vdev_open_children+0x39
vdev_root_open() at vdev_root_open+0x40
vdev_open() at vdev_open+0xc9
spa_load() at spa_load+0x23f
spa_load_best() at spa_load_best+0x4a
pool_status_check() at pool_status_check+0x19
zfsdev_ioctl() at zfsdev_ioctl+0x208
devfs_ioctl_f() at devfs_ioctl_f+0x73
kern_ioctl() at kern_ioctl+0x8b
ioctl() at ioctl+0xec
syscall() at syscall+0x41
Xfast_syscall() at Xfast_syscall+0x2e
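
For what it's worth, my reading of the panic message is that the GEOM
topology lock (a non-recursive sx(9) lock) is already held by the same
thread when vdev_geom_open_by_path() tries to take it again. Here is a
minimal sketch of that sx(9) behaviour, not the actual vdev_geom.c code,
using a purely hypothetical lock just to illustrate the recursion check:

/*
 * Minimal sketch (not vdev_geom.c): an sx lock created without SX_RECURSE
 * panics with "_sx_xlock_hard: recursed on non-recursive sx ..." if the
 * thread that already owns it tries to take it a second time.
 */
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/sx.h>

static struct sx demo_lock;             /* hypothetical lock, for illustration */

static void
demo_sx_recursion(void)
{
        sx_init(&demo_lock, "demo");    /* no SX_RECURSE flag */
        sx_xlock(&demo_lock);
        sx_xlock(&demo_lock);           /* same thread again -> panics here */
        sx_xunlock(&demo_lock);
        sx_destroy(&demo_lock);
}

If something in the zpool ioctl path already holds the topology lock before
vdev_geom_open_by_path() is reached, that would match the backtrace above,
but I haven't confirmed that in the source.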

I'm more than happy to investigate this further if given instructions on
how to do so. I really appreciate the work that you guys have put into
FreeBSD/ZFS!

Thanks,
Nik

On Mon, Jan 31, 2011 at 3:10 PM, Chreo wrote:
> Hello Martin,
>
> On 2010-12-16 13:44, Martin Matuska wrote:
>
>> Following the announcement of Pawel Jakub Dawidek (pjd@FreeBSD.org) I am
>> providing a ZFSv28 testing patch for 8-STABLE.
>>
>> Link to the patch:
>>
>> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
>>
> I've tested
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20110116-nopython.patch.xz
> with 8-STABLE from 2011-01-18.
>
> It seems to work nicely except for a panic when importing a degraded pool
> on GELI vdevs (captured from the screen and OCR'd):
>
> vdev.geom_detach:156[1]: Closing access to label/Disk.4.eli.
> vdev.geom_detach:160[1]: Destroyed consumer to label/Disk.4.eli.
> vdev.geom_detach:156[1]: Closing access to label/Disk.5.eli.
> vdev.geom_detach:160[1]: Destroyed consumer to label/Disk.5.eli.
> Solaris: WARNING: can't open objset for Ocean/Images
> panic: solaris assert: bpobj_iterate(defer_bpo, spa_free_sync_cb, zio, tx) == 0
> (0x6 == 0x0), file:
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c,
> line: 5576
> cpuid = 1
> KDB: stack backtrace:
> #0 0xffffffff802f14ce at kdb_backtrace+0x5e
> #1 0xffffffff802bf877 at panic+0x187
> #2 0xffffffff808e0c48 at spa_sync+0x978
> #3 0xffffffff808f1011 at txg_sync_thread+0x271
> #4 0xffffffff802960b7 at fork_exit+0x117
> #5 0xffffffff804b7a7e at fork_trampoline+0xe
> GEOM_ELI: Device label/Disk.5.eli destroyed.
> GEOM_ELI: Device label/Disk.4.eli destroyed.
>
> The command run was:
> # zpool import -F Ocean
> and that worked with ZFS v15.
>
> The panic is 100% reproducible. The reason for this import was that I
> wanted to try to clear the log (something which seems to be possible on
> v28 but not on v15) with "zpool clear Ocean", and that caused a panic. An
> export was done and then the import was tried. Using the same command on
> v15 works and imports the pool, but it is faulted (due to the log).
>
> Is there anything I can test or do about this? I've also tried importing
> with "-o failmode=continue" and that does absolutely nothing to prevent
> the panic.
>
> The other pool on the same system works perfectly so far with v28. Many
> thanks to you and PJD for your work on ZFS.
>
> Regards,
> Christian Elmerot
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
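
P.S. On Christian's assert panic quoted above: as far as I understand, the
"solaris assert: ... == 0 (0x6 == 0x0)" text comes from the Solaris-compat
VERIFY-style macros, which panic whenever the checked call returns non-zero;
the 0x6 is presumably the error returned by bpobj_iterate() (6 is ENXIO on
FreeBSD), which would fit the "can't open objset" warning printed just
before it. A rough userland sketch of that style of check, not the actual
ZFS macros, with a stand-in function and the error value chosen only to
mirror the report:

/*
 * Rough sketch of a VERIFY-style "must return 0" check (not the actual
 * Solaris/ZFS macros): on a non-zero return it reports the expression and
 * the two compared values, then aborts, much like the quoted panic line.
 */
#include <stdio.h>
#include <stdlib.h>

#define DEMO_VERIFY_ZERO(expr) do {                                     \
        int _err = (expr);                                              \
        if (_err != 0) {                                                \
                fprintf(stderr,                                         \
                    "panic: solaris assert: %s == 0 (0x%x == 0x0)\n",   \
                    #expr, _err);                                       \
                abort();                                                \
        }                                                               \
} while (0)

/* Hypothetical stand-in for bpobj_iterate(); returns an errno on failure. */
static int
demo_bpobj_iterate(void)
{
        return (6);     /* 6 == ENXIO, matching the reported 0x6 */
}

int
main(void)
{
        DEMO_VERIFY_ZERO(demo_bpobj_iterate());
        return (0);
}

So the panic itself is just the assert firing; the underlying question is
why bpobj_iterate() fails on that pool during the import.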