Date: Wed, 10 Mar 2010 18:31:43 +0100
From: Pawel Jakub Dawidek <pjd@FreeBSD.org>
To: Borja Marcos <borjam@sarenet.es>
Cc: freebsd-fs@freebsd.org, FreeBSD Stable <freebsd-stable@freebsd.org>
Subject: Re: Many processes stuck in zfs
Message-ID: <20100310173143.GD1715@garage.freebsd.pl>
In-Reply-To: <E04F91AA-B2C4-4166-A24A-74F1BEF01519@sarenet.es>
References: <864468D4-DCE9-493B-9280-00E5FAB2A05C@lassitu.de> <20100309122954.GE3155@garage.freebsd.pl> <EC9BC6B4-8D0E-4FE3-852F-0E3A24569D33@sarenet.es> <20100309125815.GF3155@garage.freebsd.pl> <CB854F58-03AF-46DD-8153-85FA96037C21@sarenet.es> <BFF1E2D6-B48A-4A5E-ACEE-8577FDB07820@sarenet.es> <20100310110202.GA1715@garage.freebsd.pl> <E04F91AA-B2C4-4166-A24A-74F1BEF01519@sarenet.es>
On Wed, Mar 10, 2010 at 04:12:36PM +0100, Borja Marcos wrote:
> On Mar 10, 2010, at 12:02 PM, Pawel Jakub Dawidek wrote:
>
> > Once the deadlock occurs, enter DDB and send me the output of:
> >
> > ps
> > show alllocks
> > show lockedvnods
> > show allchains
> > alltrace
>
> (Again, crossposted to -fs, ZFS related)
>
> The previous one was a panic when performing the test with several tar jobs running in parallel.
>
> Now this is a capture of the deadlock itself, instead of a panic. (I called panic from the debugger to generate a dump)
[...]

Hmm, interesting. Especially those two traces:

Tracing command zfs pid 1820 tid 100105 td 0xffffff0002ca4000
[...]
_cv_wait() at _cv_wait+0x17a
txg_wait_synced() at txg_wait_synced+0x98
zfsvfs_teardown() at zfsvfs_teardown+0x1f6
zfs_suspend_fs() at zfs_suspend_fs+0x2b
zfs_ioc_recv() at zfs_ioc_recv+0x28b
zfsdev_ioctl() at zfsdev_ioctl+0x8d
devfs_ioctl_f() at devfs_ioctl_f+0x76
kern_ioctl() at kern_ioctl+0xc5
ioctl() at ioctl+0xfd
[...]

Tracing command bsdtar pid 1699 tid 100093 td 0xffffff000262dae0
[...]
_sx_slock_hard() at _sx_slock_hard+0x1b7
_sx_slock() at _sx_slock+0xc1
zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x63
VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0xb5
vgonel() at vgonel+0x119
vnlru_free() at vnlru_free+0x345
getnewvnode() at getnewvnode+0x24f
zfs_znode_cache_constructor() at zfs_znode_cache_constructor+0x43
zfs_znode_alloc() at zfs_znode_alloc+0x38
zfs_mknode() at zfs_mknode+0x259
zfs_freebsd_create() at zfs_freebsd_create+0x661
VOP_CREATE_APV() at VOP_CREATE_APV+0xb3
vn_open_cred() at vn_open_cred+0x473
kern_openat() at kern_openat+0x179
[...]

This should be impossible. If we are that deep in zfsvfs_teardown(), it means that we hold the z_teardown_lock exclusively, and we do, as the 'show alllocks' output confirms.
But if we are holding this lock exclusively, we shouldn't be that deep in the create code path, because there we need to hold this lock as a reader. It isn't visible in the 'show alllocks' output, because this lock is special (rrwlock.c).

I see three possibilities:

1. We are looking at different file systems here. But where is the deadlock coming from then?
2. There is a bug in rrwlock.c. Highly unlikely, I think.
3. My thinking is incorrect somewhere.

Let me do some more thinking and I'll get back to you (possibly with a patch that will help us find the right possibility).

--
Pawel Jakub Dawidek                       http://www.wheelsystems.com
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!