From owner-freebsd-bugs@FreeBSD.ORG Wed Oct 26 20:30:26 2005
Date: Wed, 26 Oct 2005 20:30:26 GMT
Message-Id: <200510262030.j9QKUQnp074222@freefall.freebsd.org>
From: Frank Mayhar
To: freebsd-bugs@FreeBSD.org
Subject: Re: kern/87861: "panic: initiate_write_inodeblock_ufs2: already started" on 6.0-RC1
List-Id: Bug reports

The following reply was made to PR kern/87861; it has been noted by GNATS.

From: Frank Mayhar
To: bug-followup@FreeBSD.org
Subject: Re: kern/87861: "panic: initiate_write_inodeblock_ufs2: already started" on 6.0-RC1
Date: Wed, 26 Oct 2005 13:28:33 -0700

More information.  I built the kernel with INVARIANTS and managed to catch a
KASSERT() at the beginning of bundirty():

    panic: bundirty: buffer 0xd6d6cc00 still on queue 1

Queue 1 is QUEUE_CLEAN, the queue for (per the commentary) "non-B_DELWRI
buffers."
The buffer in question looks like:

(kgdb) print $buf
$4 = (struct buf *) 0xd6d6cc00
(kgdb) print *$buf
$5 = {
  b_bufobj = 0xc4dff2e0,
  b_bcount = 0x4000,
  b_caller1 = 0x0,
  b_data = 0xda0c5000 "",
  b_error = 0x10,
  b_iocmd = 0x2,
  b_ioflags = 0x2,
  b_iooffset = 0xed9a84000,
  b_resid = 0x4000,
  b_iodone = 0,
  b_blkno = 0x76cd420,
  b_offset = 0xed9a84000,
  b_bobufs = {
    tqe_next = 0xd6e265f0,
    tqe_prev = 0xc4dff2f4
  },
  b_left = 0x0,
  b_right = 0x0,
  b_vflags = 0x0,
  b_freelist = {
    tqe_next = 0xd6d16960,
    tqe_prev = 0xc06f75a8
  },
  b_qindex = 0x1,
  b_flags = 0xa084,
  b_xflags = 0x21,
  b_lock = {
    lk_interlock = 0xc06a8368,
    lk_flags = 0x40000,
    lk_sharecount = 0x0,
    lk_waitcount = 0x0,
    lk_exclusivecount = 0x1,
    lk_prio = 0x50,
    lk_wmesg = 0xc0655b94 "bufwait",
    lk_timo = 0x0,
    lk_lockholder = 0xfffffffe,
    lk_newlock = 0x0
  },
  b_bufsize = 0x4000,
  b_runningbufspace = 0x0,
  b_kvabase = 0xda0c5000 "",
  b_kvasize = 0x4000,
  b_lblkno = 0x76cd420,
  b_vp = 0xc4dff220,
  b_dirtyoff = 0x0,
  b_dirtyend = 0x0,
  b_rcred = 0x0,
  b_wcred = 0x0,
  b_saveaddr = 0xda0c5000,
  b_pager = {
    pg_reqpage = 0x0
  },
  b_cluster = {
    cluster_head = {
      tqh_first = 0xd6e257d8,
      tqh_last = 0xd6dbeb70
    },
    cluster_entry = {
      tqe_next = 0xd6e257d8,
      tqe_prev = 0xd6dbeb70
    }
  },
  b_pages = {0xc29d8868, 0xc29d91b0, 0xc29dacf8, 0xc29c1840, 0x0},
  b_npages = 0x4,
  b_dep = {
    lh_first = 0x0
  }
}
(kgdb) print *$buf->b_bufobj
$8 = {
  bo_mtx = 0xc4dff29c,
  bo_clean = {
    bv_hd = {
      tqh_first = 0xd6cda498,
      tqh_last = 0xd6e13e30
    },
    bv_root = 0xd6dcd908,
    bv_cnt = 0xf
  },
  bo_dirty = {
    bv_hd = {
      tqh_first = 0xd6d6cc00,
      tqh_last = 0xd6d2a380
    },
    bv_root = 0xd6e81bd8,
    bv_cnt = 0x27
  },
  bo_numoutput = 0x7,
  bo_flag = 0x1,
  bo_ops = 0xc0694b9c,
  bo_bsize = 0x200,
  bo_object = 0xc596f210,
  bo_synclist = {
    le_next = 0xc50623f0,
    le_prev = 0xc2d6b320
  },
  bo_private = 0xc47e0480,
  __bo_vnode = 0xc4dff220
}

Somewhat more interesting is this error message from just before the panic.
There are 200+ of these messages, but this one appears to be associated with
this buffer, since the offset and length match:

    g_vfs_done():da6s1e[WRITE(offset=63781224448, length=16384)]error = 16

Error 16 is EBUSY.  Unfortunately, the stack is messed up (apparently because
of a panic cascade), so I can't extract a backtrace for this one.
-- 
Frank Mayhar     frank@exit.com       http://www.exit.com/
Exit Consulting                       http://www.gpsclock.com/
                                      http://www.exit.com/blog/frank/