From owner-freebsd-net@FreeBSD.ORG Wed Feb 16 22:36:55 2011
Message-ID: <4D5C5187.9040702@frasunek.com>
Date: Wed, 16 Feb 2011 23:36:55 +0100
From: Przemyslaw Frasunek <przemyslaw@frasunek.com>
To: freebsd-net@freebsd.org
Cc: Eugene Grosbein, Mike Tancsa
References: <4D3011DB.9050900@frasunek.com> <4D30458D.30007@sentex.net> <4D306421.1050501@rdtc.ru>
In-Reply-To: <4D306421.1050501@rdtc.ru>
Subject: Re: Netgraph/mpd5 stability issues

> On 14.01.2011 18:46, Mike Tancsa wrote:
> I also have very loaded mpd/PPPoE servers that panic all the time:
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/153255
> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/153671

I've just got yet another panic on the mpd5 box after about 30 days of
uptime.  This time it seems to be unrelated to Netgraph:

(kgdb) bt
#0  doadump () at pcpu.h:196
#1  0xc0836ac7 in boot (howto=260) at ../../../kern/kern_shutdown.c:418
#2  0xc0836d99 in panic (fmt=Variable "fmt" is not available.
) at ../../../kern/kern_shutdown.c:574
#3  0xc0b5ef1c in trap_fatal (frame=0xc524fa2c, eva=56)
    at ../../../i386/i386/trap.c:950
#4  0xc0b5f1a0 in trap_pfault (frame=0xc524fa2c, usermode=0, eva=56)
    at ../../../i386/i386/trap.c:863
#5  0xc0b5fb95 in trap (frame=0xc524fa2c) at ../../../i386/i386/trap.c:541
#6  0xc0b42e7b in calltrap () at ../../../i386/i386/exception.s:166
#7  0xc0a683d0 in softdep_disk_io_initiation (bp=0xd96921e0)
    at ../../../ufs/ffs/ffs_softdep.c:3785
#8  0xc0a6e9ef in ffs_geom_strategy (bo=0xc58d10c0, bp=0xd96921e0) at buf.h:436
#9  0xc08a8830 in bufwrite (bp=0xd96921e0) at buf.h:429
#10 0xc0a6e3b8 in ffs_bufwrite (bp=0xd96921e0)
    at ../../../ufs/ffs/ffs_vfsops.c:1893
#11 0xc08a3af8 in vfs_bio_awrite (bp=0xd96921e0) at buf.h:417
#12 0xc08ad598 in vop_stdfsync (ap=0xc524fcd4)
    at ../../../kern/vfs_default.c:466
#13 0xc07bd3dc in devfs_fsync (ap=0xc524fcd4)
    at ../../../fs/devfs/devfs_vnops.c:499
#14 0xc0b743a2 in VOP_FSYNC_APV (vop=0xc0cd0e00, a=0xc524fcd4)
    at vnode_if.c:1007
#15 0xc08bdeb8 in sched_sync () at vnode_if.h:538
#16 0xc080e9f9 in fork_exit (callout=0xc08bd7b0 <sched_sync>, arg=0x0,
    frame=0xc524fd38) at ../../../kern/kern_fork.c:811
#17 0xc0b42ef0 in fork_trampoline () at ../../../i386/i386/exception.s:271

(kgdb) frame 7
#7  0xc0a683d0 in softdep_disk_io_initiation (bp=0xd96921e0)
    at ../../../ufs/ffs/ffs_softdep.c:3785
3785            LIST_INSERT_AFTER(wk, &marker, wk_list);

(kgdb) print *wk
$4 = {wk_mp = 0x8, wk_list = {le_next = 0x30, le_prev = 0x3}, wk_type = 1,
  wk_state = 0}

(kgdb) print *bp
$5 = {b_bufobj = 0xc58d10c0, b_bcount = 16384, b_caller1 = 0x0,
  b_data = 0xde759000 "ưA\002", b_error = 0, b_iocmd = 2 '\002',
  b_ioflags = 2 '\002', b_iooffset = 16186277888, b_resid = 0, b_iodone = 0,
  b_blkno = 31613824, b_offset = 16186277888, b_bobufs = {
    tqe_next = 0xd96e4c5c, tqe_prev = 0xd9559d94}, b_left = 0x0,
  b_right = 0xd96e4c5c, b_vflags = 1, b_freelist = {tqe_next = 0xd955a2ec,
    tqe_prev = 0xd960ecf0}, b_qindex = 2, b_flags = 2684485668,
  b_xflags = 2 '\002', b_lock = {lk_object = {lo_name = 0xc0c114c5 "bufwait",
      lo_type = 0xc0c114c5 "bufwait", lo_flags = 70844416, lo_witness_data = {
        lod_list = {stqe_next = 0x0}, lod_witness = 0x0}},
    lk_interlock = 0xc0d3c030, lk_flags = 262144, lk_sharecount = 0,
    lk_waitcount = 0, lk_exclusivecount = 1, lk_prio = 80, lk_timo = 0,
    lk_lockholder = 0xfffffffe, lk_newlock = 0x0}, b_bufsize = 16384,
  b_runningbufspace = 16384, b_kvabase = 0xde759000 "ưA\002",
  b_kvasize = 16384, b_lblkno = 31613824, b_vp = 0xc58d1000, b_dirtyoff = 0,
  b_dirtyend = 0, b_rcred = 0x0, b_wcred = 0x0, b_saveaddr = 0xde759000,
  b_pager = {pg_reqpage = 0}, b_cluster = {cluster_head = {
      tqh_first = 0xd955d234, tqh_last = 0xd9509d5c}, cluster_entry = {
      tqe_next = 0xd955d234, tqe_prev = 0xd9509d5c}}, b_pages = {0xc3843bd8,
    0xc3842ab0, 0xc3843dd0, 0xc3843e18, 0x0 }, b_npages = 4,
  b_dep = {lh_first = 0xc672ca80}, b_fsprivate1 = 0x0, b_fsprivate2 = 0x0,
  b_fsprivate3 = 0x0, b_pin_count = 0}

(kgdb) print *bp->b_dep->lh_first
$17 = {wk_mp = 0x8, wk_list = {le_next = 0x30, le_prev = 0x3}, wk_type = 1,
  wk_state = 0}
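
For what it's worth, the trashed le_next pointer seems to line up with the
fault address in frame #4: LIST_INSERT_AFTER(wk, &marker, wk_list) stores
through wk->wk_list.le_next->wk_list.le_prev, i.e. through 0x30 plus the
offset of le_prev, which comes out to 0x38 = 56 on i386 given the field
order kgdb shows.  A quick userland sketch of that arithmetic (this is just
the stock macro from <sys/queue.h> and a struct mirroring the kgdb print,
not the kernel code itself):

/*
 * Sketch only: why le_next = 0x30 in that worklist entry turns into a
 * fault at eva = 56.  The struct mirrors the field order kgdb printed
 * for *wk; offsets assume i386 (4-byte pointers).
 */
#include <sys/queue.h>
#include <stddef.h>
#include <stdio.h>

struct worklist {
        void                    *wk_mp;         /* 0x8 in the dump */
        LIST_ENTRY(worklist)     wk_list;       /* le_next 0x30, le_prev 0x3 */
        unsigned int             wk_type:8,
                                 wk_state:24;
};

int
main(void)
{
        /*
         * LIST_INSERT_AFTER(wk, &marker, wk_list) expands roughly to:
         *
         *   if ((marker.wk_list.le_next = wk->wk_list.le_next) != NULL)
         *           wk->wk_list.le_next->wk_list.le_prev =
         *               &marker.wk_list.le_next;
         *   ...
         *
         * With wk->wk_list.le_next == 0x30, the second statement writes
         * through 0x30 + offsetof(struct worklist, wk_list.le_prev).
         */
        printf("store would hit 0x%zx\n",
            (size_t)0x30 + offsetof(struct worklist, wk_list.le_prev));
        /* prints 0x38 == 56 on i386, matching eva in trap_pfault() */
        return (0);
}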
(kgdb) frame 12
#12 0xc08ad598 in vop_stdfsync (ap=0xc524fcd4)
    at ../../../kern/vfs_default.c:466
466                             vfs_bio_awrite(bp);

(kgdb) list
461                         ("bp %p wrong b_bufobj %p should be %p",
462                         bp, bp->b_bufobj, &vp->v_bufobj));
463                     if ((bp->b_flags & B_DELWRI) == 0)
464                             panic("fsync: not dirty");
465                     if ((vp->v_object != NULL) && (bp->b_flags & B_CLUSTEROK)) {
466                             vfs_bio_awrite(bp);
467                     } else {
468                             bremfree(bp);
469                             bawrite(bp);
470                     }

(kgdb) print *bp->b_dep->lh_first
$24 = {wk_mp = 0x8, wk_list = {le_next = 0x30, le_prev = 0x3}, wk_type = 1,
  wk_state = 0}
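
So the syncer thread is just flushing a dirty buffer on the device vnode,
and ffs_geom_strategy() hands it to softdep_disk_io_initiation(), which
walks b_dep and apparently trips over the corrupt head entry on the very
first LIST_INSERT_AFTER(), before wk_type is even examined.  For anyone not
following that loop, here is a rough userland sketch of the marker-walk
pattern used at that line (not the actual ffs_softdep.c code; MARKER_TYPE
is a made-up placeholder):

/*
 * Rough sketch of walking a softdep-style work list with a marker, so
 * the current item can be removed while it is being processed.
 */
#include <sys/queue.h>
#include <stddef.h>

struct worklist {
        void                    *wk_mp;
        LIST_ENTRY(worklist)     wk_list;
        unsigned int             wk_type:8,
                                 wk_state:24;
};
LIST_HEAD(workhead, worklist);

#define MARKER_TYPE     0xff            /* not a real D_* value */

static void
walk_deps(struct workhead *head)
{
        struct worklist marker, *wk, *next;

        marker.wk_type = MARKER_TYPE;
        for (wk = LIST_FIRST(head); wk != NULL; wk = next) {
                /*
                 * This is the step that blows up in the dump: the insert
                 * writes through wk->wk_list.le_next (0x30 here) before
                 * wk->wk_type is ever looked at.
                 */
                LIST_INSERT_AFTER(wk, &marker, wk_list);

                /* ... per-type processing of wk would go here ... */

                next = LIST_NEXT(&marker, wk_list);
                LIST_REMOVE(&marker, wk_list);
        }
}

int
main(void)
{
        struct workhead head = LIST_HEAD_INITIALIZER(head);

        walk_deps(&head);               /* empty list: nothing to walk */
        return (0);
}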