Date: Mon, 25 Jun 2001 14:51:24 -0500
From: Alfred Perlstein <bright@sneakerz.org>
To: freebsd-stable@freebsd.org
Cc: dillon@freebsd.org, tegge@freebsd.org
Subject: -stable weird panics
Message-ID: <20010625145124.D64836@sneakerz.org>
I'm trying to get some people at NetZero using FreeBSD; actually,
they're trying to use FreeBSD, and I'm trying to get it to stop
crashing on them (see my sys_pipe.c commits).

Anyhow, they keep dying in getnewvnode() like so:
#5 0xc031d9bf in trap (frame={tf_fs = -1070858216, tf_es = -1007878128,
tf_ds = -1069809648, tf_edi = 0, tf_esi = -1002983424,
tf_ebp = -62104564, tf_isp = -62104632, tf_ebx = 0,
tf_edx = -1744880709, tf_ecx = 42, tf_eax = 0, tf_trapno = 12,
tf_err = 2, tf_eip = -1070480145, tf_cs = 8, tf_eflags = 66050,
tf_esp = -1003572608, tf_ss = -1071798700}) at ../../i386/i386/trap.c:443
#6 0xc031c4ef in generic_bzero ()
#7 0xc02ae6af in ffs_vget (mp=0xc425ee00, ino=135970, vpp=0xfc4c5ca0)
at ../../ufs/ffs/ffs_vfsops.c:1069
#8 0xc02a376f in ffs_valloc (pvp=0xfb9dcec0, mode=33188, cred=0xc47b1d00,
vpp=0xfc4c5ca0) at ../../ufs/ffs/ffs_alloc.c:600
#9 0xc02b628f in ufs_makeinode (mode=33188, dvp=0xfb9dcec0, vpp=0xfc4c5edc,
cnp=0xfc4c5ef0) at ../../ufs/ufs/ufs_vnops.c:2088
#10 0xc02b3c00 in ufs_create (ap=0xfc4c5dfc) at ../../ufs/ufs/ufs_vnops.c:190
#11 0xc02b658d in ufs_vnoperate (ap=0xfc4c5dfc)
at ../../ufs/ufs/ufs_vnops.c:2373
#12 0xc01e20ac in vn_open (ndp=0xfc4c5ec8, fmode=2562, cmode=420)
at vnode_if.h:106
#13 0xc01de2f4 in open (p=0xfc49b2a0, uap=0xfc4c5f80)
at ../../kern/vfs_syscalls.c:995
(kgdb) up
#7 0xc02ae6af in ffs_vget (mp=0xc425ee00, ino=135970, vpp=0xfc4c5ca0)
at ../../ufs/ffs/ffs_vfsops.c:1069
1069 error = getnewvnode(VT_UFS, mp, ffs_vnodeop_p, &vp);
(kgdb) list
1064 */
1065 MALLOC(ip, struct inode *, sizeof(struct inode),
1066 ump->um_malloctype, M_WAITOK);
1067
1068 /* Allocate a new vnode/inode. */
1069 error = getnewvnode(VT_UFS, mp, ffs_vnodeop_p, &vp);
1070 if (error) {
1071 if (ffs_inode_hash_lock < 0)
1072 wakeup(&ffs_inode_hash_lock);
1073 ffs_inode_hash_lock = 0;
It really looks like the zalloc() call in getnewvnode() is returning
NULL, and instead of bailing out, getnewvnode() tries to zero a NULL
pointer. I'm weirded out that zalloc() is failing, because I thought
we had tuned the kernel for quite a large memory pool:
options VM_KMEM_SIZE="(400*1024*1024)"
options VM_KMEM_SIZE_MAX="(400*1024*1024)"
However:
% vmstat -m -M /var/qmail/crash/vmcore.6
Memory Totals: In Use Free Requests
17408K 137K 7909365
% vmstat -z -M /var/qmail/crash/vmcore.6
ZONE used total mem-use
PIPE 55 408 8/63K
SWAPMETA 0 0 0/0K
tcpcb 303 371 160/197K
unpcb 4 128 0/8K
ripcb 0 21 0/3K
tcpcb 0 0 0/0K
udpcb 41 84 7/15K
socket 354 441 66/82K
KNOTE 1 128 0/8K
NFSNODE 99464 99480 31082/31087K
NFSMOUNT 26 35 13/18K
VNODE 105046 105046 19696/19696K
NAMEI 2 48 2/48K
VMSPACE 97 320 18/60K
PROC 101 294 41/119K
DP fakepg 0 0 0/0K
PV ENTRY 33850 524263 925/14335K
MAP ENTRY 1057 2593 49/121K
KMAP ENTRY 824 1148 38/53K
MAP 7 10 0/1K
VM OBJECT 66326 66406 6218/6225K
------------------------------------------
TOTAL 58330/72146K
So why is zalloc() failing when it looks like only about 90MB of
kernel memory is allocated?
Anyhow, I've added a check in getnewvnode() to return ENOMEM if
zalloc() fails; my concern is that other parts of the kernel will
blow up immediately after that error is caught, because it looks like
the majority of places don't expect zalloc() to fail.
Any suggestions would be helpful, and any requests for more
information will happily be attempted.
thanks,
-Alfred
Here's another core, this time in getnewvnode() called from NFS:
(kgdb) bt
#0 dumpsys () at ../../kern/kern_shutdown.c:469
#1 0xc01af6ef in boot (howto=256) at ../../kern/kern_shutdown.c:309
#2 0xc01afab9 in panic (fmt=0xc0386b6f "page fault")
at ../../kern/kern_shutdown.c:556
#3 0xc031e224 in trap_fatal (frame=0xf1b76b90, eva=0)
at ../../i386/i386/trap.c:951
#4 0xc031de95 in trap_pfault (frame=0xf1b76b90, usermode=0, eva=0)
at ../../i386/i386/trap.c:844
#5 0xc031d9bf in trap (frame={tf_fs = -1070858216, tf_es = -1008140272,
tf_ds = -1069809648, tf_edi = 0, tf_esi = -18221184,
tf_ebp = -239637504, tf_isp = -239637572, tf_ebx = 0,
tf_edx = -1744880709, tf_ecx = 42, tf_eax = 0, tf_trapno = 12,
tf_err = 2, tf_eip = -1070480145, tf_cs = 8, tf_eflags = 66050,
tf_esp = -1005287872, tf_ss = -1071798700}) at ../../i386/i386/trap.c:443
#6 0xc031c4ef in generic_bzero ()
#7 0xc022a09d in nfs_nget (mntp=0xc4225600, fhp=0xc3a32858, fhsize=32,
npp=0xf1b76c9c) at ../../nfs/nfs_node.c:145
#8 0xc024faa3 in nfs_lookup (ap=0xf1b76d68) at ../../nfs/nfs_vnops.c:942
#9 0xc01d99c5 in lookup (ndp=0xf1b76ec8) at vnode_if.h:52
#10 0xc01d9503 in namei (ndp=0xf1b76ec8) at ../../kern/vfs_lookup.c:153
#11 0xc01e2173 in vn_open (ndp=0xf1b76ec8, fmode=1, cmode=384)
at ../../kern/vfs_vnops.c:137
#12 0xc01de2f4 in open (p=0xf1b19440, uap=0xf1b76f80)
at ../../kern/vfs_syscalls.c:995
(kgdb) up
#7 0xc022a09d in nfs_nget (mntp=0xc4225600, fhp=0xc3a32858, fhsize=32,
npp=0xf1b76c9c) at ../../nfs/nfs_node.c:145
145 error = getnewvnode(VT_NFS, mntp, nfsv2_vnodeop_p, &nvp);
(kgdb) list
140 */
141 np = zalloc(nfsnode_zone);
142 if (np == NULL)
143 error = ENFILE;
144 else
145 error = getnewvnode(VT_NFS, mntp, nfsv2_vnodeop_p, &nvp);
146 if (error) {
147 if (nfs_node_hash_lock < 0)
148 wakeup(&nfs_node_hash_lock);
149 nfs_node_hash_lock = 0;
(I thought that zalloc() was failing here, but it doesn't seem to be.)
% vmstat -m -M /var/qmail/crash/vmcore.5
Memory Totals: In Use Free Requests
51586K 1707K 39661043
% vmstat -z -M /var/qmail/crash/vmcore.5
ZONE used total mem-use
PIPE 53 918 8/143K
SWAPMETA 0 0 0/0K
tcpcb 298 399 158/211K
unpcb 4 128 0/8K
ripcb 0 21 0/3K
tcpcb 0 0 0/0K
udpcb 35 84 6/15K
socket 342 462 64/86K
KNOTE 0 128 0/8K
NFSNODE 446814 446832 139629/139635K
NFSMOUNT 26 35 13/18K
VNODE 450712 450712 84508/84508K
NAMEI 1 152 1/152K
VMSPACE 93 576 17/108K
PROC 97 588 39/238K
DP fakepg 0 0 0/0K
PV ENTRY 92980 524263 2542/14335K
MAP ENTRY 1028 5228 48/245K
KMAP ENTRY 1401 2253 65/105K
MAP 7 10 0/1K
VM OBJECT 281413 281530 26382/26393K
------------------------------------------
TOTAL 253486/266219K
