Date: Wed, 7 Nov 2007 08:46:35 +0100
From: Martijn Plak <martijn@plak.net>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS kmem_map too small.
Message-ID: <5ED261E6-A85A-4D7D-A01A-4688802FECB9@plak.net>
In-Reply-To: <20071106125552.GO5268@garage.freebsd.pl>
References: <20071005000046.GC92272@garage.freebsd.pl> <20071008121523.GM2327@garage.freebsd.pl> <20071105215035.GC26730@heff.fud.org.nz> <2e77fc10711051531k41e7224dq6aaedb35cad8d9f2@mail.gmail.com> <6214AB9C-9F9B-4B9D-8B05-0B3DF5F6C16D@SARENET.ES> <20071106100015.GB5268@garage.freebsd.pl> <9A147CEC-D5E6-4117-8C69-16E40DB45B22@SARENET.ES> <20071106125552.GO5268@garage.freebsd.pl>
My 7.0-BETA2 (+ updates) system is also crashing regularly with a
"kmem_map too small" panic.
Yesterday I applied vm_kern.c.2.patch and turned off all of my
sysctl tweaks.
At first, the system was stable for a day. During that day, I ran
make -j4 buildworld several times without trouble.
Then I started a network transfer: a usenet download over 4 TCP
connections, totalling about 10 Mbps, from a server at a 'ping
distance' of about 20 ms, using hellanzb.py. My guess is that this
sort of transfer puts some memory pressure on kernel space. Running
it in parallel with the buildworld crashed the system again.
I hope this helps the investigation. Let me know if I can run other
tests.
- Martijn
=== sudo cat /var/crash/info.2
Dump header from device /dev/da0
Architecture: i386
Architecture Version: 2
Dump Length: 370655232B (353 MB)
Blocksize: 512
Dumptime: Tue Nov 6 22:00:28 2007
Hostname: dupont.plak.net
Magic: FreeBSD Kernel Dump
Version String: FreeBSD 7.0-BETA2 #6: Mon Nov 5 14:25:33 UTC 2007
root@dupont.plak.net:/usr/obj/usr/src/sys/DUPONT
Panic String: kmem_malloc(90112): kmem_map too small: 295931904 total allocated
Dump Parity: 2925288312
Bounds: 2
Dump Status: good
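For context (my own arithmetic, not part of the dump): the panic fired
on a 90112-byte kmem_malloc() with 295931904 bytes already allocated
out of a vm.kmem_size of 335544320, i.e. the map was roughly 88% full.
Since the failing request is far smaller than the raw headroom, I would
guess fragmentation, not absolute exhaustion. A quick sketch:

```python
# Numbers taken from the panic string and the sysctl output below.
kmem_size = 335544320        # vm.kmem_size (bytes)
allocated = 295931904        # "total allocated" at panic time
request   = 90112            # size of the failing kmem_malloc()

used_pct = 100.0 * allocated / kmem_size
headroom = kmem_size - allocated

print(f"kmem_map {used_pct:.1f}% full, {headroom} bytes headroom")
# The request (90112 B) fits in the raw headroom, so the failure is
# likely fragmentation: no contiguous free range of that size was left.
assert request < headroom
```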
Some hardware info:
Intel P4 3GHz hyperthreading processor,
1GB RAM,
intel i915 chipset,
4 SATA disks on i915 (ICH6),
1 compact flash on IDE to boot the system.
Here is some information from the system, after reboot.
=== uname -a
FreeBSD dupont.plak.net 7.0-BETA2 FreeBSD 7.0-BETA2 #6: Mon Nov 5 14:25:33 UTC 2007 root@dupont.plak.net:/usr/obj/usr/src/sys/DUPONT i386
=== diff /usr/src/sys/i386/conf/GENERIC /usr/src/sys/i386/conf/DUPONT
21,22d20
< cpu I486_CPU
< cpu I586_CPU
24c22
< ident GENERIC
---
> ident DUPONT
=== zfs list
NAME USED AVAIL REFER MOUNTPOINT
raid 552G 325G 1.50K none
raid/data 547G 325G 471G /data
raid/data/store1 76.5G 325G 76.5G /data/store1
raid/data/store2 84.8M 325G 84.8M /data/store2
raid/sys 4.65G 325G 26.9K none
raid/sys/home 131M 325G 131M legacy
raid/sys/root 135M 325G 135M legacy
raid/sys/tmp 56.9K 325G 56.9K legacy
raid/sys/usr 2.92G 325G 2.92G legacy
raid/sys/var 1.47G 325G 1.47G legacy
raid/sys2 300M 325G 26.9K none
raid/sys2/root 178M 325G 178M legacy
raid/sys2/tmp 35.2K 325G 35.2K legacy
raid/sys2/usr 121M 325G 121M legacy
raid/sys2/var 291K 325G 291K legacy
=== zpool status
pool: raid
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
raid ONLINE 0 0 0
raidz1 ONLINE 0 0 0
ad4 ONLINE 0 0 0
ad5 ONLINE 0 0 0
ad6 ONLINE 0 0 0
ad7 ONLINE 0 0 0
errors: No known data errors
=== cat /boot/loader.conf
# Load ZFS and load root system from the RAID-Z array.
zfs_load="YES"
vfs.root.mountfrom="zfs:raid/sys/root"
# Tune ZFS and VM parameters.
#vfs.zfs.arc_max="64M"
#kern.maxvnodes="50000"
#vm.kmem_size_max="512M"
#vm.kmem_size="512M"
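For reference, the commented-out tunables above are the usual i386
workaround: raise vm.kmem_size/vm.kmem_size_max and cap vfs.zfs.arc_max
well below it, so the ARC cannot fill the kmem_map on its own. A
minimal sanity check of that relationship (values are the ones from my
loader.conf; the one-third margin is a rule of thumb, not an official
figure):

```python
# Tunables from the (currently commented-out) loader.conf lines above.
kmem_size = 512 * 1024 * 1024   # vm.kmem_size="512M"
arc_max   = 64 * 1024 * 1024    # vfs.zfs.arc_max="64M"

# Rule-of-thumb check: leave most of kmem_map for everything that is
# not the ARC (the exact safe ratio is a guess, not documented).
assert arc_max < kmem_size // 3, "ARC cap leaves too little kmem headroom"
print(f"arc_max is {100 * arc_max / kmem_size:.1f}% of kmem_size")
```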
=== sysctl vm | grep kmem
vm.kmem_size_scale: 3
vm.kmem_size_max: 335544320
vm.kmem_size_min: 0
vm.kmem_size: 335544320
=== sysctl vfs.zfs
vfs.zfs.arc_min: 16777216
vfs.zfs.arc_max: 251658240
vfs.zfs.mdcomp_disable: 0
vfs.zfs.prefetch_disable: 0
vfs.zfs.zio.taskq_threads: 0
vfs.zfs.recover: 0
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.debug: 0
=== sysctl kstat.zfs
kstat.zfs.misc.arcstats.hits: 602394
kstat.zfs.misc.arcstats.misses: 29556
kstat.zfs.misc.arcstats.demand_data_hits: 475248
kstat.zfs.misc.arcstats.demand_data_misses: 7156
kstat.zfs.misc.arcstats.demand_metadata_hits: 105827
kstat.zfs.misc.arcstats.demand_metadata_misses: 10413
kstat.zfs.misc.arcstats.prefetch_data_hits: 140
kstat.zfs.misc.arcstats.prefetch_data_misses: 4620
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 21179
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 7367
kstat.zfs.misc.arcstats.mru_hits: 242176
kstat.zfs.misc.arcstats.mru_ghost_hits: 3548
kstat.zfs.misc.arcstats.mfu_hits: 338953
kstat.zfs.misc.arcstats.mfu_ghost_hits: 2849
kstat.zfs.misc.arcstats.deleted: 40513
kstat.zfs.misc.arcstats.recycle_miss: 61805
kstat.zfs.misc.arcstats.mutex_miss: 66
kstat.zfs.misc.arcstats.evict_skip: 49115
kstat.zfs.misc.arcstats.hash_elements: 3528
kstat.zfs.misc.arcstats.hash_elements_max: 7423
kstat.zfs.misc.arcstats.hash_collisions: 15472
kstat.zfs.misc.arcstats.hash_chains: 290
kstat.zfs.misc.arcstats.hash_chain_max: 4
kstat.zfs.misc.arcstats.p: 124527736
kstat.zfs.misc.arcstats.c: 126306424
kstat.zfs.misc.arcstats.c_min: 16777216
kstat.zfs.misc.arcstats.c_max: 251658240
kstat.zfs.misc.arcstats.size: 126307328
=== sysctl -a | grep vnode
kern.maxvnodes: 52280
kern.minvnodes: 17426
vm.stats.vm.v_vnodepgsout: 60
vm.stats.vm.v_vnodepgsin: 3719
vm.stats.vm.v_vnodeout: 43
vm.stats.vm.v_vnodein: 3719
vfs.freevnodes: 17385
vfs.wantfreevnodes: 17426
vfs.numvnodes: 18986
debug.sizeof.vnode: 272
