Date:      Tue, 31 Jul 2012 15:27:57 +0200 (CEST)
From:      Kasper Sacharias Eenberg <kasper@cabo.dk>
To:        freebsd-fs@freebsd.org
Subject:   Kernel panic 'vm_map_entry_create' when copying to ZFS volume shared over NFS.
Message-ID:  <29509988.197565.1343741277744.JavaMail.root@zmbox01>
In-Reply-To: <20759432.197480.1343738543702.JavaMail.root@zmbox01>

Problem:
When copying files from the VMware vSphere Client to a ZFS dataset on FreeBSD,
shared over NFS, the kernel panics in 'vm_map_entry_create'.
A screenshot of the KVM viewer showing the panic can be found here: http://i.imgur.com/T1AnZ.jpg

I can't really go into detail about how the copying is done, but it shouldn't matter.
The problem is reproducible; it has happened three times so far.
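As far as I can tell, any sustained large write over the NFS mount should exercise
the same path, so something like this from a client ought to be a rough equivalent
(placeholder paths, not our actual workflow):

    # on an NFS client of the share; ~20 GB of sequential writes
    mount -t nfs <server>:/pool-ssd/data001 /mnt/data001
    dd if=/dev/zero of=/mnt/data001/testfile bs=1m count=20000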

I'm about to test with an even lower vfs.zfs.arc_max, since the panic hits at
roughly 14-15 GB of arc_size. I have also removed the log SSD, which is what the
device reset warning in the screenshot refers to, and still hit the same panic.
Having to tune arc_max by hand just to get a working system seems quite silly
(assuming it even works).
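Since the panic seems to hit around 14-15 GB of arc_size, I keep an eye on the ARC
while the copy runs with a trivial loop:

    # ARC size in bytes, polled every 10 seconds
    while true; do
        sysctl -n kstat.zfs.misc.arcstats.size
        sleep 10
    done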

The pool consists of 7 OCZ Vertex 4 SSDs in a single raidz vdev. The log device is the same model.
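The layout corresponds to roughly the following (device names here are placeholders,
not the real ones):

    # seven-disk raidz plus a separate log device of the same SSD model
    zpool create pool-ssd raidz da0 da1 da2 da3 da4 da5 da6 log da7
    zpool status pool-ssd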

Sorry about the length of the mail; I've added everything I thought might help.
The system logs contain nothing relevant.
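If a backtrace would be useful, I can enable crash dumps before the next attempt;
as far as I understand, the standard procedure is:

    # /etc/rc.conf: dump kernel memory to swap on panic,
    # savecore(8) then writes it to /var/crash at the next boot
    dumpdev="AUTO"
    dumpdir="/var/crash"

    # afterwards, pull a backtrace out of the dump
    kgdb /boot/kernel/kernel /var/crash/vmcore.0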

Does anyone have any input?



Software:
FreeBSD 9.0-RELEASE, with the Intel ixgbe-2.4.4 driver module added, loaded, and in use.


ZFS dataset information:
NAME              PROPERTY              VALUE
pool-ssd/data001  type                  filesystem
pool-ssd/data001  creation              Tue Jun 19 10:04 2012
pool-ssd/data001  used                  99.9G
pool-ssd/data001  available             2.63T
pool-ssd/data001  referenced            35.8G
pool-ssd/data001  compressratio         1.54x
pool-ssd/data001  mounted               yes
pool-ssd/data001  quota                 none
pool-ssd/data001  reservation           none
pool-ssd/data001  recordsize            8K
pool-ssd/data001  mountpoint            /pool-ssd/data001
pool-ssd/data001  sharenfs              -mapall=root -network <IP> -mask 255.255.255.0
pool-ssd/data001  checksum              on
pool-ssd/data001  compression           lzjb
pool-ssd/data001  atime                 off
pool-ssd/data001  devices               on
pool-ssd/data001  exec                  on
pool-ssd/data001  setuid                on
pool-ssd/data001  readonly              off
pool-ssd/data001  jailed                off
pool-ssd/data001  snapdir               hidden
pool-ssd/data001  aclmode               discard
pool-ssd/data001  aclinherit            restricted
pool-ssd/data001  canmount              on
pool-ssd/data001  xattr                 off
pool-ssd/data001  copies                1
pool-ssd/data001  version               5
pool-ssd/data001  utf8only              off
pool-ssd/data001  normalization         none
pool-ssd/data001  casesensitivity       sensitive
pool-ssd/data001  vscan                 off
pool-ssd/data001  nbmand                off
pool-ssd/data001  sharesmb              off
pool-ssd/data001  refquota              none
pool-ssd/data001  refreservation        none
pool-ssd/data001  primarycache          all
pool-ssd/data001  secondarycache        all
pool-ssd/data001  usedbysnapshots       64.1G
pool-ssd/data001  usedbydataset         35.8G
pool-ssd/data001  usedbychildren        0
pool-ssd/data001  usedbyrefreservation  0
pool-ssd/data001  logbias               latency
pool-ssd/data001  dedup                 off
pool-ssd/data001  mlslabel              
pool-ssd/data001  sync                  standard
pool-ssd/data001  refcompressratio      1.25x
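(The table above is trimmed zfs get all output. The NFS export is done through the
sharenfs property rather than /etc/exports; it was set along these lines, with our
real network in place of the placeholder:)

    zfs set sharenfs="-mapall=root -network <IP> -mask 255.255.255.0" pool-ssd/data001
    zfs get sharenfs pool-ssd/data001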


A second before the crash, with vfs.zfs.arc_max still at its default:
------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Jul 31 13:58:34 2012
------------------------------------------------------------------------
System Memory:

0.13%   30.15 MiB Active,      0.02%    5.82 MiB Inact
67.53%  15.69 GiB Wired,       0.00%  520.00 KiB Cache
32.31%   7.51 GiB Free,        0.00%  404.00 KiB Gap

Real Installed:                 24.00 GiB
Real Available:         99.88%  23.97 GiB
Real Managed:           96.94%  23.24 GiB

Logical Total:                  24.00 GiB
Logical Used:           68.69%  16.49 GiB
Logical Free:           31.31%   7.51 GiB

Kernel Memory:                  13.26 GiB
Data:                   99.85%  13.24 GiB
Text:                    0.15%  20.09 MiB

Kernel Memory Map:              20.24 GiB
Size:                   65.31%  13.22 GiB
Free:                   34.69%   7.02 GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Memory Throttle Count:          0

ARC Misc:
Deleted:                        15
Recycle Misses:                 0
Mutex Misses:                   0
Evict Skips:                    0

ARC Size:                       62.93%  13.99 GiB
Target Size: (Adaptive)        100.00%  22.24 GiB
Min Size (Hard Limit):          12.50%   2.78 GiB
Max Size (High Water):          8:1     22.24 GiB

ARC Size Breakdown:
Recently Used Cache Size:       58.97%  13.11 GiB
Frequently Used Cache Size:     41.03%   9.12 GiB

ARC Hash Breakdown:
Elements Max:                   1.70m
Elements Current:      100.00%  1.70m
Collisions:                     1.20m
Chain Max:                      14
Chains:                         439.54k
------------------------------------------------------------------------



A second before the crash, after limiting vfs.zfs.arc_max to 15G:
------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Jul 31 14:40:26 2012
------------------------------------------------------------------------

System Memory:

0.12%   28.21 MiB Active,      0.02%    5.27 MiB Inact
69.62%  16.18 GiB Wired,       0.00%   40.00 KiB Cache
30.24%   7.03 GiB Free,        0.00%  444.00 KiB Gap

Real Installed:                 24.00 GiB
Real Available:         99.88%  23.97 GiB
Real Managed:           96.94%  23.24 GiB

Logical Total:                  24.00 GiB
Logical Used:           70.70%  16.97 GiB
Logical Free:           29.30%   7.03 GiB

Kernel Memory:                  13.70 GiB
Data:                   99.86%  13.69 GiB
Text:                    0.14%  20.09 MiB

Kernel Memory Map:              20.19 GiB
Size:                   67.65%  13.66 GiB
Free:                   32.35%   6.53 GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Memory Throttle Count:          0

ARC Misc:
Deleted:                        15
Recycle Misses:                 0
Mutex Misses:                   0
Evict Skips:                    0

ARC Size:                       96.34%  14.45 GiB
Target Size: (Adaptive)        100.00%  15.00 GiB
Min Size (Hard Limit):          12.50%   1.88 GiB
Max Size (High Water):          8:1     15.00 GiB

ARC Size Breakdown:
Recently Used Cache Size:       89.54%  13.43 GiB
Frequently Used Cache Size:     10.46%   1.57 GiB

ARC Hash Breakdown:
Elements Max:                   1.74m
Elements Current:      100.00%  1.74m
Collisions:                     1.24m
Chain Max:                      14
Chains:                         443.12k

------------------------------------------------------------------------
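(Both snapshots follow the sysutils/zfs-stats report layout; the full report and the
raw counters behind it can be pulled with:)

    # full report; the sections above are the System Memory and ARC parts of this
    zfs-stats -a

    # raw ARC counters, in case the formatted report rounds something away
    sysctl kstat.zfs.misc.arcstats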





/boot/loader.conf 
---------------------
ahci_load="yes"
zfs_load="YES"
if_vlan_load="YES"
vfs.root.mountfrom="zfs:zroot"
ixgbe_load="YES"

boot_multicons="YES"
boot_serial="YES"
comconsole_speed="115200"
console="comconsole,vidconsole"

hw.igb.rxd="4096"
hw.igb.txd="4096"
kern.ipc.nmbclusters="262144"
kern.ipc.nmbjumbop="262144"

# Disable ZFS prefetching
# http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
# Increases overall speed of ZFS, but when disk flushing/writes occur,
# system is less responsive (due to extreme disk I/O).
# NOTE: 8.0-RC1 disables this by default on systems <= 4GB RAM anyway
vfs.zfs.prefetch_disable="1"

# Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
# on 2010/05/24.
# http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
#vfs.zfs.zio.use_uma="0"

# Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
# should increase throughput and decrease the "bursty" stalls that
# happen during immense I/O with ZFS.
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
vfs.zfs.txg.timeout="5"

# Below was added to test whether limiting arc memory worked.
vfs.zfs.arc_max="15G"
---------------------
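
(After a reboot, the tunable can be sanity-checked against the live ARC size; both
values are in bytes, so 15G should read back as 16106127360:)

    sysctl vfs.zfs.arc_max
    sysctl -n kstat.zfs.misc.arcstats.size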



Kind regards / Med venlig hilsen
Kasper Sacharias Eenberg
Cabo A/S
Klosterport 4a, 4. sal
8000 Aarhus C



