Date:      Sun, 19 Oct 2014 10:30:32 -0500
From:      "James R. Van Artsdalen" <james-freebsd-fs2@jrv.org>
Cc:        freebsd-fs@freebsd.org, Xin Li <delphij@delphij.net>, d@delphij.net, current@freebsd.org
Subject:   Re: zfs recv hangs in kmem arena
Message-ID:  <5443D918.9090307@jrv.org>
In-Reply-To: <54409CFE.8070905@jrv.org>
References:  <54250AE9.6070609@jrv.org> <543FAB3C.4090503@jrv.org> <543FEE6F.5050007@delphij.net> <54409050.4070401@jrv.org> <544096B3.20306@delphij.net> <54409CFE.8070905@jrv.org>

Removing kern.maxfiles from loader.conf did not help: zfs recv still
hangs in "kmem arena".

I tried using a memstick image of -CURRENT built by the release/
process, and this also hangs in "kmem arena".

An uninvolved server of mine hung Friday night in state "kmem arena"
during periodic's "zpool history".  After a reboot it did not hang
Saturday night.
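
For anyone reproducing this: the hung process sits in disk wait with
MWCHAN "kmem arena" (ps truncates it to "kmem are").  A quick check
with the stock tools, PID 866 being the example from the quoted output
below:

    ps -axl | grep "kmem are"     # find processes sleeping on the arena
    procstat -kk 866              # dump the kernel stack of one of them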

On 10/16/2014 11:37 PM, James R. Van Artsdalen wrote:
> On 10/16/2014 11:10 PM, Xin Li wrote:
>> On 10/16/14 8:43 PM, James R. Van Artsdalen wrote:
>>> On 10/16/2014 11:12 AM, Xin Li wrote:
>>>>> On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
>>>>>> FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2
>>>>>> #2 r272070M: Wed Sep 24 17:36:56 CDT 2014
>>>>>> james@BLACKIE.housenet.jrv:/usr/obj/usr/src/sys/GENERIC
>>>>>> amd64
>>>>>>
>>>>>> With current STABLE10 I am unable to replicate a ZFS pool
>>>>>> using zfs send/recv without zfs hanging in state "kmem
>>>>>> arena", within the first 4TB or so (of a 23TB pool).
>>>> What does procstat -kk 1176 (or the PID of your 'zfs' process
>>>> that is stuck in that state) say?
>>>>
>>>> Cheers,
>>>>
>>> SUPERTEX:/root# ps -lp 866
>>> UID PID PPID CPU PRI NI   VSZ   RSS MWCHAN   STAT TT      TIME COMMAND
>>>   0 866  863   0  52  0 66800 29716 kmem are D+    1  57:40.82 zfs recv -duvF BIGTOX
>>> SUPERTEX:/root# procstat -kk 866
>>>   PID    TID COMM             TDNAME           KSTACK
>>>   866 101573 zfs              -
>>>     mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d vmem_xalloc+0x568
>>>     vmem_alloc+0x3d kmem_malloc+0x33 keg_alloc_slab+0xcd
>>>     keg_fetch_slab+0x151 zone_fetch_slab+0x7e zone_import+0x40
>>>     uma_zalloc_arg+0x34e arc_get_data_buf+0x31a arc_buf_alloc+0xaa
>>>     dmu_buf_will_fill+0x169 dmu_write+0xfc dmu_recv_stream+0xd40
>>>     zfs_ioc_recv+0x94e zfsdev_ioctl+0x5ca
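
Reading that stack bottom-up: dmu_recv_stream/dmu_write is the receive
writing data, arc_get_data_buf is the ARC allocating a buffer for it,
and UMA then calls down through kmem_malloc into vmem_xalloc, which
sleeps in _cv_wait on the "kmem arena" because no kernel virtual
address space is available.  While a process is wedged like this, the
arena and the ARC can be watched with stock sysctls (names as on 10.x;
a sketch, not output from this thread):

    sysctl vm.kmem_size vm.kmem_map_size vm.kmem_map_free
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max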
>> Do you have any special tuning in your /boot/loader.conf?
>>
>> Cheers,
>>
> Below.  I had forgotten some of this was there.
>
> After sending the previous message I ran kgdb to see if I could get a
> backtrace with function args.  I didn't see how to do it for this
> process, but during all this the process unblocked and started running
> again.
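
If I need argument-level backtraces again: FreeBSD's kgdb can attach to
the live kernel and select a thread by the TID that procstat printed.
A sketch, assuming kernel debug symbols are installed:

    kgdb /boot/kernel/kernel /dev/mem
    (kgdb) tid 101573     # TID from "procstat -kk 866" above
    (kgdb) bt full        # backtrace including locals/arguments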
>
> The process blocked again in kmem arena after a few minutes.
>
>
> SUPERTEX:/root# cat /boot/loader.conf
> zfs_load="YES"           # ZFS
> vfs.root.mountfrom="zfs:SUPERTEX/UNIX"  # Specify root partition in a
>                                         # way the kernel understands
> kern.maxfiles="32K"        # Set the sys. wide open files limit
> kern.ktrace.request_pool="512"
> #vfs.zfs.debug=1
> vfs.zfs.check_hostid=0
>
> loader_logo="beastie"       # Desired logo: fbsdbw, beastiebw, beastie, none
> boot_verbose="YES"          # -v: Causes extra debugging information to be printed
> geom_mirror_load="YES"        # RAID1 disk driver (see gmirror(8))
> geom_label_load="YES"        # File system labels (see glabel(8))
> ahci_load="YES"
> siis_load="YES"
> mvs_load="YES"
> coretemp_load="YES"        # Intel Core CPU temperature monitor
> #console="comconsole"
> kern.msgbufsize="131072"    # Set size of kernel message buffer
>
> kern.geom.label.gpt.enable=0
> kern.geom.label.gptid.enable=0
> kern.geom.label.disk_ident.enable=0
> SUPERTEX:/root#
>
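
Nothing in that file caps the ARC, so it is free to grow until
allocations fight over the kmem arena.  As an experiment (my
suggestion; the values are hypothetical, not tuned for either machine)
one could add to loader.conf:

    vfs.zfs.arc_max="8G"    # hypothetical: cap ARC growth
    #vm.kmem_size="64G"     # or enlarge the arena instead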



