From owner-freebsd-bugs@FreeBSD.ORG Fri Feb 8 06:30:01 2013
Return-Path:
Delivered-To: freebsd-bugs@smarthost.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115])
 by hub.freebsd.org (Postfix) with ESMTP id 3DEAA1F6
 for ; Fri, 8 Feb 2013 06:30:01 +0000 (UTC)
 (envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87])
 by mx1.freebsd.org (Postfix) with ESMTP id 310A47C1
 for ; Fri, 8 Feb 2013 06:30:01 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1])
 by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r186U1Ff006103
 for ; Fri, 8 Feb 2013 06:30:01 GMT
 (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost)
 by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r186U1ec006100;
 Fri, 8 Feb 2013 06:30:01 GMT (envelope-from gnats)
Date: Fri, 8 Feb 2013 06:30:01 GMT
Message-Id: <201302080630.r186U1ec006100@freefall.freebsd.org>
To: freebsd-bugs@FreeBSD.org
Cc:
From: Andriy Gapon
Subject: Re: kern/175950: Possible deadlock in zfs after long uptime
X-BeenThere: freebsd-bugs@freebsd.org
X-Mailman-Version: 2.1.14
Precedence: list
Reply-To: Andriy Gapon
List-Id: Bug reports
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 08 Feb 2013 06:30:01 -0000

The following reply was made to PR kern/175950; it has been noted by GNATS.

From: Andriy Gapon
To: bug-followup@FreeBSD.org, pascal.guitierrez@gmail.com
Cc:
Subject: Re: kern/175950: Possible deadlock in zfs after long uptime
Date: Fri, 08 Feb 2013 08:26:03 +0200

 Thank you for the report.
 You seem to have run into a problem that is triggered by either low available
 physical memory or low KVA.
 r242859 on the stable/8 branch may reduce the chances of hitting the problem.
 Unfortunately, there is no complete resolution at the moment.

 If you are interested, some technical details can be found here:
 http://article.gmane.org/gmane.os.freebsd.stable/84981

 Could you please also provide the output of the following sysctls (for
 completeness' sake):
 vm.stats
 vm.kmem_size
 vm.kmem_map_size
 vm.kmem_map_free

 P.S. The bad thread in your report:

 1963 100308 nfsd   nfsd: service
   mi_switch+0x176 sleepq_wait+0x42 _sleep+0x317 arc_lowmem+0x77
   kmem_malloc+0xc1 uma_large_malloc+0x4a malloc+0xd7 arc_get_data_buf+0xb5
   arc_read_nolock+0x1ec arc_read+0x93 dbuf_read+0x452 dmu_tx_check_ioerr+0x9a
   dmu_tx_count_write+0x29c dmu_tx_hold_write+0x4a zfs_freebsd_write+0x372
   VOP_WRITE_APV+0xb2 nfsrv_write+0x969 nfssvc_program+0x1a6

 --
 Andriy Gapon
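
 For reference, the three kmem values requested above can also be read
 programmatically with sysctlbyname(3); the sketch below assumes they are
 exported as unsigned long (as on amd64/i386 at the time), and leaves the
 vm.stats subtree to the sysctl(8) utility, which is better suited to dumping
 a whole tree of OIDs:

    /*
     * Minimal sketch: print the kmem-related sysctls discussed in this PR
     * followup.  vm.stats is a subtree, so it is not handled here; use
     * "sysctl vm.stats" from the shell for that.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            const char *oids[] = {
                    "vm.kmem_size",
                    "vm.kmem_map_size",
                    "vm.kmem_map_free",
            };
            for (size_t i = 0; i < sizeof(oids) / sizeof(oids[0]); i++) {
                    unsigned long val;          /* assumed u_long-sized OIDs */
                    size_t len = sizeof(val);

                    if (sysctlbyname(oids[i], &val, &len, NULL, 0) == -1) {
                            perror(oids[i]);
                            continue;
                    }
                    printf("%s: %lu\n", oids[i], val);
            }
            return (0);
    }

 Running "sysctl vm.kmem_size vm.kmem_map_size vm.kmem_map_free" on the
 affected host gives the same information without compiling anything.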