From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Alexander Leidinger
Cc: Kirill Ponomarev, freebsd-current@freebsd.org
Date: Mon, 28 May 2018 11:10:46 +0300
Subject: Re: Deadlocks / hangs in ZFS
Message-ID: <20180528081046.GL1926@zxy.spb.ru>
In-Reply-To: <20180528090201.Horde._E4JZcuEaZHfj_BNzWjci2O@webmail.leidinger.net>
List-Id: Discussions about the use of FreeBSD-current

On Mon, May 28, 2018 at 09:02:01AM
+0200, Alexander Leidinger wrote:

> Quoting Slawa Olhovchenkov (from Mon, 28 May 2018 01:06:12 +0300):
>
> > On Sun, May 27, 2018 at 09:41:59PM +0200, Kirill Ponomarev wrote:
> >
> >> On 05/22, Slawa Olhovchenkov wrote:
> >> > > It has been a while since I tried Karl's patch the last time, and I
> >> > > stopped because it didn't apply to -current anymore at some point.
> >> > > Will what is provided right now in the patch work on -current?
> >> >
> >> > I am mean yes, after s/vm_cnt.v_free_count/vm_free_count()/g
> >> > I am don't know how to have two distinct patch (for stable and
> >> > current) in one review.
> >>
> >> I'm experiencing these issues sporadically as well, would you mind
> >> to publish this patch for fresh current?
> >
> > Week ago I am adopt and publish patch to fresh current and stable, is
> > adopt need again?
>
> I applied the patch in the review yesterday to rev 333966, it applied
> OK (with some fuzz). I will try to reproduce my issue with the patch.
>
> Some thoughts I had after looking a little bit at the output of top...
> half of the RAM of my machine is in use, the other half is listed as
> free. Swap gets used while there is plenty of free RAM. I have NUMA in
> my kernel (it's 2 socket Xeon system). I don't see any NUMA specific
> code in the diff (and I don't expect something there), but could it be
> that some NUMA related behavior comes into play here too? Does it make
> sense to try without NUMA in the kernel?

Good question. NUMA support in FreeBSD is still very new, so nobody
understands its behavior well yet. On Linux, a similar effect is known:
exhausting all memory in one NUMA domain can cause a memory deficit
(swapping, allocation failures, etc.) even while plenty of free memory
remains in the other NUMA domain. Yes, please try without NUMA; the
result may also be interesting for the NUMA developers.
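[Editorial sketch of the substitution mentioned above: on -current the
vm_cnt.v_free_count field was replaced by the vm_free_count() accessor,
so a stable patch can be rewritten mechanically. The patch file names
below are invented for illustration.]

```shell
# Rewrite a patch written against stable so it applies to -current:
# the hypothetical file names zfs-arc-stable.patch / zfs-arc-current.patch
# are placeholders.
sed 's/vm_cnt\.v_free_count/vm_free_count()/g' \
    zfs-arc-stable.patch > zfs-arc-current.patch

# The same substitution demonstrated on a single sample line:
printf 'if (vm_cnt.v_free_count < needed)\n' |
    sed 's/vm_cnt\.v_free_count/vm_free_count()/g'
# prints: if (vm_free_count() < needed)
```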
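[Editorial sketch of the Linux per-domain imbalance described above: on
Linux, per-node free memory is reported in
/sys/devices/system/node/nodeN/meminfo. The sample data here is
invented so the snippet runs anywhere; on a real Linux box you would
grep the node meminfo files instead.]

```shell
# Invented sample in the format of /sys/devices/system/node/node*/meminfo.
# Node 0 is nearly exhausted while node 1 still has plenty free -- the
# kind of imbalance that can trigger swapping despite lots of "free" RAM.
sample='Node 0 MemFree:   102400 kB
Node 1 MemFree: 16384000 kB'

# On a real Linux system, replace the printf with:
#   grep MemFree /sys/devices/system/node/node*/meminfo
printf '%s\n' "$sample" |
    awk '/MemFree/ { print "node " $2 ": " $4 " kB free" }'
```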