From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Steven Hartland
Cc: freebsd-stable@freebsd.org
Date: Fri, 21 Oct 2016 13:20:51 +0300
Subject: Re: zfs, a directory that used to hold lot of files and listing pause
Message-ID: <20161021102051.GH57876@zxy.spb.ru>
In-Reply-To: <19a45e94-65e6-53a9-202d-c048055460d1@multiplay.co.uk>

On Fri, Oct 21, 2016 at 11:02:57AM +0100, Steven Hartland
wrote:
> > Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
> > ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
> > Swap: 4096M Total, 4096M Free
> >
> >   PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
> >   600 root        39    0 27564K  5072K nanslp 1 295.0H  24.56% monit
> >     0 root       -17    0     0K  2608K -      1  75:24   0.00% kernel{zio_write_issue}
> >   767 freeswitch  20    0   139M 31668K uwait  0  48:29   0.00% freeswitch{freeswitch}
> >   683 asterisk    20    0   806M   483M uwait  0  41:09   0.00% asterisk{asterisk}
> >     0 root        -8    0     0K  2608K -      0  37:43   0.00% kernel{metaslab_group_t}
> > [... other lines are just 0% ...]
>
> This looks like you only have ~4GB RAM, which is pretty low for ZFS. I
> suspect vfs.zfs.prefetch_disable will be 1, which will crash the
> performance.

Whether ZFS prefetch helps or hurts performance depends on the workload,
independent of RAM size: some workloads win and some lose (for my workload
prefetch is a loss, and I disabled it manually on a machine with 128GB of
RAM). Anyway, this system has only 24MB in ARC with 2.3GB free, which may
be too low for this workload.
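
For what it's worth, the two numbers being compared here can be pulled
straight out of the quoted top(1) lines with a quick shell snippet. This is
just a sketch: the awk field positions assume the exact top(1) line format
shown above, and the sysctl names in the comments are the standard FreeBSD
ZFS tunables one would check on the box itself:

```shell
#!/bin/sh
# On the affected machine you would normally inspect the live values, e.g.:
#   sysctl vfs.zfs.prefetch_disable      # 1 = prefetch disabled
#   sysctl kstat.zfs.misc.arcstats.size  # current ARC size in bytes
#   sysctl vfs.zfs.arc_max               # configured ARC ceiling
# Here we just parse the figures out of the top(1) lines quoted in the mail.

arc_line="ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other"
mem_line="Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free"

# Field 2 of the ARC line is the total ("73M"); strip the unit suffix.
arc_total=$(echo "$arc_line" | awk '{sub("M", "", $2); print $2}')
# Field 8 of the Mem line is free memory ("2311M"); strip the unit suffix.
free_mem=$(echo "$mem_line" | awk '{sub("M", "", $8); print $8}')

echo "ARC total: ${arc_total}M, free memory: ${free_mem}M"
```

Running it shows the mismatch the reply points at: an ARC of 73M next to
over 2GB of free memory, i.e. the cache has collapsed far below what the
machine could afford.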