From owner-freebsd-current@FreeBSD.ORG Wed Mar 18 14:46:53 2015
From: John Baldwin
To: freebsd-current@freebsd.org
Cc: alc@freebsd.org, Mateusz Guzik, Ryan Stone
Subject: Re: [PATCH] Convert the VFS cache lock to an rmlock
Date: Wed, 18 Mar 2015 10:17:22 -0400
Message-ID: <6376695.VOvhinOncy@ralph.baldwin.cx>
In-Reply-To: <20150313053203.GC9153@dft-labs.eu>
References: <20150313053203.GC9153@dft-labs.eu>
List-Id: Discussions about the use of FreeBSD-current

On Friday, March 13, 2015 06:32:03 AM Mateusz Guzik wrote:
> On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
> > Below are partial results from a profile of a parallel (-j7)
> > "buildworld" on a 6-core machine that I did after the
> > introduction of pmap_advise, so this is not a new profile.  The
> > results are sorted by total waiting time and only the top 20 entries
> > are listed.
>
> Well, I ran stuff on lynx2 in the zoo on fresh -head with debugging
> disabled (MALLOC_PRODUCTION included) and got quite different results.
>
> The machine is an Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz,
> 2 package(s) x 10 core(s) x 2 SMT threads, with 32GB of RAM.
>
> Stuff was built in a chroot with world hosted on zfs.
>
> >    max  wait_max        total  wait_total      count  avg  wait_avg  cnt_hold  cnt_lock  name
> >   1027    208500     16292932  1658585700    5297163    3       313         0   3313855  kern/vfs_cache.c:629 (rw:Name Cache)
> > 208564    186514  19080891106  1129189627  355575930   53         3         0   1323051  kern/vfs_subr.c:2099 (lockmgr:ufs)
> > 169241    148057    193721142   419075449   13819553   14        30         0    110089  kern/vfs_subr.c:2210 (lockmgr:ufs)
> > 187092    191775   1923061952   257319238  328416784    5         0         0   5106537  kern/vfs_cache.c:488 (rw:Name Cache)
>
> make -j 12 buildworld on a freshly booted system (i.e. the most
> namecache insertions):
>
>     32       292      3042019     33400306    8419725    0         3         0   2578026  kern/sys_pipe.c:1438 (sleep mutex:pipe mutex)
> 170608    152572    642385744     27054977  202605015    3         0         0   1306662  kern/vfs_subr.c:2176 (lockmgr:zfs)

You are using ZFS; Alan was using UFS.  It would not surprise me that
the two would perform quite differently, nor that UFS is more efficient
in terms of its interactions with the VM.

-- 
John Baldwin
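[Editor's note: the tables above follow the column order of FreeBSD's LOCK_PROFILING output (max, wait_max, total, wait_total, count, avg, wait_avg, cnt_hold, cnt_lock, name). As a hypothetical illustration (not part of the thread), the sketch below parses rows in that format and sorts them by total waiting time, the ordering Alan used; the function names and the truncated sample rows are made up for the example.]

```python
# Hypothetical helper: parse LOCK_PROFILING-style rows and rank them by
# wait_total, i.e. total time spent waiting to acquire each lock.

def parse_lock_prof(lines):
    """Parse lock_prof-style rows into dicts; skip headers and blank lines."""
    fields = ["max", "wait_max", "total", "wait_total",
              "count", "avg", "wait_avg", "cnt_hold", "cnt_lock"]
    rows = []
    for line in lines:
        # Split off the 9 numeric columns; the remainder is the lock name,
        # which may itself contain spaces, e.g. "(rw:Name Cache)".
        parts = line.split(None, len(fields))
        if len(parts) != len(fields) + 1 or not parts[0].isdigit():
            continue  # header or malformed line
        row = dict(zip(fields, map(int, parts[:-1])))
        row["name"] = parts[-1]
        rows.append(row)
    return rows

def top_by_wait(rows, n=20):
    """Return the n rows with the largest total waiting time."""
    return sorted(rows, key=lambda r: r["wait_total"], reverse=True)[:n]

# Two rows taken from the quoted UFS results above.
sample = [
    "   max  wait_max   total  wait_total  count  avg wait_avg cnt_hold cnt_lock name",
    "  1027    208500 16292932 1658585700 5297163    3     313        0  3313855 kern/vfs_cache.c:629 (rw:Name Cache)",
    "208564    186514 19080891106 1129189627 355575930 53     3        0  1323051 kern/vfs_subr.c:2099 (lockmgr:ufs)",
]

for r in top_by_wait(parse_lock_prof(sample)):
    print(r["wait_total"], r["name"])
```

With these two sample rows, the name-cache rwlock sorts first (wait_total 1658585700) even though the ufs lockmgr lock has a far larger total hold time, which matches why the thread focuses on contention on the name cache lock.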