From: Kris Kennaway <kris@obsecurity.org>
To: Kris Kennaway
Cc: smp@FreeBSD.org, current@FreeBSD.org
Date: Fri, 6 May 2005 11:48:52 -0700
Subject: Re: Benchmarking mpsafevfs with parallel tarball extraction

On Fri, May 06, 2005 at 11:35:29AM -0700, Kris Kennaway wrote:
> I might be bumping into the bandwidth of md here - when I ran less
> rigorous tests with lower concurrency of extractions I seemed to be
> getting marginally better performance (about an effective concurrency
> of 2.2 for both 3 and 10 simultaneous extractions - so at least it
> doesn't seem to degrade badly).
> Or this might be reflecting VFS lock
> contention (which there is certainly a lot of, according to mutex
> profiling traces).

I suspect that I am hitting the md bandwidth:

# dd if=/dev/zero of=/dev/md0 bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 9.501760 secs (55177988 bytes/sec)

which is a lot worse than I expected (even for a 400MHz CPU).  For some
reason I get better performance writing to a filesystem mounted on this
md:

# dd if=/dev/zero of=foo bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 7.943042 secs (66005946 bytes/sec)
# rm foo
# dd if=/dev/zero of=foo bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 7.126929 secs (73564364 bytes/sec)
# rm foo
# dd if=/dev/zero of=foo bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 7.237668 secs (72438804 bytes/sec)

If the write bandwidth is only 50-70MB/sec, then it won't be hard to
saturate, so I won't probe the full scalability of mpsafevfs here.

Kris
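For anyone wanting to repeat this kind of measurement, the "effective
concurrency" number quoted above can be estimated with a script along
these lines.  This is only a sketch, not the script used in these tests:
the tarball here is a small synthetic stand-in rather than a real source
tarball, the paths are hypothetical, and one-second timer resolution
makes the ratio rough at best.

```shell
#!/bin/sh
# Sketch: effective concurrency = N * (single extraction time) /
# (wall time for N parallel extractions).  N means perfect scaling,
# 1 means fully serialized.
set -e
work=$(mktemp -d)

# Build a small synthetic tarball (stand-in for a real source tarball).
mkdir -p "$work/src"
i=0
while [ "$i" -lt 100 ]; do
    dd if=/dev/zero of="$work/src/f$i" bs=4k count=4 2>/dev/null
    i=$((i + 1))
done
tar -czf "$work/src.tar.gz" -C "$work" src

N=3

# Baseline: one extraction by itself.
t0=$(date +%s)
mkdir -p "$work/base"
tar -xzf "$work/src.tar.gz" -C "$work/base"
t1=$(date +%s)
single=$((t1 - t0))

# N extractions running in parallel.
t0=$(date +%s)
i=1
while [ "$i" -le "$N" ]; do
    mkdir -p "$work/par$i"
    tar -xzf "$work/src.tar.gz" -C "$work/par$i" &
    i=$((i + 1))
done
wait
t1=$(date +%s)
parallel=$((t1 - t0))

echo "single=${single}s parallel=${parallel}s"
# Only print the ratio when the parallel run took a measurable time.
awk -v n="$N" -v s="$single" -v p="$parallel" \
    'BEGIN { if (p > 0) printf "effective concurrency ~ %.1f\n", n * s / p }'
```

In practice you would point this at the md-backed filesystem and a real
tarball, and raise N well past the point where the ratio stops growing.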