Date: Wed, 06 Jul 2011 13:11:22 -0500
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
To: Steve Kargl <sgk@troutmask.apl.washington.edu>
Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, FreeBSD Current <freebsd-current@freebsd.org>, "Hartmann, O." <ohartman@zedat.fu-berlin.de>, arrowdodger <6yearold@gmail.com>, freebsd-questions@freebsd.org
Subject: Re: Heavy I/O blocks FreeBSD box for several seconds
Message-ID: <4E14A54A.4050106@freebsd.org>
In-Reply-To: <20110706180001.GA69157@troutmask.apl.washington.edu>
References: <20110706170132.GA68775@troutmask.apl.washington.edu> <5080.1309971941@critter.freebsd.dk> <20110706180001.GA69157@troutmask.apl.washington.edu>
On 07/06/11 13:00, Steve Kargl wrote:
> On Wed, Jul 06, 2011 at 05:05:41PM +0000, Poul-Henning Kamp wrote:
>> In message <20110706170132.GA68775@troutmask.apl.washington.edu>, Steve Kargl writes:
>>
>>> I periodically ran the same type of test in the 2008 post over the
>>> last three years. Nothing has changed. I even set up an account
>>> on one node in my cluster for jeffr to use. He was too busy to
>>> investigate at that time.
>>
>> Isn't this just the lemming-syncer hurling every dirty block over
>> the cliff at the same time ?
>
> I don't know the answer. Of course, having no experience in
> process scheduling, I don't understand the question either ;-)
>
> AFAICT, it is a cpu affinity issue. If I launch n+1 MPI images
> on a system with n cpus/cores, then 2 (and sometimes 3) images
> are stuck on a cpu and those 2 (or 3) images ping-pong on that
> cpu. I recall trying to use renice(8) to force some load
> balancing, but vaguely remember that it did not help.

I've seen exactly this problem with multi-threaded math libraries as
well. Using parallel GotoBLAS on FreeBSD gives terrible performance
because the threads keep migrating between CPUs, causing frequent
cache misses.
-Nathan
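One way to keep a compute-bound job from being migrated between cores is to
pin it to a specific CPU. The sketch below is not taken from the thread and
is not presented as a fix the participants used; it is a minimal example of
how a process might bind itself to one core on FreeBSD using
cpuset_setaffinity(2). The choice of CPU 2 is arbitrary and purely
illustrative.

    /*
     * Minimal sketch: restrict the calling process to a single CPU so the
     * scheduler cannot migrate it between cores.  CPU 2 is an arbitrary
     * example value.
     */
    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
            cpuset_t mask;

            CPU_ZERO(&mask);        /* start with an empty CPU set */
            CPU_SET(2, &mask);      /* allow only CPU 2 */

            /* id of -1 with CPU_WHICH_PID means the calling process */
            if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                sizeof(mask), &mask) == -1)
                    err(1, "cpuset_setaffinity");

            printf("bound to CPU 2; run the compute kernel here\n");
            return (0);
    }

The same binding can be applied from the shell with cpuset(1), for example
"cpuset -l 2 ./mpi_image" (the program name here is hypothetical). Whether
explicit pinning actually cures the ping-pong behavior Steve describes is
not settled in the thread.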