Date:      Wed, 30 Sep 2009 12:55:04 PDT
From:      Dieter <freebsd@sopwith.solgatos.com>
To:        freebsd-performance@freebsd.org
Subject:   Re: A specific example of a disk i/o problem
Message-ID:  <200909301955.TAA20656@sopwith.solgatos.com>
In-Reply-To: Your message of "Wed, 30 Sep 2009 08:30:34 +0200." <3bbf2fe10909292330t753bcad1r69ae67d7e898ee35@mail.gmail.com>

> >> > My question is why is FreeBSD's disk i/o performance so bad?

> > Here is a specific demo of one disk i/o problem I'm seeing.  Should be
> > easy to reproduce?
> >
> > http://lists.freebsd.org/pipermail/freebsd-performance/2008-July/003533.html
> >
> > This was over a year ago, so add 7.1 to the list of versions with the problem.
> > I believe that the
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 1148109, size: 4096
> > messages I'm getting are the same problem.  A user process is hogging
> > the bottleneck (disk buffer cache?) and the swapper/pager is getting starved.
> 
> Sorry, do you have a PR/describing e-mail with this issue? Can you be
> a bit more precise?

I have not submitted a PR for this particular problem (yet).

The hardware seems to work fine.  A single process can access a disk at
full speed, over 100 MB/s for recent 7200 rpm SATA drives.  Same for
the nforce4-ultra (chipset), JMB363 (PCIe card), or SiI 3132 (PCIe card)
controllers.  Same for Hitachi, Seagate, Samsung, or WD drives.
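
For reference, checking raw single-stream throughput is just something
like this (ad0 and the byte counts are only placeholders for whatever
drive is actually under test):

    # Sequential read straight off the device; bs=1m keeps the request
    # size large enough that the drive, not syscall overhead, is the limit.
    dd if=/dev/ad0 of=/dev/null bs=1m count=4096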

CPU bound processes play well together.  The problem is when I run
a disk i/o bound process like cat, dd, etc.  The i/o bound process
sucks up some resource, and other processes get starved for disk i/o,
not just for milliseconds but for seconds, even minutes.  The example
in .../2008-July/003533.html uses a single disk, but the problem
also occurs across disks and across controllers.  Coming up with
a demo using multiple disks that would be easy for someone else
to duplicate is more difficult, which is why the demo uses a single
disk.  It happens with both reading and writing.  I don't think
it has anything to do with the filesystem (FFS with softdeps).
It doesn't matter which process starts first.  Given the behaviour,
the bottleneck must be something that is common to all the disks,
such as the disk cache.
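
Roughly, the single-disk reproduction amounts to the following (the
file names and sizes here are placeholders, not the exact commands
from the 2008 demo):

    # Hog: one big sequential reader that saturates the disk.
    dd if=bigfile of=/dev/null bs=1m &

    # Victim: while the hog runs, time a small read from the same disk.
    # On an idle disk this comes back in well under a second; with the
    # hog running it can take seconds or longer.
    time dd if=smallfile of=/dev/null bs=64k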

The BSD kernel has changed significantly since I took the internals
class, so my understanding of the internals is somewhat obsolete.
But my best guess is that the bottleneck is some kernel disk cache
or disk job queue that the i/o bound job fills up and keeps full,
so other processes rarely get a chance to get their i/o requests in.
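
One way to watch for that while the hog is running (both tools ship
with FreeBSD; the L(q) column in gstat is the number of requests
queued at each GEOM provider):

    # Per-provider queue length and %busy, updated continuously.
    gstat -a
    # Or extended per-device statistics once a second.
    iostat -x 1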

Nice, even idprio, has little if any effect.  On the machines that Unix
grew up on (PDP11, VAX) the CPU was nearly always the scarce resource,
so the scheduler doesn't penalize a process for using lots of i/o.
This is a serious problem on current hardware.  There is no way
to keep one process's i/o from interfering with another's.
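
For the record, this is the sort of thing I mean (idprio 31 is the
lowest idle priority; the dd command is just a stand-in for the hog):

    # Run the hog at idle scheduling priority.  In practice the other
    # process still waits just as long for its disk i/o to complete.
    idprio 31 dd if=bigfile of=/dev/null bs=1m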

> The problem reported in the earlier post, however, is interesting and
> is worth more analysis.

Can anyone reproduce it?

> More specifically, would you be interested in reproducing and playing a
> bit with some diagnostic tool/configurations I can point you at?

I would welcome info on diagnosing/config/tuning/etc.


