Date:      Tue, 29 May 2012 14:24:27 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        Kees Jan Koster <kjkoster@gmail.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: FreeBSD 9.0 hangs on heavy I/O
Message-ID:  <CAOjFWZ7MKE9GJQbUW2OxDSdrNV7Z%2Bp8MApSSb_YWtzWAmYq6_A@mail.gmail.com>
In-Reply-To: <17320979-2ED2-4DF2-97E9-09035F4DD3BB@gmail.com>
References:  <BD5D6BB6-8CFF-456A-B03E-05454EB03AB6@gmail.com> <CAOjFWZ40LX%2B8Lw15mHDG8F3nN0aex5EpqVdjPxRPS89t1Fqkiw@mail.gmail.com> <CAOjFWZ7964oeTNZqADj4cRt3kkdOf5Mwyx8GQDnJnZ8vyONckg@mail.gmail.com> <17320979-2ED2-4DF2-97E9-09035F4DD3BB@gmail.com>

On Tue, May 29, 2012 at 2:12 PM, Kees Jan Koster <kjkoster@gmail.com> wrote:
>>> You may want to play around with gsched, the GEOM Scheduler.
>>>
>>> Matt Dillon did a bunch of tests comparing FreeBSD+UFS to
>>> DragonflyBSD+HAMMER and found that FreeBSD starves read threads in
>>> order to satisfy write threads (or the other way around?).  But,
>>> adding gsched into the mix helped things immensely, allowing mixed
>>> reads/writes to better share disk I/O resources.
>>>
>>> I'll see if I can dig up a link to his testing e-mail messages.
>>
>> Here's the post, part of a thread on benchmarking RAID controllers:
>>
>> http://leaf.dragonflybsd.org/mailarchive/kernel/2011-07/msg00034.html
>
> I looked at "sysctl kern.geom.confdot" (another ridiculously useful feature) to see where the scheduler should be placed.
>
> The way I was thinking, I should place a scheduler in such a way that writes to one physical device (ada3 in my case) do not cause reads on another device to stall (e.g. ada2, where the database lives). However, it looks like the GEOM tree is actually a GEOM bush, with a separate tree for each device.
>
> Am I missing something? Is there a way to schedule across devices? Is the bush a tree after all, maybe?

There are others much better versed in the ways of GEOM than I, and
hopefully they will jump in to simplify/clarify things.  :)

The way I understand things is that GEOM is a per-device stack of GEOM
classes, with the physical device at the bottom, and the VM/block/I/O
(?) system at the top.  Thus, unless you use one of the multi-device
GEOM classes (graid, gmirror, gstripe, gvinum), each stack is
independent of the others.

Meaning gsched only works for a single stack (ie, a single device).
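
If you want to give it a shot anyway, something like the following
should attach a scheduler to each disk separately (just a sketch from
my reading of gsched(8), so double-check the man page; ada2/ada3 are
the devices from your mail):

  # load the GEOM scheduler class and the rr algorithm module
  kldload geom_sched gsched_rr

  # transparently insert a scheduler above each physical provider;
  # each insert only touches that one device's GEOM stack
  gsched insert -a rr ada2
  gsched insert -a rr ada3

  # see what got created
  gsched status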

Granted, I haven't played with gsched yet (most of our high-I/O
systems are ZFS), so there may be a way to use it across GEOMs.
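
By the way, if you want to actually see the shape of that GEOM "bush",
the confdot output is meant to be fed through graphviz
(graphics/graphviz in ports).  Something along these lines should draw
one disconnected tree per device (again, off the top of my head):

  sysctl -n kern.geom.confdot | dot -Tpng -o geom.png
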
-- 
Freddie Cash
fjwcash@gmail.com


