Date:      Wed, 9 May 2012 00:15:32 +0200
From:      Michael Gmelin <freebsd@grem.de>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS resilvering strangles IO
Message-ID:  <44759017-6FAC-4982-B382-CE17DED83262@grem.de>
In-Reply-To: <CAFqOu6hxww5a1CLwYOZZcZNkJVhwH2eUXmtJKNwm6ohNmcqP0Q@mail.gmail.com>
References:  <73F8D020-04F3-44B2-97D4-F08E3B253C32@grem.de> <CAFHbX1K0--P-Sh0QdLszEs0V1ocWoe6Jp_SY9H+VJd1AQw2XKA@mail.gmail.com> <180B72CE-B285-4702-B16D-0714AA07022C@grem.de> <alpine.GSO.2.01.1205081625470.9406@freddy.simplesystems.org> <CAOjFWZ7ik_sUmUaw4im729dc-2Toq2j_z_oxiqUpzc4x_TOujQ@mail.gmail.com> <CAFqOu6hxww5a1CLwYOZZcZNkJVhwH2eUXmtJKNwm6ohNmcqP0Q@mail.gmail.com>

On May 9, 2012, at 00:06, Artem Belevich wrote:

> On Tue, May 8, 2012 at 2:33 PM, Freddie Cash <fjwcash@gmail.com> wrote:
>> On Tue, May 8, 2012 at 2:31 PM, Bob Friesenhahn
>> <bfriesen@simple.dallas.tx.us> wrote:
>>> On Tue, 8 May 2012, Michael Gmelin wrote:
>>>>
>>>> Do you think it would make sense to try to play with zfs_resilver_delay
>>>> directly in the ZFS kernel module?
>>>
>>> This may be the wrong approach if the issue is really that there are too
>>> many I/Os queued for the device.  Finding a tunable which reduces the
>>> maximum number of I/Os queued for a disk device may help reduce write
>>> latencies by limiting the backlog.
>>>
>>> On my Solaris 10 system, I accomplished this via a tunable in /etc/system:
>>> set zfs:zfs_vdev_max_pending = 5
>>>
>>> What is the equivalent for FreeBSD?
>>
>> Setting vfs.zfs.vdev_max_pending="4" in /boot/loader.conf (or whatever
>> value you want).  The default is 10.
>

Do you think this will actually make a difference? As far as I
understand, my primary problem is not latency but throughput. A simple
example is dd if=/dev/zero of=filename bs=1m, which gave me 500kb/s.
Latency might be an additional problem (or am I misled, and a shorter
queue would improve the process's chances of getting data through?).
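
If I go that route, I'd want to verify the effect with the same dd test
while a resilver is running before making anything permanent. A rough
sketch of what I have in mind (the file name is just a placeholder, and
I'd double-check the exact OID spelling first since I'm only going by
the name quoted above):

    # confirm the exact tunable name on this system
    sysctl -a | grep -i max_pending

    # measure sequential write throughput into the pool during resilver
    dd if=/dev/zero of=/pool/testfile bs=1m count=1000

    # if it helps, make it permanent for the next boot
    echo 'vfs.zfs.vdev_max_pending="4"' >> /boot/loader.conf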

> You may also want to look at vfs.zfs.scrub_limit sysctl. According to
> description it's "Maximum scrub/resilver I/O queue" which sounds like
> something that may help in this case.
>
> --Artem

Very good point, thank you. I also found this entry in the FreeBSD
forums indicating that this might ease the pain (even though the poster
is talking about scrub rather than resilver, the source comment suggests
the tunable covers both):

http://forums.freebsd.org/showthread.php?t=31628

/* maximum scrub/resilver I/O queue per leaf vdev */
int zfs_scrub_limit = 10;

TUNABLE_INT("vfs.zfs.scrub_limit", &zfs_scrub_limit);
SYSCTL_INT(_vfs_zfs, OID_AUTO, scrub_limit, CTLFLAG_RDTUN,
    &zfs_scrub_limit, 0, "Maximum scrub/resilver I/O queue");

I will try lowering zfs_scrub_limit to 6 in loader.conf
and replace the drive once more later this month.
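
Something along these lines in loader.conf, i.e. the tunable named in
the code above, set a bit below its default of 10 (untested on my side
so far, so consider it a sketch):

    # /boot/loader.conf
    # cap the scrub/resilver I/O queue per leaf vdev (default is 10)
    vfs.zfs.scrub_limit="6"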

--
Michael



