Date:      Tue, 16 Jul 2013 14:09:23 +0000
From:      Ivailo Tanusheff <Ivailo.Tanusheff@skrill.com>
To:        Daniel Kalchev <daniel@digsys.bg>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   RE: ZFS vdev I/O questions
Message-ID:  <9d3cf0be165d4351acc5e757de3868ec@DB3PR07MB059.eurprd07.prod.outlook.com>
In-Reply-To: <51E54799.8070700@digsys.bg>
References:  <51E5316B.9070201@digsys.bg> <20130716115305.GA40918@mwi1.coffeenet.org> <51E54799.8070700@digsys.bg>

Hi danbo :)

Isn't this some kind of pool fragmentation? That is usually the cause of
such slow spots in a disk system. I think your pool is getting full and is
heavily fragmented, which is why each request has to pull more data from a
different vdev.
But that has nothing to do with a single slow device :(
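If you want to check, something along these lines should show how full the
pool is and how the writes are being spread (I am assuming the pool is
called 'tank' here, adjust to your pool name):

  zpool list tank          # CAP column shows how full the pool is
  zpool iostat -v tank 1   # per-vdev allocation and I/O, one-second samples
  zdb -mm tank             # metaslab free-space dump (slow on a big pool)

Once a pool gets past roughly 80-90% full, allocation usually becomes
slower and more scattered, so those numbers are worth a look.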

Best regards,
Ivailo Tanusheff

-----Original Message-----
From: owner-freebsd-fs@freebsd.org [mailto:owner-freebsd-fs@freebsd.org] On Behalf Of Daniel Kalchev
Sent: Tuesday, July 16, 2013 4:16 PM
To: freebsd-fs@freebsd.org
Subject: Re: ZFS vdev I/O questions


On 16.07.13 14:53, Mark Felder wrote:
> On Tue, Jul 16, 2013 at 02:41:31PM +0300, Daniel Kalchev wrote:
>> I am observing some "strange" behaviour with I/O spread on ZFS vdevs
>> and thought I might ask if someone has observed it too.
>>
> --SNIP--
>
>> Drives da0-da5 were Hitachi Deskstar 7K3000 (Hitachi HDS723030ALA640,
>> firmware MKAOA3B0) -- these are 512 byte sector drives, but da0 has
>> been replaced by Seagate Barracuda 7200.14 (AF) (ST3000DM001-1CH166,
>> firmware CC24) -- this is a 4k sector drive of a new generation (notice
>> the relatively 'old' firmware, which can't be upgraded).
> --SNIP--
>

As you can see, the initial burst goes to all vdevs, saturating the drives
at 100%. Then vdev 3 completes, then the Hitachi drives of vdev 1 complete
while the Seagate drive writes some more, and then for a few more seconds
only the vdev 2 drives are writing. The amount of data seems to be the same;
vdev 2 just writes it more slowly. However, the drives in vdev 2 and vdev 3
are the same model. They should have the same performance characteristics
(and as long as the drives are not 100% saturated, all vdevs complete more
or less at the same time). At other times, some other vdev completes last --
it is never the same vdev that is 'slow'.
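For reference, per-vdev numbers like these can be captured with something
like the following (the pool name 'tank' is a placeholder; the da0-da5
device names are from my setup):

  zpool iostat -v tank 1   # per-vdev bandwidth and ops, one-second samples
  gstat -f 'da[0-5]'       # per-disk busy % and latency for da0-da5

The zpool iostat -v output should show whether vdev 2 actually receives
less data per transaction group, or the same amount that it just drains
more slowly.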

Could this be a DDT/metadata-specific issue? Is the DDT/metadata
vdev-specific? The pool initially had only two vdevs, and after vdev 3 was
added most of the data was written with dedup disabled. Also, the ZIL was
added later, and the initial metadata could be fragmented. But why should
that affect writing? The zpool is indeed pretty full, but then performance
should degrade for all vdevs (which are more or less equally full).
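A rough way to see how big the DDT is (pool name again a placeholder):

  zdb -DD tank             # DDT histogram: entry counts, on-disk/in-core sizes
  zpool status -D tank     # shorter dedup table summary

As far as I understand it, the DDT and other metadata are pool-wide objects
whose blocks are spread across all vdevs like any other allocation, so by
themselves they should not pin the load to a single vdev.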

Daniel
_______________________________________________
freebsd-fs@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"