Date:      Wed, 09 Mar 2011 11:56:49 +0100
From:      Matthias Andree <matthias.andree@gmx.de>
To:        freebsd-stable@freebsd.org
Subject:   Re: ZFS performance as the FS fills up?
Message-ID:  <4D775CF1.1010501@gmx.de>
In-Reply-To: <20110308114810.GA37554@icarus.home.lan>
References:  <BD2C8439-AC1E-49CB-ABE2-9F7F0B292933@punkt.de> <20110308114810.GA37554@icarus.home.lan>

On 08.03.2011 12:48, Jeremy Chadwick wrote:
> On Tue, Mar 08, 2011 at 12:26:49PM +0100, Patrick M. Hausen wrote:
>> we use a big JBOD and ZFS with raidz2 as the target
>> for our nightly Amanda backups.
>>
>> I already suspected that the fact that the FS was > 90% full might
>> be the cause of our backup performance continuously decreasing.
>>
>> I just added another vdev - 6 disks of 750 GB each, raidz2 and the
>> FS usage is back to 71% currently. This was while backups were
>> running and write performance instantly skyrocketed compared to
>> the values before.
>>
>> So, is it possible to name a reasonable amount of free space to
>> keep on a raidz2 volume? On last year's EuroBSDCon I got
>> the impression that with recent (RELENG_8) ZFS merges
>> I could get away with using around 90%.
> 
> I'm in no way attempting to dissuade you from your efforts to figure out
> a good number for utilisation, but when I hear of disks -- no matter how
> many -- being 90% full, I immediately conclude performance is going to
> suck simply because the outer "tracks" on a disk contain more sectors
> than the inner "tracks".  This is the reason for performance degradation
> as the seek offset increases, resulting in graphs like this:

Whatever.  I've experienced a similarly massive performance decrease
even on a non-redundant single-disk ZFS setup (8-STABLE, after 8.0 and
before 8.2) once the pool had filled up.

Even clearing the disk back down to 70% full didn't make my /usr (which
was a ZFS mount) snappy again.  The speed decrease was one to two orders
of magnitude beyond what you could attribute to the constant linear
velocity (CLV) or sectors-per-track change across the disk.
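
For perspective, the outside-to-inside zoning difference is easy to
measure directly.  A quick check on FreeBSD (the device name is just an
example, adjust it for your system):

    # Report sequential transfer rates at the outside, middle and
    # inside zones of the platter:
    diskinfo -t /dev/ada0

On a typical 7200 rpm drive that spread is roughly a factor of two,
nowhere near the one to two orders of magnitude I was seeing.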

From the sound of it, my 7200 rpm WD RE3 drive (which seeks rather fast
for a 7200 rpm drive - I believe it was the fastest-seeking 7200 rpm
drive when I bought it) was seeking and thrashing its heads like mad
even on single-threaded bulk reads of large files.  I suppose
fragmentation and/or non-caching of metadata was at work, and the
slowdown was far worse than any decrease in constant linear velocity or
sectors per track across the disk could explain.  The relevant ZFS
ARC-related options didn't rectify it either, so I reverted to
GJOURNAL-enabled UFS, which gave me much better performance on a
5400 rpm disk than I ever had with a half-filled ZFS on the 7200 rpm
RAID-class disk.  The sequential bulk transfer rates of both drives are
not in question.
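
By "ARC-related options" I mean the usual loader.conf knobs, something
along these lines (values purely illustrative, not a recommendation):

    # /boot/loader.conf - illustrative values only
    vfs.zfs.arc_max="1024M"         # cap the size of the ARC
    vfs.zfs.arc_meta_limit="256M"   # portion of the ARC allowed for metadata
    vfs.zfs.prefetch_disable="1"    # turn off file-level prefetch

None of that made a noticeable difference in my case.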

In other words, the file system never recovered its speed (I'm not sure
whether that's a property of the zfs layer or the zpool layer), and I
attribute that - and the failure to rm files from a 100% full file
system - to the copy-on-write / write-ahead-logging behaviour of ZFS.
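
For what it's worth, the workaround I've seen suggested for the
rm-on-a-full-pool problem is to free a handful of blocks first by
truncating a file in place, since the unlink itself needs to allocate
new copy-on-write metadata (paths below are purely hypothetical, and it
can still fail if the pool really has zero free blocks):

    : > /tank/amanda/some-old-dumpfile   # truncate in place, needs almost no new space
    rm /tank/amanda/some-old-dumpfile    # the unlink can now allocate its metadata
    zpool list tank                      # keep an eye on the CAP column afterwards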

Any comments?


-- 
Matthias Andree


