Date:      Thu, 23 Oct 2003 11:08:02 -0700
From:      Ken Marx <kmarx@vicor.com>
To:        Julian Elischer <julian@vicor.com>
Cc:        mckusick@beastie.mckusick.com
Subject:   Re: 4.8 ffs_dirpref problem
Message-ID:  <3F981902.90607@vicor.com>
In-Reply-To: <20031023171932.9F36C7A425@mail.vicor-nb.com>
References:  <20031023171932.9F36C7A425@mail.vicor-nb.com>

Thanks for the reply,

We actually *did* try -s 4096 yesterday (not quite what you suggested),
with mixed results: sometimes it seemed to go more quickly, but often
not.

Let me clarify our test: We have a 1.5GB tar file from our production
raid that fairly represents the distribution of our data. We hit the
performance problem when we get to directories with lots of small-ish
files. But, as Julian mentioned, we typically have many flavors of
file sizes and populations.

Admittedly, our untar'ing test isn't necessarily representative
of what happens in production - we were just trying to fill the
disk and recreate the problem here. We *did* at least hit a noticeable
problem, and we believe it's the same behavior that's hitting production.

I just tried your exact suggested settings on an fs that was
already 96% full, and still experienced the very sluggish
behavior on exactly the same type of files/dirs.

Our untar typically takes around 60-100 sec of system time
when things are going ok, and 300-1000+ sec when the sluggishness
occurs. This time tends to increase as we get closer to 99% full,
sometimes reaching 4000+ sec.

It wasn't clear from your mail whether I should newfs the entire
fs and start over, or whether I could have expected the settings
to make a difference for any NEW data.

I can newfs from scratch if you think it's required. The test will
then take several hours to run, since we need at least 85% disk usage
before the problem starts to appear.

Thanks!
k

Julian Elischer wrote:
>>From mckusick@beastie.mckusick.com  Wed Oct 22 22:30:03 2003
>>X-Original-To: julian@vicor-nb.com
>>Delivered-To: julian@vicor-nb.com
>>To: Ken Marx <kmarx@vicor.com>
>>Subject: Re: 4.8 ffs_dirpref problem 
>>Cc: freebsd-fs@freebsd.org, cburrell@vicor.com, davep@vicor.com,
>>	jpl@vicor.com, jrh@vicor.com, julian@vicor-nb.com, VicPE@aol.com,
>>	julian@vicor.com, Grigoriy Orlov <gluk@ptci.ru>
>>In-Reply-To: Your message of "Wed, 22 Oct 2003 12:57:53 PDT."
>>             <20031022195753.27C707A49F@mail.vicor-nb.com> 
>>Date: Wed, 22 Oct 2003 16:37:54 -0700
>>From: Kirk McKusick <mckusick@beastie.mckusick.com>
> 
> 
>>I believe that you can solve your problem by tuning the existing
>>algorithm using tunefs. There are two parameters to control dirpref,
>>avgfilesize (which defaults to 16384) and filesperdir (which defaults
>>to 50). I suggest that you try using an avgfilesize of 4096 and
>>filesperdir of 1500. This is done by running tunefs on the unmounted
>>(or at least mounted read-only) filesystem as:
> 
> 
>>	tunefs -f 4096 -s 1500 /dev/<disk for my broken filesystem>
> 
> 
> On the same filesystem there are directories that contain 1GB files,
> and others that contain maybe a hundred 100K files (images).
> 
> 
> 
>>Note that this affects future layout, so needs to be done before you
>>put any data into the filesystem. If you are building the filesystem
>>from scratch, you can use:
> 
> 
> would this have an effect on an existing filesystem with respect to new data
> being added to it?
> 
> 
> 
> 
> 
>>	newfs -g 4096 -h 1500 ...
>>
>>to set these fields. Please let me know if this solves your problem.
>>If it does not, I will ask Grigoriy Orlov <gluk@ptci.ru> if he has
>>any ideas on how to proceed.
> 
> 
>>	Kirk McKusick
> 
> 
>>=-=-=-=-=-=-=
> 
> 
> 
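
For the archive, the full sequence I understand you to be suggesting for an existing filesystem (the device and mount point below are placeholders, following your `<disk>` convention):

```
umount /raid
tunefs -f 4096 -s 1500 /dev/<disk>    # avgfilesize 4096, filesperdir 1500
mount /raid
```

with the caveat, per your note, that this only affects layout of data written after the change.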

-- 
Ken Marx, kmarx@vicor-nb.com
It's too costly to get lean and mean and analyze progress on the diminishing 
expectations.
		- http://www.bigshed.com/cgi-bin/speak.cgi


