Date:      Wed, 30 Jan 2013 09:15:04 -0600
From:      Kevin Day <toasty@dragondata.com>
To:        "Ronald Klop" <ronald-freebsd8@klop.yi.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Improving ZFS performance for large directories
Message-ID:  <AB6FA392-D80D-4280-8B16-FB931D2AD35C@dragondata.com>
In-Reply-To: <op.wrpyzok18527sy@ronaldradial.versatec.local>
References:  <19DB8F4A-6788-44F6-9A2C-E01DEA01BED9@dragondata.com> <op.wrpyzok18527sy@ronaldradial.versatec.local>


On Jan 30, 2013, at 4:20 AM, "Ronald Klop" <ronald-freebsd8@klop.yi.org> wrote:

> On Wed, 30 Jan 2013 00:20:15 +0100, Kevin Day <toasty@dragondata.com> wrote:
>
>> I'm trying to improve performance when using ZFS in large (>60000 files) directories. A common activity is to use "getdirentries" to enumerate all the files in the directory, then "lstat" on each one to get information about it. Doing an "ls -l" in a large directory like this can take 10-30 seconds to complete. Trying to figure out why, I did:
>>
>> ktrace ls -l /path/to/large/directory
>> kdump -R | sort -rn | more
>
> Does ls -lf /path/to/large/directory make a difference? It makes ls skip sorting the directory, so it can use a more efficient way of traversing it.
>
> Ronald.

Nope, the sort seems to add a trivial amount of extra time to the entire operation. Nearly all the time is spent in lstat() or getdirentries(). Good idea though!
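(For anyone reproducing this: the access pattern under discussion — one directory enumeration followed by a per-entry lstat() — can be sketched portably. This is a minimal Python illustration of what "ls -l" effectively does, not the FreeBSD ls(1) or ZFS internals; the timing split is the same thing kdump -R exposes. The directory path is whatever you pass in.)

```python
import os
import time

def list_long(path):
    """Roughly what 'ls -l' does: enumerate entries, then lstat() each one.

    Returns the entry names, their stat results, and the time spent in
    each of the two phases so you can see where the wall-clock goes.
    """
    t0 = time.monotonic()
    names = os.listdir(path)  # getdirentries() under the hood
    t1 = time.monotonic()
    # One lstat() syscall per entry -- this is the part that dominates
    # on large directories.
    stats = [os.lstat(os.path.join(path, n)) for n in names]
    t2 = time.monotonic()
    return names, stats, t1 - t0, t2 - t1

if __name__ == "__main__":
    names, stats, enum_s, lstat_s = list_long(".")
    print(f"{len(names)} entries: enumerate {enum_s:.3f}s, lstat {lstat_s:.3f}s")
```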

-- Kevin
