Date: Tue, 29 Jan 2013 15:42:28 -0800
From: Matthew Ahrens <mahrens@delphix.com>
To: Kevin Day <toasty@dragondata.com>
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: Re: Improving ZFS performance for large directories
Message-ID: <CAJjvXiE%2B8OMu_yvdRAsWugH7W=fhFW7bicOLLyjEn8YrgvCwiw@mail.gmail.com>
In-Reply-To: <19DB8F4A-6788-44F6-9A2C-E01DEA01BED9@dragondata.com>
References: <19DB8F4A-6788-44F6-9A2C-E01DEA01BED9@dragondata.com>
On Tue, Jan 29, 2013 at 3:20 PM, Kevin Day <toasty@dragondata.com> wrote:
> I'm prepared to try an L2arc cache device (with secondarycache=metadata),
You might first see how long it takes when everything is cached, e.g. by
repeating the same operation in the same directory several times. This will
give you a lower bound on the time it will take (or, put another way, an
upper bound on the improvement available from a cache device).
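For instance, a minimal sketch (assuming the slow operation is listing that
directory; the path is a placeholder):

    cd /path/to/big/directory
    time ls -l > /dev/null    # cold run pulls the metadata from disk
    time ls -l > /dev/null    # repeat; once the time stops dropping, the
                              # warm-cache time is your lower bound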
> but I'm having trouble determining how big of a device I'd need. We've got
> >30M inodes now on this filesystem, including some files with extremely
> long names. Is there some way to determine the amount of metadata on a ZFS
> filesystem?
For a specific filesystem, nothing comes to mind, but I'm sure you could
cobble something together with zdb. There are a couple of ways to determine
the amount of metadata in a ZFS storage pool (a rough sketch of the first
one follows the list):
 - "zdb -bbb <pool>"
   but this is unreliable on pools that are in use
 - "zpool scrub <pool>; <wait for scrub to complete>; echo '::walk
   spa|::zfs_blkstats' | mdb -k"
   the scrub is slow, but this can be mitigated by setting the global
   variable zfs_no_scrub_io to 1. If you don't have mdb or equivalent
   debugging tools on FreeBSD, you can manually look at
   <spa_t>->spa_dsl_pool->dp_blkstats.
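A rough sketch of the first approach (the pool name "tank" is a placeholder,
and the exact output layout varies by version):

    # walk the block tree and print per-object-type statistics; the
    # numbers may be off if the pool is being written to at the time
    zdb -bbb tank | less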
In either case, the "LSIZE" is the size that's required for caching (in
memory or on an L2ARC cache device). At a minimum you will need 512 bytes
for each file, to cache the dnode_phys_t.
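As a back-of-envelope check against the >30M inodes mentioned above, the
dnodes alone come to roughly:

    30,000,000 files x 512 bytes/dnode = ~15.4 GB (about 14.3 GiB)

and that is before directory and indirect blocks, so a cache device would
need at least that much room to hold all of the metadata.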
--matt
