Date: Tue, 10 May 2005 15:11:40 +0200
From: Michael Schuh <michael.schuh@gmail.com>
To: Charles Swiger <cswiger@mac.com>
Cc: freebsd-stable@freebsd.org
Subject: SOLVED Disk-Performance issue?
Message-ID: <1dbad315050510061122442717@mail.gmail.com>
In-Reply-To: <393c3aa463b5360a3d9fbdca81f1cdce@mac.com>
References: <1dbad315050510034688a7fb@mail.gmail.com> <393c3aa463b5360a3d9fbdca81f1cdce@mac.com>
Hello,

thanks to everyone who offered suggestions on my request. The tip from
Charles was only the beginning. The last step was setting
vfs.ufs.dirhash_maxmem to a higher value via sysctl, in my case 20 MB.
Copying all 523,000 files used over 7 MB of dirhash memory. Now, after
raising vfs.ufs.dirhash_maxmem, I get 4-5 MByte/s.

I thank all the people who gave me the Power To Serve :-)))

regards

Michael

2005/5/10, Charles Swiger <cswiger@mac.com>:
> On May 10, 2005, at 6:46 AM, Michael Schuh wrote:
> > I now have 2 directories with ~500,000-600,000 files, each about
> > 5 kByte in size. When copying the files from one disk to another, or
> > to a directory on the same disk (the behavior is the same), I see
> > the following:
> > [ ... ]
> > Can anyone explain where this behavior comes from?
> > Does it come from the filesystem, or from my disks, meaning they are
> > running too hot? (I think not)
>
> Directories are kept as lists. Adding files to the end of a list takes
> longer as the list gets bigger. There is a kernel option called
> UFS_DIRHASH which can be enabled to help this kind of situation
> significantly, but even with it, you aren't going to get great
> performance when you put half a million files into a single directory.
>
> Try breaking this content up into one or two levels of subdirectories.
> See the way the Squid cache works...
>
> --
> -Chuck
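[Editor's note: a minimal sketch of the tuning described above. The sysctl
names vfs.ufs.dirhash_mem and vfs.ufs.dirhash_maxmem are the standard UFS
dirhash knobs; the 20 MB figure is the value Michael reports using.]

```shell
# Inspect current dirhash usage and ceiling (run on a FreeBSD box):
#   sysctl vfs.ufs.dirhash_mem vfs.ufs.dirhash_maxmem
#
# Compute a 20 MB ceiling in bytes and print the command that would
# apply it (printed rather than executed, so this runs anywhere):
maxmem=$((20 * 1024 * 1024))
echo "sysctl vfs.ufs.dirhash_maxmem=${maxmem}"
#
# To make the setting survive a reboot, add the resulting line
# (vfs.ufs.dirhash_maxmem=20971520) to /etc/sysctl.conf.
```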
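[Editor's note: Chuck's subdirectory suggestion can be sketched as below.
This is a hypothetical example, not from the thread: the file name and the
md5-based bucketing are assumptions, loosely following Squid's two-level
(L1/L2) cache_dir layout. With two hex digits per level, half a million
files spread over ~65,000 buckets instead of one flat directory.]

```shell
# Hypothetical sketch: derive a two-level subdirectory path from a file
# name. Uses md5sum (GNU coreutils); on FreeBSD, `md5 -q` is the analogue.
name="somefile.dat"                            # example file name (assumed)
h=$(printf '%s' "$name" | md5sum | cut -c1-4)  # first 4 hex digits of hash
d1=$(printf '%s' "$h" | cut -c1-2)             # first-level bucket
d2=$(printf '%s' "$h" | cut -c3-4)             # second-level bucket
echo "${d1}/${d2}/${name}"                     # e.g. ab/cd/somefile.dat
```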
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?1dbad315050510061122442717>