From owner-freebsd-hackers  Fri Apr  6 16:29:42 2001
Delivered-To: freebsd-hackers@freebsd.org
Received: from Awfulhak.org (awfulhak.demon.co.uk [194.222.196.252])
	by hub.freebsd.org (Postfix) with ESMTP id 1A8D337B423;
	Fri, 6 Apr 2001 16:29:26 -0700 (PDT)
	(envelope-from brian@Awfulhak.org)
Received: from hak.lan.Awfulhak.org (root@hak.lan.Awfulhak.org [172.16.0.12])
	by Awfulhak.org (8.11.3/8.11.3) with ESMTP id f36NTXU19077;
	Sat, 7 Apr 2001 00:29:33 +0100 (BST)
	(envelope-from brian@lan.Awfulhak.org)
Received: from hak.lan.Awfulhak.org (brian@localhost [127.0.0.1])
	by hak.lan.Awfulhak.org (8.11.3/8.11.3) with ESMTP id f36NTRl01485;
	Sat, 7 Apr 2001 00:29:27 +0100 (BST)
	(envelope-from brian@hak.lan.Awfulhak.org)
Message-Id: <200104062329.f36NTRl01485@hak.lan.Awfulhak.org>
X-Mailer: exmh version 2.3.1 01/18/2001 with nmh-1.0.4
To: Mike Smith
Cc: Attila Nagy, freebsd-hackers@FreeBSD.ORG, brian@Awfulhak.org
Subject: Re: ffs dirpref speedup
In-Reply-To: Message from Mike Smith of "Fri, 06 Apr 2001 15:53:12 PDT."
	<200104062253.f36MrCF03454@mass.dis.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Date: Sat, 07 Apr 2001 00:29:27 +0100
From: Brian Somers
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

There's a (maybe overly simple) avoidance algorithm for this.  It
attempts to ensure that it doesn't end up filling cgs with directories
and no files.  There are also two tunables,

    int32_t fs_avgfilesize;   /* expected average file size */
    int32_t fs_avgfpdir;      /* expected # of files per directory */

aimed at tuning for workloads such as squid caches (and, I would
guess, the ports tree).

> We discussed this a while back; it has some interesting (and in some
> cases undesirable) side-effects.  FFS tries to balance directories
> across CGs in order to balance the use of CGs for file allocation.
> The approach being advocated here will tend to use CGs one at a
> time, resulting in poor distribution of files and corresponding
> fragmentation problems.
> 
> > I wonder whether FreeBSD has this improvement or not.
> > 
> > With softupdates or async mounted filesystems it seems that the
> > speedup is very big...
> > 
> > Thanks,
> > ------------------------------------------------------------------
> > Attila Nagy                            e-mail: Attila.Nagy@fsn.hu
> > Budapest Polytechnic (BMF.HU)          @work: +361 210 1415 (194)
> > H-1084 Budapest, Tavaszmezo u. 15-17.  cell.: +3630 306 6758
> > 
> > ---------- Forwarded message ----------
> > Date: Sat, 7 Apr 2001 02:02:21 +0400
> > From: Grigoriy Orlov
> > To: source-changes@cvs.openbsd.org
> > Subject: Re: CVS: cvs.openbsd.org: src
> > 
> > On Fri, Apr 06, 2001 at 10:27:55PM +0100, Brian Somers wrote:
> > > > CVSROOT:     /cvs
> > > > Module name: src
> > > > Changes by:  gluk@cvs.openbsd.org    2001/04/06 14:43:31
> > > > 
> > > > Modified files:
> > > >     sys/ufs/ffs   : fs.h ffs_alloc.c ffs_vfsops.c
> > > >     sbin/fsck_ffs : setup.c
> > > >     sbin/tunefs   : tunefs.c
> > > > 
> > > > Log message:
> > > > Replace the FFS directory preference algorithm (dirpref) with a
> > > > new one.  It allocates a directory inode in the same cylinder
> > > > group as its parent directory.  This speeds up file/directory
> > > > intensive operations on big file systems severalfold.
> > > > 
> > > > Don't forget to recompile fsck_ffs with the updated fs.h or you
> > > > will get "VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST
> > > > ALTERNATE" at the next boot.  In any case you can safely ignore
> > > > this error.
> > > > 
> > > > Requested by deraadt@
> > > 
> > > Do you have any numbers or statistics?  Just curious as to how
> > > big/small the gain is....
> > 
> > These results are old and I have improved the algorithm since these
> > tests were done.  Nevertheless they show how big the performance
> > speedup can be.
> > I have done two file/directory intensive tests on two OpenBSD
> > systems with the old and the new dirpref algorithm.  The first
> > test is "tar -xzf ports.tar.gz", the second is "rm -rf ports".
> > Here ports.tar.gz is the ports collection from the OpenBSD 2.8
> > release; it contains 6596 dirs and 13868 files.  The test systems
> > are:
> > 
> > 1. Celeron-450, 128Mb, two IDE drives; the system at wd0, the test
> >    file system at wd1.  Size of the test file system is 8 Gb,
> >    number of cg = 991, size of cg is 8m, block size = 8k, fragment
> >    size = 1k.  OpenBSD-current from Dec 2000 with
> >    BUFCACHEPERCENT=35.
> > 
> > 2. PIII-600, 128Mb, two IBM DTLA-307045 IDE drives at i815e; the
> >    system at wd0, the test file system at wd1.  Size of the test
> >    file system is 40 Gb, number of cg = 5324, size of cg is 8m,
> >    block size = 8k, fragment size = 1k.  OpenBSD-current from
> >    Dec 2000 with BUFCACHEPERCENT=50.
> > 
> > Test results
> > 
> >              tar -xzf ports.tar.gz          rm -rf ports
> > mode     old dirpref new dirpref speedup old dirpref new dirpref speedup
> > 
> > First system
> > normal       667        472       1.41      477        331        1.44
> > async        285        144       1.98      130         14        9.29
> > sync         768        616       1.25      477        334        1.43
> > softdep      413        252       1.64      241         38        6.34
> > 
> > Second system
> > normal       329         81       4.06      263.5       93.5      2.81
> > async        302         25.7    11.75      112          2.26    49.56
> > sync         281         57.0     4.93      263         90.5      2.9
> > softdep      341         40.6     8.4       284          4.76    59.66
> > 
> > The "old dirpref" and "new dirpref" columns give the test time in
> > seconds; "speedup" is the speed increase as a ratio, i.e.
> > old dirpref / new dirpref.
> > -----
> > 
> > If you want a more detailed description of the algorithm, please
> > send mail to me directly.
> > 
> > Grigoriy.
> > 
> > 
> > To Unsubscribe: send mail to majordomo@FreeBSD.org
> > with "unsubscribe freebsd-hackers" in the body of the message
> 
> -- 
> ... every activity meets with opposition, everyone who acts has his
> rivals and unfortunately opponents also.
> But not because people want to be opponents, rather because the
> tasks and relationships force people to take different points of
> view.  [Dr. Fritz Todt]
>           V I C T O R Y   N O T   V E N G E A N C E

-- 
Brian

Don't _EVER_ lose your sense of humour !

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message