Date: Wed, 6 Oct 1999 18:41:03 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Andrzej Bialecki <abial@webgiro.com>
Cc: freebsd-hackers@FreeBSD.ORG
Subject: Re: Non-standard FFS parameters
Message-ID: <199910070141.SAA90163@apollo.backplane.com>
References: <Pine.BSF.4.05.9910070025010.4745-100000@freja.webgiro.com>
:> :* what maximum value can I use for -i (bytes per inode) parameter? I
:> :already tried 16 million ...
:>
:> I wouldn't go that high. Try 262144. Here's an example:
:
:Why? I only need a couple of hundred inodes on this fs..
Because you don't gain anything by going higher. Once you get past a
certain point, fsck's run time is short enough that you no longer care.
Even if you cannot contemplate using more than a few hundred files,
restricting the number of inodes to just that only narrows your options
later on. The only time you might really care is if you are generating,
say, a boot floppy.
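A back-of-the-envelope sketch of how -i scales the total inode count,
using the 40 GB filesystem from the newfs example. These are approximate
figures; newfs rounds the inode count per cylinder group, so the real
numbers differ slightly:

```python
# Rough sketch: the number of inodes newfs allocates is roughly the
# filesystem size divided by the -i (bytes-per-inode) value. fsck has
# to scan every inode, so fewer inodes means a shorter fsck run.
FS_BYTES = 40960 * 1024 * 1024   # the 40960.0MB filesystem in the example

for bpi in (8192, 262144, 16_000_000):
    inodes = FS_BYTES // bpi
    print(f"-i {bpi:>10}: ~{inodes:,} inodes for fsck to scan")
```

At -i 8192 that is over five million inodes to scan; at -i 262144 it is
down to about 160,000, past which further increases buy very little.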
:> test3:/root# newfs -i 262144 -f 8192 -b 65536 /dev/rvn1c
:> /dev/rvn1c: 83886080 sectors in 2560 cylinders of 1 tracks, 32768 sectors
:> 40960.0MB in 160 cyl groups (16 c/g, 256.00MB/g, 1024 i/g)
:
:Well, yes, but you used a non-standard blocksize which you yourself don't
:recommend. With standard 8192/1024 this command creates millions of
:inodes which I don't need - what's worse, they cause fsck to run for
:hours instead of seconds.
The -i parameter controls the number of inodes, not the block size.
The block size is irrelevant.
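The "1024 i/g" figure in the newfs output above falls straight out of the
-i value. A minimal sketch (the 256 MB cylinder-group size is taken from
the newfs output; newfs applies its own internal rounding, so treat this
as illustrative):

```python
# Where "1024 i/g" comes from: each cylinder group is 256 MB, and
# -i 262144 asks for roughly one inode per 256 KB of space.
cg_bytes = 256 * 1024 * 1024     # "256.00MB/g" from the newfs output
bytes_per_inode = 262144         # the -i value passed to newfs
inodes_per_group = cg_bytes // bytes_per_inode
print(inodes_per_group)          # 1024, matching the reported "1024 i/g"
```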
:> The higher the bytes per inode the fewer the inodes and the faster
:> fsck will run if you have to recover the filesystem. Too high a
:> bytes-per-inode will screw up the filesystem's ability to manage
:> the cylinder groups, though.
:
:Why? I thought this parameter describes (indirectly) only the total number
:of inodes in the FS, which is otherwise set proportionally to FS size,
:assuming it will be filled with very small files (2048B IIRC).
:
:I suspect it might have something to do with the placement policy (which
:CG to use to put additional blocks belonging to the file), but I don't see
:any immediate connection...
UFS/FFS allocates inodes and blocks from the bitmaps statistically. The
algorithm works best when each cylinder group has plenty of inodes and
there are enough cylinder groups that the block bitmaps do not get too
large; outside that range the allocator becomes less efficient.
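To see why an extreme -i value starves the allocator, compare how many
inodes each 256 MB cylinder group would get (a rough sketch using the
group size from the newfs example; the 16,000,000 figure is the value
tried in the original question):

```python
# With -i 16000000, a 256 MB cylinder group holds only a handful of
# inodes, leaving the statistical allocator almost nothing to pick
# from when it places new files near their directories.
cg_bytes = 256 * 1024 * 1024
for bpi in (262144, 16_000_000):
    print(f"-i {bpi:>10}: {cg_bytes // bpi} inodes per cylinder group")
```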
:> There may be problems specifying larger block sizes, though nothing
:> that we can't fix.
:
:What kind of problems? Will it simply not work, or will it corrupt the
:FS?
:
:Thanks a lot for these comments!
:
:Andrzej Bialecki
:// <abial@webgiro.com> WebGiro AB, Sweden (http://www.webgiro.com)
Well, the kernel itself has a 256KB block size limit. The problems you
will see with large block sizes mostly stem from the fact that the
buffer cache is not tuned to deal with them, not even in -current, so
it will not be very efficient. Caching large blocks also creates
inefficiencies in the VM system, because the VM system likes to cache
page-sized chunks (i.e. 4K on i386). The buffer cache is much less
efficient dealing with large buffers that have had holes poked into
them by the VM caching algorithms.
The disks will not be able to transfer file data any faster using large
blocks versus the default, so beyond a certain point the performance
simply stops improving.
I would recommend a 16K or 32K block size; the only real reason to go
that large is to reduce the number of indirect blockmap blocks required
to maintain the file.
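The indirect-block saving can be sketched as follows. Classic FFS inodes
carry 12 direct block pointers and use 4-byte block addresses (constants
from the 4.4BSD on-disk format; the sketch counts only leaf-level
blockmap blocks and ignores the few extra double/triple-indirect blocks):

```python
# How many leaf indirect blocks a file of a given size needs, as a
# function of block size. One indirect block holds bsize/4 pointers.
DIRECT_PTRS = 12   # direct block pointers in a classic FFS inode
PTR_SIZE = 4       # bytes per block address

def leaf_indirects(file_bytes, bsize):
    """Leaf-level indirect blocks needed to map a file of this size."""
    blocks = -(-file_bytes // bsize)                # ceiling division
    mapped_indirectly = max(0, blocks - DIRECT_PTRS)
    ptrs_per_indirect = bsize // PTR_SIZE
    return -(-mapped_indirectly // ptrs_per_indirect)

ONE_GB = 1 << 30
for bs in (8192, 16384, 32768):
    print(f"{bs:>5}-byte blocks: {leaf_indirects(ONE_GB, bs)} indirect blocks")
```

Doubling the block size quarters the indirect-block overhead, since each
indirect block is both bigger and maps bigger blocks.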
-Matt
Matthew Dillon
<dillon@backplane.com>
To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message