Date: Thu, 28 Dec 2006 19:24:45 +0100
From: Ivan Voras <ivoras@fer.hr>
To: freebsd-geom@freebsd.org
Subject: Re: gstripe performance scaling with many disks
Message-ID: <en125k$l2e$1@sea.gmane.org>
In-Reply-To: <20061228171858.GA11296@qlovarnika.bg.datamax>
References: <20061228171858.GA11296@qlovarnika.bg.datamax>
Vasil Dimov wrote:
> Can someone explain this?
> The tendency is for performance to drop when increasing the number of
> disks in a stripe, but there are some local peaks/extremums when using
> 8, 11 and 16 disks.

I'll take a shot at this: since maximum kernel reads are still limited
to 128 KB, adding more drives makes each individual request shorter.
I.e., with one drive it gets 128 KB requests; with two, each gets 64 KB;
with 16, each gets 8 KB. So network & kernel latency becomes visible.

AFAIK there's an unofficial (still?) GEOM_CACHE class which tries to get
around this by requesting & caching 128 KB from each drive. Search the
lists; it's mentioned somewhere.

> Yes, I have read
> http://lists.freebsd.org/pipermail/freebsd-geom/2006-November/001705.html
>
> kern.geom.stripe.fast is set to 1.

While you're playing with this, you could set vfs.read_max to 32 or
higher and see if it helps.
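To make the splitting arithmetic above concrete, here's a quick sketch
(my own illustration, not anything from GEOM itself; it assumes the
kernel's maximum I/O size is 128 KB, i.e. the MAXPHYS limit mentioned
above):

```python
# Sketch of how a single maximum-size kernel read is divided among
# the members of a stripe. Assumes a 128 KB request-size ceiling.
MAXPHYS_KB = 128

def per_disk_request_kb(ndisks):
    """Size of the slice each member disk sees when one 128 KB
    read is split evenly across ndisks striped drives."""
    return MAXPHYS_KB / ndisks

for n in (1, 2, 8, 16):
    print(f"{n:2d} disks -> {per_disk_request_kb(n):.0f} KB per disk")

# At 16 disks each member only transfers 8 KB per request, so the
# fixed per-request overhead (command setup, GEOM/kernel latency)
# starts to dominate the actual transfer time.
```

This is why adding disks can make sequential throughput scale worse
rather than better: the per-request overhead stays constant while the
useful payload per request shrinks.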
