Date: Thu, 28 Dec 2006 19:18:58 +0200
From: Vasil Dimov <vd@FreeBSD.org>
To: freebsd-geom@freebsd.org
Subject: gstripe performance scaling with many disks
Message-ID: <20061228171858.GA11296@qlovarnika.bg.datamax>
Hi,
I wanted to measure gstripe performance, so I created 17 devices,
each of which can read at approximately 1MiB/s.
(For this purpose I use ggate[dc] with a bandwidth limitation between
the client and the server. Any suggestions for a simpler setup are
welcome :-)
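To illustrate the setup (a sketch; the dummynet pipe is just one way
to do the throttling, and 3080 is ggated's default port):

# on the server: export a provider read-only and start ggated
echo "127.0.0.1 RO /dev/md0" >> /etc/gg.exports
ggated

# throttle the ggate traffic to roughly 1MiB/s with a dummynet pipe
ipfw pipe 1 config bw 8Mbit/s
ipfw add pipe 1 tcp from any 3080 to any

# on the client: create one ggate device per "disk"
ggatec create -o ro 127.0.0.1 /dev/md0    # creates /dev/ggate0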
So here are the results:
Read speed is measured with "dd of=/dev/null bs=1m", reading straight
from the device. This is imperfect and not very close to a real
workload, but it is simple and can easily be changed if necessary.
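For example, the per-device figure can be taken from the summary line
that dd(1) prints on stderr (a sketch, device name assumed):

# read 10MiB and pick the bytes/sec figure from dd's summary line
speed=`dd if=/dev/ggate0 of=/dev/null bs=1m count=10 2>&1 | \
    awk -F'[( ]' '/bytes\/sec/ { print $(NF - 1) }'`
echo $speed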
First of all I made sure that the devices do not slow each other down
when used simultaneously (numbers are in bytes/sec):
% ./simple_read.sh
single read:
1056381
parallel read:
min: 1056164
max: 1056836
avg: 1056599
%
("parallel" means reading from all 17 disks simultaneously).
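The parallel part boils down to something like this (a sketch;
simple_read.sh itself is at the URL near the end):

# start one dd per device in the background, then wait for all
for d in /dev/ggate*; do
    dd if=$d of=/dev/null bs=1m count=10 2> /tmp/`basename $d`.log &
done
wait
# min/max/avg are then computed from the bytes/sec figures in the logs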
Then I ran a script I crafted. I hope the output is self-explanatory
("exp" stands for "expected" and is calculated as
NUMBER_OF_DISKS * SINGLE_DISK_READ_SPEED):
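The core of the script amounts to the following (a sketch; the 64KiB
stripe size and the device names are assumptions, the real script is
at the URL near the end):

#!/bin/sh
for n in `jot 16 2`; do                     # n = 2 .. 17
    devs=""
    for i in `jot $n 0`; do                 # ggate0 .. ggate(n-1)
        devs="$devs ggate$i"
    done
    gstripe label -s 65536 test $devs
    dd if=/dev/stripe/test of=/dev/null bs=1m count=100 2> result.$n
    gstripe stop test
done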
% ./stripe_test.sh
2 disks: 2080579 b/s (exp 2113778), avg disk load: 1040289 (98.4%)
3 disks: 3047572 b/s (exp 3170667), avg disk load: 1015857 (96.1%)
4 disks: 3970992 b/s (exp 4227556), avg disk load: 992748 (93.9%)
5 disks: 4679840 b/s (exp 5284445), avg disk load: 935968 (88.5%)
6 disks: 5460233 b/s (exp 6341334), avg disk load: 910038 (86.1%)
7 disks: 6390730 b/s (exp 7398223), avg disk load: 912961 (86.3%)
8 disks: 7654336 b/s (exp 8455112), avg disk load: 956792 (90.5%)
9 disks: 7707020 b/s (exp 9512001), avg disk load: 856335 (81.0%)
10 disks: 8188495 b/s (exp 10568890), avg disk load: 818849 (77.4%)
11 disks: 9478435 b/s (exp 11625779), avg disk load: 861675 (81.5%)
12 disks: 9457988 b/s (exp 12682668), avg disk load: 788165 (74.5%)
13 disks: 9653010 b/s (exp 13739557), avg disk load: 742539 (70.2%)
14 disks: 9649053 b/s (exp 14796446), avg disk load: 689218 (65.2%)
15 disks: 10162721 b/s (exp 15853335), avg disk load: 677514 (64.1%)
16 disks: 12659054 b/s (exp 16910224), avg disk load: 791190 (74.8%)
17 disks: 12506097 b/s (exp 17967113), avg disk load: 735652 (69.6%)
%
Can someone explain this?
The general tendency is for performance to drop as the number of disks
in the stripe increases, but there are local peaks when using 8, 11
and 16 disks.
Yes, I have read
http://lists.freebsd.org/pipermail/freebsd-geom/2006-November/001705.html
kern.geom.stripe.fast is set to 1.
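For completeness, the relevant sysctls (to the best of my knowledge
fast_failed counts the fallbacks to the slower "economic" mode and
maxmem limits the memory fast mode may use, so both are worth checking
when fast mode is expected to stay engaged with many disks):

% sysctl kern.geom.stripe.fast=1
% sysctl kern.geom.stripe.fast_failed
% sysctl kern.geom.stripe.maxmem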
The scripts can be downloaded from
http://people.freebsd.org/~vd/geom_test/
I intend to extend this test by:
* testing graid3 (see the sketch below)
* measuring with something other than dd(1)
* measuring write speed
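(graid3 requires 2^n + 1 components, so all 17 devices fit nicely as
16 data disks plus one parity disk. The planned test would be along
these lines -- a sketch, names assumed:)

# label a 17-component raid3 array from ggate0 .. ggate16
graid3 label r3test `jot -w ggate%d 17 0`
dd if=/dev/raid3/r3test of=/dev/null bs=1m count=100
graid3 stop r3test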
--
Vasil Dimov
gro.DSBeerF@dv
%
Laugh at your problems: everybody else does.
