Date:      Fri, 29 Dec 2006 14:05:18 +0200
From:      Vasil Dimov <vd@FreeBSD.org>
To:        freebsd-geom@freebsd.org
Subject:   Re: gstripe performance scaling with many disks
Message-ID:  <20061229120517.GA12877@qlovarnika.bg.datamax>
In-Reply-To: <20061228171858.GA11296@qlovarnika.bg.datamax>
References:  <20061228171858.GA11296@qlovarnika.bg.datamax>

Hi,

Thank you very much for your answers!

Here is what further tests showed:

First of all, I switched from dd to raidtest. I had to tune raidtest in
order to use it; see ports/107311. I am testing reads only, with 8
concurrent processes (raidtest test -n 8).
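
For reference, the setup was along these lines; the stripe size and the
raidtest flags other than -n are illustrative (from memory), not
necessarily the exact ones I used:

  gstripe label -v -s 65536 stest ggate100 ggate101 ... ggate116  # stripe size is an example
  raidtest genfile -s <mediasize> -S 512 -n 50000                 # generate the random request file
  raidtest test -d /dev/stripe/stest -n 8                         # 8 concurrent reader processes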

 2 disks:  2103646 b/s (exp  2107584), avg disk load: 1051823 ( 99.8%)
 3 disks:  3134534 b/s (exp  3161376), avg disk load: 1044844 ( 99.1%)
 4 disks:  4153974 b/s (exp  4215168), avg disk load: 1038493 ( 98.5%)
 5 disks:  5199917 b/s (exp  5268960), avg disk load: 1039983 ( 98.6%)
 6 disks:  6141678 b/s (exp  6322752), avg disk load: 1023613 ( 97.1%)
 7 disks:  7193116 b/s (exp  7376544), avg disk load: 1027588 ( 97.5%)
 8 disks:  8219609 b/s (exp  8430336), avg disk load: 1027451 ( 97.5%)
 9 disks:  9080762 b/s (exp  9484128), avg disk load: 1008973 ( 95.7%)
10 disks: 10241349 b/s (exp 10537920), avg disk load: 1024134 ( 97.1%)
11 disks: 11077983 b/s (exp 11591712), avg disk load: 1007089 ( 95.5%)
12 disks: 11851009 b/s (exp 12645504), avg disk load:  987584 ( 93.7%)
13 disks: 12663548 b/s (exp 13699296), avg disk load:  974119 ( 92.4%)
14 disks: 13821213 b/s (exp 14753088), avg disk load:  987229 ( 93.6%)
15 disks: 14283895 b/s (exp 15806880), avg disk load:  952259 ( 90.3%)
16 disks: 15057168 b/s (exp 16860672), avg disk load:  941073 ( 89.3%)
17 disks: 16171889 b/s (exp 17914464), avg disk load:  951287 ( 90.2%)
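
To spell out the columns: "exp" is N times 1053792 b/s (the implied
single-disk rate), "avg disk load" is the measured total divided by N,
and the percentage is measured/expected. For the 2-disk row:

  avg disk load = 2103646 / 2       = 1051823 b/s
  efficiency    = 2103646 / 2107584 =    99.8%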

The results show the same tendency as with dd(1).
Changing vfs.read_max from 8 to 32 makes no difference.
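
For the record, that tweak is just (as root):

  sysctl vfs.read_max=32    # sysctl vfs.read_max=8 restores the previous value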

These are gstat screenshots taken during the test:

1 drive:
dT: 0.501s  w: 0.500s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    8     12     12   1003  548.1      0      0    0.0   96.6| ggate100

8 drives:
dT: 0.501s  w: 0.500s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    6    116    116   1028   63.4      0      0    0.0   97.6| ggate100
    2    118    118   1080   50.6      0      0    0.0  102.0| ggate101
    4    114    114   1042   45.9      0      0    0.0  100.8| ggate102
    4    110    110    982   39.3      0      0    0.0   96.0| ggate103
    4    116    116   1027   43.7      0      0    0.0   98.6| ggate104
    6    116    116   1056   58.4      0      0    0.0  101.6| ggate105
    8    124    124   1029   60.0      0      0    0.0   98.0| ggate107
    7    122    122   1051   61.9      0      0    0.0  106.6| ggate106
    8    130    130   8230   62.8      0      0    0.0   99.6| stripe/stest

17 drives:
dT: 0.563s  w: 0.500s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    5    185    178    848   18.9      0      0    0.0   88.8| ggate100
    4    183    176    831   19.8      0      0    0.0   87.9| ggate101
    4    181    174    822   18.8      0      0    0.0   87.0| ggate102
    4    165    165    801   17.7      0      0    0.0   86.6| ggate103
    2    167    167    820   17.3      0      0    0.0   85.0| ggate104
    1    176    176    840   18.8      0      0    0.0   86.4| ggate105
    4    167    167    812   23.5      0      0    0.0   85.5| ggate107
    3    167    167    802   19.9      0      0    0.0   85.9| ggate108
    1    172    172    841   19.9      0      0    0.0   86.5| ggate109
    2    170    170    852   28.5      0      0    0.0   87.4| ggate110
    4    172    172    847   21.4      0      0    0.0   87.1| ggate111
    4    174    174    838   22.3      0      0    0.0   87.0| ggate112
    5    163    163    789   16.2      0      0    0.0   86.8| ggate113
    4    162    162    802   15.7      0      0    0.0   86.4| ggate114
    3    162    162    823   17.0      0      0    0.0   86.2| ggate115
    3    170    170    825   18.2      0      0    0.0   86.6| ggate116
    5    169    169    810   25.7      0      0    0.0   86.1| ggate106
    8    245    238  13573   27.5      0      0    0.0   90.1| stripe/stest
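
(If anyone wants to reproduce these views with only the relevant
providers shown, gstat has a -f option that takes a regular expression,
for example:

  gstat -f 'ggate|stripe'

I am citing -f from memory, so check gstat(8) on your version.)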

I have not tried geom_cache...

-- 
Vasil Dimov
gro.DSBeerF@dv
%
If the code and the comments disagree, then both are probably wrong.
                -- Norm Schryer
