Date:      Fri, 29 Dec 2006 16:24:00 +0200
From:      Vasil Dimov <vd@FreeBSD.org>
To:        "R. B. Riddick" <arne_woerner@yahoo.com>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gstripe performance scaling with many disks
Message-ID:  <20061229142400.GA17217@qlovarnika.bg.datamax>
In-Reply-To: <20061229125545.12935.qmail@web30305.mail.mud.yahoo.com>
References:  <20061229120517.GA12877@qlovarnika.bg.datamax> <20061229125545.12935.qmail@web30305.mail.mud.yahoo.com>

On Fri, Dec 29, 2006 at 04:55:44AM -0800, R. B. Riddick wrote:
> --- Vasil Dimov <vd@FreeBSD.org> wrote:
> > Here is what further tests showed:
> >
> > First of all I switched from dd to raidtest. I had to tune raidtest in
> > order to use it, see ports/107311. I am testing only reading with 8
> > concurrent processes (raidtest test -n 8).
> >
> Interesting...
> But the degradation is much less now... Not 25% (or what was it) but just
> 10%...

Yes, that seems to be the case. And the oscillations are less pronounced.

> Did u try a different stripe size (-s 65536) with more concurrency (-n 20),
> too?

Here it is:

stripe size: 4096, read processes: 8
(this was my initial test with raidtest)
 2 disks:  2109917 b/s (exp  2110906), avg disk load: 1054958 ( 99.9%)
 3 disks:  3139464 b/s (exp  3166359), avg disk load: 1046488 ( 99.1%)
 4 disks:  4194968 b/s (exp  4221812), avg disk load: 1048742 ( 99.3%)
 5 disks:  5146921 b/s (exp  5277265), avg disk load: 1029384 ( 97.5%)
 6 disks:  6226682 b/s (exp  6332718), avg disk load: 1037780 ( 98.3%)
 7 disks:  7187536 b/s (exp  7388171), avg disk load: 1026790 ( 97.2%)
 8 disks:  8145568 b/s (exp  8443624), avg disk load: 1018196 ( 96.4%)
 9 disks:  9179785 b/s (exp  9499077), avg disk load: 1019976 ( 96.6%)
10 disks: 10065401 b/s (exp 10554530), avg disk load: 1006540 ( 95.3%)
11 disks: 11006498 b/s (exp 11609983), avg disk load: 1000590 ( 94.8%)
12 disks: 11878842 b/s (exp 12665436), avg disk load:  989903 ( 93.7%)
13 disks: 12905593 b/s (exp 13720889), avg disk load:  992737 ( 94.0%)
14 disks: 13670094 b/s (exp 14776342), avg disk load:  976435 ( 92.5%)
15 disks: 14474347 b/s (exp 15831795), avg disk load:  964956 ( 91.4%)
16 disks: 15474211 b/s (exp 16887248), avg disk load:  967138 ( 91.6%)
17 disks: 16021676 b/s (exp 17942701), avg disk load:  942451 ( 89.2%)

stripe size: 64*1024, read processes: 8
 2 disks:  2037981 b/s (exp  2112580), avg disk load: 1018990 ( 96.4%)
 3 disks:  2792740 b/s (exp  3168870), avg disk load:  930913 ( 88.1%)
 4 disks:  3686512 b/s (exp  4225160), avg disk load:  921628 ( 87.2%)
 5 disks:  4133447 b/s (exp  5281450), avg disk load:  826689 ( 78.2%)
 6 disks:  4370325 b/s (exp  6337740), avg disk load:  728387 ( 68.9%)
 7 disks:  5241010 b/s (exp  7394030), avg disk load:  748715 ( 70.8%)
 8 disks:  5249938 b/s (exp  8450320), avg disk load:  656242 ( 62.1%)
 9 disks:  5458054 b/s (exp  9506610), avg disk load:  606450 ( 57.4%)
10 disks:  6381395 b/s (exp 10562900), avg disk load:  638139 ( 60.4%)
11 disks:  6409845 b/s (exp 11619190), avg disk load:  582713 ( 55.1%)
12 disks:  6539793 b/s (exp 12675480), avg disk load:  544982 ( 51.5%)
13 disks:  7261850 b/s (exp 13731770), avg disk load:  558603 ( 52.8%)
14 disks:  6814684 b/s (exp 14788060), avg disk load:  486763 ( 46.0%)
15 disks:  7535144 b/s (exp 15844350), avg disk load:  502342 ( 47.5%)
16 disks:  6971418 b/s (exp 16900640), avg disk load:  435713 ( 41.2%)
17 disks:  7880572 b/s (exp 17956930), avg disk load:  463563 ( 43.8%)

stripe size: 4096, read processes: 20
 2 disks:  2107385 b/s (exp  2112176), avg disk load: 1053692 ( 99.7%)
 3 disks:  3143703 b/s (exp  3168264), avg disk load: 1047901 ( 99.2%)
 4 disks:  4206919 b/s (exp  4224352), avg disk load: 1051729 ( 99.5%)
 5 disks:  5167176 b/s (exp  5280440), avg disk load: 1033435 ( 97.8%)
 6 disks:  6262062 b/s (exp  6336528), avg disk load: 1043677 ( 98.8%)
 7 disks:  7271021 b/s (exp  7392616), avg disk load: 1038717 ( 98.3%)
 8 disks:  8260114 b/s (exp  8448704), avg disk load: 1032514 ( 97.7%)
 9 disks:  9238876 b/s (exp  9504792), avg disk load: 1026541 ( 97.2%)
10 disks: 10147589 b/s (exp 10560880), avg disk load: 1014758 ( 96.0%)
11 disks: 11063027 b/s (exp 11616968), avg disk load: 1005729 ( 95.2%)
12 disks: 12298836 b/s (exp 12673056), avg disk load: 1024903 ( 97.0%)
13 disks: 12893838 b/s (exp 13729144), avg disk load:  991833 ( 93.9%)
14 disks: 13927065 b/s (exp 14785232), avg disk load:  994790 ( 94.1%)
15 disks: 14851486 b/s (exp 15841320), avg disk load:  990099 ( 93.7%)
16 disks: 15630142 b/s (exp 16897408), avg disk load:  976883 ( 92.5%)
17 disks: 16685858 b/s (exp 17953496), avg disk load:  981521 ( 92.9%)

stripe size: 64*1024, read processes: 20
 2 disks:  2089750 b/s (exp  2111630), avg disk load: 1044875 ( 98.9%)
 3 disks:  3081869 b/s (exp  3167445), avg disk load: 1027289 ( 97.2%)
 4 disks:  3866341 b/s (exp  4223260), avg disk load:  966585 ( 91.5%)
 5 disks:  4541626 b/s (exp  5279075), avg disk load:  908325 ( 86.0%)
 6 disks:  5529365 b/s (exp  6334890), avg disk load:  921560 ( 87.2%)
 7 disks:  6311299 b/s (exp  7390705), avg disk load:  901614 ( 85.3%)
 8 disks:  6363864 b/s (exp  8446520), avg disk load:  795483 ( 75.3%)
 9 disks:  6934731 b/s (exp  9502335), avg disk load:  770525 ( 72.9%)
10 disks:  7622329 b/s (exp 10558150), avg disk load:  762232 ( 72.1%)
11 disks:  7806745 b/s (exp 11613965), avg disk load:  709704 ( 67.2%)
12 disks:  8921822 b/s (exp 12669780), avg disk load:  743485 ( 70.4%)
13 disks:  9380174 b/s (exp 13725595), avg disk load:  721551 ( 68.3%)
14 disks:  9453859 b/s (exp 14781410), avg disk load:  675275 ( 63.9%)
15 disks: 10319599 b/s (exp 15837225), avg disk load:  687973 ( 65.1%)
16 disks: 10074550 b/s (exp 16893040), avg disk load:  629659 ( 59.6%)
17 disks: 10527268 b/s (exp 17948855), avg disk load:  619251 ( 58.6%)
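
For reference, each data point above was collected roughly like this.
This is only a sketch: the gstripe commands and "raidtest test -n" are
what actually got used, but the disk names, the stripe name and the
request-file preparation step are placeholders (see the raidtest port
for the exact syntax):

  # build the stripe; -s sets the stripe size (4096 or 65536 above)
  gstripe label -v -s 65536 test /dev/da0 /dev/da1 /dev/da2 /dev/da3

  # replay the (previously generated) raidtest request file with
  # 8 or 20 concurrent reader processes
  raidtest test -n 8

  # tear the stripe down before repeating with a different disk count
  gstripe stop test
  gstripe clear /dev/da0 /dev/da1 /dev/da2 /dev/da3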

In summary, increasing the stripe size results in a performance drop,
while increasing the number of read processes results in a performance
gain. (I also tested with a stripe size of 128*1024.) My guess is that
this is because raidtest mostly generates read requests that are smaller
than the stripe size, so each request is served by just one disk, or at
most a few. It rarely (never?) generates a request of 17*stripesize or
larger, which would be needed to get all the disks reading in parallel.
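
To illustrate the arithmetic behind that guess (a hypothetical
back-of-the-envelope example; the offset and size below are made up, not
taken from raidtest): a request no larger than the stripe size can cross
at most one stripe boundary, so it touches at most two disks no matter
how many members the stripe has:

  # hypothetical check, not part of any tool
  stripesize=65536
  offset=1234567      # arbitrary request offset
  size=32768          # a request smaller than the stripe size
  span=$(( ((offset % stripesize) + size + stripesize - 1) / stripesize ))
  echo "request touches at most ${span} disk(s)"
  # prints 2 here; never more than 2 as long as size <= stripesize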

Btw, I found some strange behavior in graid3; I will post about it in a
separate thread...

-- 
Vasil Dimov
gro.DSBeerF@dv
%
Death liked black. It went with anything. It went with everything,
sooner or later.
    -- (Terry Pratchett, Soul Music)
