From: Vasil Dimov <vd@datamax.bg>
To: freebsd-geom@freebsd.org
Date: Thu, 28 Dec 2006 19:18:58 +0200
Subject: gstripe performance scaling with many disks
Message-ID: <20061228171858.GA11296@qlovarnika.bg.datamax>
Reply-To: vd@FreeBSD.org

Hi,

I wanted to do some measuring of gstripe performance, so I created 17
devices, each of which can read at approximately 1 MiB/sec. (For this
purpose I use ggate[dc] with a bandwidth limitation between the client and
the server. Any suggestions for a simpler setup are welcome :-)

So here are the results. Read speed is measured with
"dd of=/dev/null bs=1m".
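(The real scripts are linked further down; as a rough illustration of the
measuring idea only, a minimal parallel dd read check could look like the
sketch below. The function name parallel_read, the block size and the
device paths are my assumptions, not the original simple_read.sh.)

```sh
#!/bin/sh
# Sketch of a parallel-read check (not the original simple_read.sh):
# read every given device/file with dd at the same time and report how
# many bytes each one delivered, so mutual slowdown can be spotted.
parallel_read() {
    for dev in "$@"; do
        (
            # dd reports "N bytes ..." on stderr on both FreeBSD and GNU dd
            bytes=$(dd if="$dev" of=/dev/null bs=64k 2>&1 |
                    awk '/bytes/ { print $1; exit }')
            echo "$dev: $bytes bytes"
        ) &
    done
    wait    # let all background readers finish before returning
}

# Usage sketch: parallel_read /dev/ggate0 /dev/ggate1 ... /dev/ggate16
```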
This is imperfect and not very close to reality, but it is simple and can
easily be changed if necessary.

First of all I ensure that the devices do not slow each other down when
used simultaneously (numbers are in bytes/sec):

% ./simple_read.sh
single read: 1056381
parallel read: min: 1056164 max: 1056836 avg: 1056599
%

("parallel" means reading from all 17 disks simultaneously.)

Then I run this script I crafted. I hope the output is self-explanatory
("exp" stands for "expected" and is calculated as
NUMBER_OF_DISKS * SINGLE_DISK_READ_SPEED):

% ./stripe_test.sh
 2 disks:  2080579 b/s (exp  2113778), avg disk load: 1040289 (98.4%)
 3 disks:  3047572 b/s (exp  3170667), avg disk load: 1015857 (96.1%)
 4 disks:  3970992 b/s (exp  4227556), avg disk load:  992748 (93.9%)
 5 disks:  4679840 b/s (exp  5284445), avg disk load:  935968 (88.5%)
 6 disks:  5460233 b/s (exp  6341334), avg disk load:  910038 (86.1%)
 7 disks:  6390730 b/s (exp  7398223), avg disk load:  912961 (86.3%)
 8 disks:  7654336 b/s (exp  8455112), avg disk load:  956792 (90.5%)
 9 disks:  7707020 b/s (exp  9512001), avg disk load:  856335 (81.0%)
10 disks:  8188495 b/s (exp 10568890), avg disk load:  818849 (77.4%)
11 disks:  9478435 b/s (exp 11625779), avg disk load:  861675 (81.5%)
12 disks:  9457988 b/s (exp 12682668), avg disk load:  788165 (74.5%)
13 disks:  9653010 b/s (exp 13739557), avg disk load:  742539 (70.2%)
14 disks:  9649053 b/s (exp 14796446), avg disk load:  689218 (65.2%)
15 disks: 10162721 b/s (exp 15853335), avg disk load:  677514 (64.1%)
16 disks: 12659054 b/s (exp 16910224), avg disk load:  791190 (74.8%)
17 disks: 12506097 b/s (exp 17967113), avg disk load:  735652 (69.6%)
%

Can someone explain this? The tendency is for performance to drop as the
number of disks in the stripe increases, but there are local peaks when
using 8, 11 and 16 disks.

Yes, I have read
http://lists.freebsd.org/pipermail/freebsd-geom/2006-November/001705.html
and kern.geom.stripe.fast is set to 1.
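(For reference, the "exp", "avg disk load" and percentage columns above can
be reproduced with plain sh(1) integer arithmetic. This is a sketch under
the assumption that a single device reads at 1056889 bytes/sec, as implied
by the 2-disk "exp" value; the function name stripe_line is mine, not from
stripe_test.sh.)

```sh
#!/bin/sh
# Reproduce one output line of the test: exp = ndisks * single-disk speed,
# avg disk load = measured / ndisks, efficiency = measured / exp.
SINGLE=1056889  # bytes/sec per device, assumed from exp(2 disks) / 2

stripe_line() {
    ndisks=$1; measured=$2
    exp=$((ndisks * SINGLE))
    avg=$((measured / ndisks))
    pct10=$((1000 * measured / exp))     # efficiency in tenths of a percent
    echo "$ndisks disks: $measured b/s (exp $exp)," \
         "avg disk load: $avg ($((pct10 / 10)).$((pct10 % 10))%)"
}

stripe_line 2 2080579
# prints: 2 disks: 2080579 b/s (exp 2113778), avg disk load: 1040289 (98.4%)
```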
The scripts can be downloaded from http://people.freebsd.org/~vd/geom_test/

I intend to extend this test by:
* testing graid3
* measuring with something other than dd(1)
* measuring write speed

-- 
Vasil Dimov
gro.DSBeerF@dv
%
Laugh at your problems: everybody else does.