From: Sean McNeil <sean@mcneil.com>
To: Emanuel Strobl
Cc: freebsd-current@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: serious networking (em) performance (ggate and NFS) problem
Date: Wed, 17 Nov 2004 15:17:19 -0800
Message-Id: <1100733439.21798.36.camel@server.mcneil.com>
In-Reply-To: <200411172357.47735.Emanuel.Strobl@gmx.net>

On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:
> Dear best guys,
>
> I really love 5.3 in many ways, but here are some unbelievable transfer
> rates, after I went out and bought a pair of Intel Gigabit Ethernet cards
> to solve my performance problem (*laugh*):
>
> (In short, see *** below)
>
> Tests were done with two Intel Gigabit Ethernet cards (82547EI, 32-bit PCI
> desktop adapter MT) connected directly without a switch/hub, and "device
> polling" compiled into a custom kernel with HZ set to 256 and
> kern.polling.enabled set to "1":
>
> LOCAL:
> (/samsung is ufs2 on /dev/ad4p1, a SAMSUNG SP080N2)
> test3:~#7: dd if=/dev/zero of=/samsung/testfile bs=16k
> ^C10524+0 records in
> 10524+0 records out
> 172425216 bytes transferred in 3.284735 secs (52492882 bytes/sec)
> ->                                            ^^^^^^^^ ~ 52MB/s
> NFS (udp, polling):
> (/samsung is nfs on test3:/samsung, via em0, x-over, polling enabled)
> test2:/#21: dd if=/dev/zero of=/samsung/testfile bs=16k
> ^C1858+0 records in
> 1857+0 records out
> 30425088 bytes transferred in 8.758475 secs (3473788 bytes/sec)
> ->                                           ^^^^^^^ ~ 3.4MB/s
>
> This example shows that using NFS over Gigabit Ethernet decimates
> performance by a factor of 15, in words: fifteen!
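As a side note, for anyone who wants to reproduce the polling setup described
above, this is roughly what it takes on 5.3 (a sketch from memory, not
verified; the kernel option and sysctl names below are my assumption, so
double-check them with "sysctl kern.polling"):

  # custom kernel config:
  options DEVICE_POLLING
  options HZ=256

  # enable polling at runtime:
  sysctl kern.polling.enable=1
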
>
> GGATE with MTU 16114 and polling:
> test2:/dev#28: ggatec create 10.0.0.2 /dev/ad4p1
> ggate0
> test2:/dev#29: mount /dev/ggate0 /samsung/
> test2:/dev#30: dd if=/dev/zero of=/samsung/testfile bs=16k
> ^C2564+0 records in
> 2563+0 records out
> 41992192 bytes transferred in 15.908581 secs (2639594 bytes/sec)
> ->                                            ^^^^^^^ ~ 2.6MB/s
>
> GGATE without polling and MTU 16114:
> test2:~#12: ggatec create 10.0.0.2 /dev/ad4p1
> ggate0
> test2:~#13: mount /dev/ggate0 /samsung/
> test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=128k
> ^C1282+0 records in
> 1281+0 records out
> 167903232 bytes transferred in 11.274768 secs (14891945 bytes/sec)
> ->                                             ^^^^^^^^ ~ 15MB/s
> .....and with 1m blocksize:
> test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
> ^C61+0 records in
> 60+0 records out
> 62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
> ->                                           ^^^^^^^^ ~ 13.6MB/s
>
> I can't imagine why there seems to be an absolute limit of 15MB/s that can
> be transferred over the network.
> But it's even worse; here are two excerpts of NFS (udp) with jumbo frames
> (mtu=16114):
> test2:~#23: mount 10.0.0.2:/samsung /samsung/
> test2:~#24: dd if=/dev/zero of=/samsung/testfile bs=1m
> ^C89+0 records in
> 88+0 records out
> 92274688 bytes transferred in 13.294708 secs (6940708 bytes/sec)
> ->                                            ^^^^^^^ ~ 7MB/s
> .....and with 64k blocksize:
> test2:~#25: dd if=/dev/zero of=/samsung/testfile bs=64k
> ^C848+0 records in
> 847+0 records out
> 55508992 bytes transferred in 8.063415 secs (6884055 bytes/sec)
>
> And with TCP-NFS (and jumbo frames):
> test2:~#30: mount_nfs -T 10.0.0.2:/samsung /samsung/
> test2:~#31: dd if=/dev/zero of=/samsung/testfile bs=64k
> ^C1921+0 records in
> 1920+0 records out
> 125829120 bytes transferred in 7.461226 secs (16864403 bytes/sec)
> ->                                            ^^^^^^^^ ~ 17MB/s
>
> Again NFS (udp), but with MTU 1500:
> test2:~#9: mount_nfs 10.0.0.2:/samsung /samsung/
> test2:~#10: dd if=/dev/zero of=/samsung/testfile bs=8k
> ^C12020+0 records in
> 12019+0 records out
> 98459648 bytes transferred in 10.687460 secs (9212633 bytes/sec)
> ->                                           ^^^^^^^ ~ 10MB/s
> And TCP-NFS with MTU 1500:
> test2:~#12: mount_nfs -T 10.0.0.2:/samsung /samsung/
> test2:~#13: dd if=/dev/zero of=/samsung/testfile bs=8k
> ^C19352+0 records in
> 19352+0 records out
> 158531584 bytes transferred in 12.093529 secs (13108794 bytes/sec)
> ->                                            ^^^^^^^^ ~ 13MB/s
>
> GGATE with default MTU of 1500, polling disabled:
> test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=64k
> ^C971+0 records in
> 970+0 records out
> 63569920 bytes transferred in 6.274578 secs (10131346 bytes/sec)
> ->                                           ^^^^^^^^ ~ 10MB/s
>
>
> Conclusion:
>
> ***
>
> - It seems that GEOM_GATE is less efficient with Gigabit (em) than NFS via
> TCP is.
>
> - em seems to have problems with MTUs greater than 1500.
>
> - UDP seems to have performance disadvantages over TCP regarding NFS, which
> should be the other way around, AFAIK.
>
> - Polling with em (GbE) and HZ=256 is definitely not a good idea; even
> 10Base-2 can compete.
>
> - NFS over TCP with an MTU of 16114 gives the maximum transfer rate for
> large files over Gigabit Ethernet, at 17MB/s, a quarter of what I'd expect
> with my test equipment.
>
> - Overall network performance (regarding large file transfers) is horrible.
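Out of curiosity, did you also try larger NFS I/O sizes on the TCP mount with
jumbo frames? Something along these lines; the 32768 read/write sizes and the
dd count are just illustrative values I picked, not taken from your tests:

  # on both machines, bring em0 up with the jumbo MTU:
  ifconfig em0 mtu 16114

  # TCP mount with explicit read/write sizes:
  mount_nfs -T -r 32768 -w 32768 10.0.0.2:/samsung /samsung/

  # bounded write test instead of interrupting dd by hand:
  dd if=/dev/zero of=/samsung/testfile bs=64k count=2048
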
>
> Please, if anybody has the knowledge to dig into these problems, let me
> know if I can do any tests to help make ggate and NFS usable in fast
> 5.3-stable environments.

I am very interested in this, as I have similar issues with the re driver.
It is horrible when operating at gigE vs. 100BT.

Have you tried plugging the machines into 100BT instead?

Cheers,
Sean
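P.S. If there is no 100BT switch handy, forcing the em interfaces down to
100 Mbit over the same crossover cable should be enough for the comparison.
Roughly (the media strings are from memory; "ifconfig -m em0" lists the exact
ones your cards support):

  ifconfig em0 media 100baseTX mediaopt full-duplex
  ifconfig em0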