From: Sean
Date: Fri, 22 Apr 2005 10:32:14 -0500
To: freebsd-performance@freebsd.org
Subject: Channel bonding.

I've been experimenting with the idea of doing channel bonding as a
means of improving the performance of some heavily used file servers.
Currently I am using a single Intel 1000MT interface on each file
server, and it has rather lackluster performance.

I've set two ports of my switch (an Extreme BlackDiamond 6800) to
'shared' and am using an Intel 1000MT Dual Port for the bonding
interfaces.  The performance increase I see is only marginally better
than the single interface (70 MB/s bonded vs. 60 MB/s single), which is
slightly disappointing.

I am using ifstat to monitor the interfaces and iostat for disk
throughput (30 MB/s on a 3ware 7500-12, again disappointing), and a
variant of tcpblast to generate traffic.  I'm using 4 other machines
(on the same blade of the switch) to generate the traffic to the bonded
interface; all are similar hardware with varying versions of FreeBSD.
To get the numbers as high as I have, I've enabled polling (it has some
stability issues when used under SMP).

Before I drop everything and move over to trying out ng_fec (a rough
setup sketch is appended below), I wanted to get a few opinions on
other things I can check or try.

These servers typically have anywhere between 20-100 clients reading
and writing many large files as fast as they can.  So far the machines
only perform well when there are fewer than 20 clients.  The whole
point of the experiment is to increase the performance of our current
resources instead of buying more servers.  I really don't know what to
expect (in terms of performance) from this, but just based on the
'ratings' of the individual parts this machine is not performing very
well.

In case anyone has any ideas, I've included the 'specs' of the hardware
below.
Hardware:
    Dual Intel Xeon CPU 2.66GHz
    Intel Server SE7501BR2 Motherboard
    2x 512 MB Registered ECC DDR RAM
    3ware 7500-12 (12x 120GB, RAID-5)
    Intel PRO/1000 MT Dual Port (em0, em1)
    Intel PRO/1000 MT (onboard) (em2)

Switch:
    Extreme BlackDiamond 6800
    Gigabit Blade: G24T^3 51052

Kernel:
    FreeBSD phoenix 5.3-RELEASE FreeBSD 5.3-RELEASE #1: Wed Apr 20 13:33:09 CDT 2005
    root@phoenix.franticfilms.com:/usr/src/sys/i386/compile/SMP i386

Channel bonding commands used:
    ifconfig em0 up
    ifconfig em1 up
    kldload ng_ether.ko
    ngctl mkpeer em0: one2many upper one
    ngctl connect em0: em0:upper lower many0
    ngctl connect em1: em0:upper lower many1
    echo Allow em1 to xmit/recv em0 frames
    ngctl msg em1: setpromisc 1
    ngctl msg em1: setautosrc 0
    ngctl msg em0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 ] }"
    ifconfig em0 A.B.C.D netmask 255.255.255.0

Contents of /etc/sysctl.conf:
    net.inet.tcp.inflight_enable=1
    net.inet.tcp.sendspace=32767
    net.inet.tcp.recvspace=32767
    net.inet.tcp.delayed_ack=0
    vfs.hirunningspace=10485760
    vfs.lorunningspace=10485760
    net.inet.tcp.local_slowstart_flightsize=32767
    net.inet.tcp.rfc1323=1
    kern.maxfilesperproc=2048
    vfs.vmiodirenable=1
    kern.ipc.somaxconn=4096
    kern.maxfiles=65536
    kern.polling.enable=1

--
Sean
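
A quick way to confirm that the one2many node above is really spreading
transmits across both links is to query its per-link statistics.  This
is only a sketch (the node name em0:upper and the two-link setup are
taken from the commands above; see ng_one2many(4) for the exact message
syntax on your release):

    # Show the current xmitAlg/failAlg/enabledLinks settings
    ngctl msg em0:upper getconfig
    # Per-link packet/byte counters; link numbers 0 and 1 correspond to
    # the many0/many1 hooks connected above
    ngctl msg em0:upper getstats 0
    ngctl msg em0:upper getstats 1

Note that one2many only balances outbound packets; inbound traffic is
distributed (or not) by the switch's own sharing algorithm, so the
receive side may still land mostly on one link.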
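
For the ng_fec(4) route mentioned above, a rough, untested sketch
(reusing em0/em1 and the placeholder address A.B.C.D from the setup
above; message names as documented in ng_fec(4) on 5.x) would look
something like:

    kldload ng_fec
    ngctl mkpeer fec dummy fec
    # Attach both physical interfaces to the fec0 aggregate
    ngctl msg fec0: add_iface '"em0"'
    ngctl msg fec0: add_iface '"em1"'
    # Hash on IP addresses; set_mode_mac would hash on MAC addresses instead
    ngctl msg fec0: set_mode_inet
    ifconfig fec0 inet A.B.C.D netmask 255.255.255.0 up

The existing one2many configuration would need to be torn down first
(e.g. ngctl shutdown em0:upper) so the address and hooks don't conflict.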