From: Krassimir Slavchev <krassi@bulinfo.net>
Date: Thu, 13 Aug 2009 08:35:26 +0300
To: Nathan Le Nevez
Cc: freebsd-performance@freebsd.org
Subject: Re: Very slow I/O performance on HP BL465c
Message-ID: <4A83A61E.6010009@bulinfo.net>
List-Id: Performance/tuning (freebsd-performance@freebsd.org)

Nathan Le Nevez wrote:
> I'm fairly certain this is a hardware problem; swapping the disks from
> a known working install on another blade produced the same lousy
> performance.

Hmm. Check the SAS cables between the controller and the disk tray.
Also check the connector pins. I had similar problems on a DL380.

Best Regards

> Thanks for your help; time to put in a call with HP, although without
> any real errors to show them it is going to be a challenge.
>
> On 12/08/09 5:57 PM, "Krassimir Slavchev" wrote:
>
> Is it possible to exchange disks between your blade1 and blade2 servers?
> Or to remove the disks from one server and connect them to another?
> Also compare the 'tunefs -p /' outputs on both.
> Also compare the read speed of the raw device, e.g. with 'dd if=/dev/da0
> of=/dev/null bs=1m count=100'.
>
> Nathan Le Nevez wrote:
>> # df -h
>> Filesystem     Size    Used   Avail Capacity  Mounted on
>> /dev/da0s1a    496M    224M    232M    49%    /
>> devfs          1.0K    1.0K      0B   100%    /dev
>> /dev/da0s1e    496M     14K    456M     0%    /tmp
>> /dev/da0s1f    119G    623M    109G     0%    /usr
>> /dev/da0s1d    4.8G    346K    4.4G     0%    /var
>> # mount
>> /dev/da0s1a on / (ufs, local)
>> devfs on /dev (devfs, local)
>> /dev/da0s1e on /tmp (ufs, local, soft-updates)
>> /dev/da0s1f on /usr (ufs, local, soft-updates)
>> /dev/da0s1d on /var (ufs, local, soft-updates)
>>
>> /    - Throughput 6.59862 MB/sec 4 procs
>> /usr - Throughput 14.487 MB/sec 4 procs
>
> On 12/08/09 3:41 PM, "Krassimir Slavchev" wrote:
>
>> Looks okay.
>> How are your disks partitioned, and from where are you running dbench?
>> Look at the -D option. For example I have:
>> /    without soft updates -> Throughput 72.7276 MB/sec 4 procs
>> /var with soft updates    -> Throughput 286.528 MB/sec 4 procs
>>
>> Are you sure that you are not running dbench on a zfs or encrypted
>> partition?
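The raw-device read check suggested above is easy to repeat on both blades if wrapped in a small script. A minimal sketch, following the 'dd if=/dev/da0 ... bs=1m count=100' suggestion from the thread; the /dev/da0 path is the ciss(4) volume named here and should be adjusted for your own system, and the count parameter is an added convenience:

```shell
#!/bin/sh
# Minimal sketch of the raw-read comparison suggested in the thread:
# read COUNT 1 MB blocks straight off a device (or file), bypassing the
# filesystem so UFS layout and soft-updates settings cannot skew things.
# /dev/da0 is the ciss(4) volume named in this thread; adjust as needed.

raw_read_check() {
    dev=$1
    count=${2:-100}    # 100 x 1 MB = 100 MB, as in the original suggestion
    # 1048576 bytes is "bs=1m" in FreeBSD dd notation; the byte count is
    # spelled out so the same line also works with GNU dd (which wants 1M).
    dd if="$dev" of=/dev/null bs=1048576 count="$count" 2>&1
}

# Run the same check on both blades and compare the rates dd reports, e.g.:
#   raw_read_check /dev/da0
```

Reading the raw device takes the filesystem out of the picture, so if the two blades still differ here the problem is below UFS: the controller, cabling, or disks.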
>
>> Nathan Le Nevez wrote:
>>> # vmstat -i
>>> interrupt                          total       rate
>>> irq1: atkbd0                          18          0
>>> irq5: ohci0 ohci1+                     1          0
>>> irq19: ciss0                      144916          3
>>> irq21: uhci0                          22          0
>>> cpu0: timer                     80002970       1999
>>> irq256: bce0                       17042          0
>>> cpu2: timer                     79994902       1999
>>> cpu1: timer                     79994975       1999
>>> cpu3: timer                     79995009       1999
>>> cpu6: timer                     79994957       1999
>>> cpu5: timer                     79995046       1999
>>> cpu4: timer                     79995041       1999
>>> cpu7: timer                     79995057       1999
>>> Total                          640129956      16000
>>>
>>> # camcontrol tags da0
>>> (pass0:ciss0:0:0:0): device openings: 254
>>>
>>> Just for clarification, both systems are running amd64.
>>>
>>> Thanks,
>>>
>>> Nathan
>>>
>>> -----Original Message-----
>>> From: Krassimir Slavchev [mailto:krassi@bulinfo.net]
>>> Sent: Tuesday, 11 August 2009 9:45 PM
>>> To: Nathan Le Nevez
>>> Cc: freebsd-performance@freebsd.org
>>> Subject: Re: Very slow I/O performance on HP BL465c
>>>
>>> Hi,
>>>
>>> What is the output of 'vmstat -i' and 'camcontrol tags da0'?
>>> I have an ML350 running 7-STABLE with the same controller and disks,
>>> and performance is almost the same as on your good server.
>>>
>>> Nathan Le Nevez wrote:
>>>> Hi,
>>>>
>>>> I'm running 7.2-p3 on 2x HP BL465c blade servers, one of which performs
>>>> very poorly. Both have the same RAID controller and 2 x 146GB 10k SAS
>>>> disks configured in RAID-1. Both controllers have write cache enabled.
>>>> Both servers are running the same BIOS and firmware versions. Neither
>>>> server is running any services other than sshd.
>>>>
>>>> Blade with good performance (2 x Opteron 2218, 8GB RAM):
>>>>
>>>> ciss0: port 0x4000-0x40ff mem
>>>> 0xfdf80000-0xfdffffff,0xfdf70000-0xfdf77fff irq 19 at device 8.0 on pci80
>>>> ciss0: [ITHREAD]
>>>> da0 at ciss0 bus 0 target 0 lun 0
>>>> da0: Fixed Direct Access SCSI-5 device
>>>> SMP: AP CPU #2 Launched!
>>>> da0: 135.168MB/s transfers
>>>> SMP: AP CPU #3 Launched!
>>>> da0: Command Queueing Enabled
>>>> da0: 139979MB (286677120 512 byte sectors: 255H 32S/T 35132C)
>>>>
>>>> Blade with bad performance (2 x Opteron 2352, 16GB RAM):
>>>>
>>>> ciss0: port 0x4000-0x40ff mem
>>>> 0xfdf80000-0xfdffffff,0xfdf70000-0xfdf77fff irq 19 at device 8.0 on pci80
>>>> ciss0: [ITHREAD]
>>>> da0 at ciss0 bus 0 target 0 lun 0
>>>> da0: Fixed Direct Access SCSI-5 device
>>>> da0: 135.168MB/s transfers
>>>> da0: Command Queueing Enabled
>>>> da0: 139979MB (286677120 512 byte sectors: 255H 32S/T 35132C)
>>>>
>>>> # dbench -t 10 1 2 3 4
>>>> blade1   183.456 MB/sec   236.86 MB/sec    299.28 MB/sec   192.675 MB/sec
>>>> blade2   6.97931 MB/sec   9.42293 MB/sec   10.2482 MB/sec  12.407 MB/sec
>>>>
>>>> Any help/ideas would be greatly appreciated. I have run through all the
>>>> Insight diagnostics tools and they fail to find anything wrong with the
>>>> slow server.
>>>>
>>>> Cheers,
>>>> Nathan
>>>>
>>>> _______________________________________________
>>>> freebsd-performance@freebsd.org mailing list
>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
>>>> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"
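Since the thread ends with a support call to HP and no real errors to show them, one way to package the evidence is to capture the outputs discussed above into one file per blade and diff the files. A sketch, assuming the FreeBSD tools named in the thread; the function name and report filename are this sketch's own, and any command not installed is skipped rather than treated as fatal:

```shell
#!/bin/sh
# Sketch: collect the diagnostics discussed in this thread into one report
# file per blade, so the slow and fast machines can be compared side by
# side. The commands are the ones used in the thread; unavailable or
# failing commands are noted in the report instead of aborting it.

collect_report() {
    out=$1
    : > "$out"    # truncate/create the report file
    for cmd in 'vmstat -i' 'camcontrol tags da0' 'tunefs -p /' 'df -h' 'mount'
    do
        printf '== %s ==\n' "$cmd" >> "$out"
        # ${cmd%% *} is the command name without its arguments
        if command -v "${cmd%% *}" >/dev/null 2>&1; then
            $cmd >> "$out" 2>&1 || echo '(command failed)' >> "$out"
        else
            echo '(command not available)' >> "$out"
        fi
    done
}

# Usage: run `collect_report blade1.txt` on each blade, then diff the files.
```

Diffing the two reports makes any asymmetry (interrupt rates, queue depth, tunefs settings) jump out, which is exactly the kind of concrete evidence a support case needs.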