From owner-freebsd-current@FreeBSD.ORG Wed Mar 30 20:24:52 2005
Delivered-To: freebsd-current@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id E4DB616A4CE; Wed, 30 Mar 2005 20:24:52 +0000 (GMT)
Received: from mi.veco.ru (mail.veco.ru [195.161.146.48]) by mx1.FreeBSD.org (Postfix) with ESMTP id 7667043D6D; Wed, 30 Mar 2005 20:24:51 +0000 (GMT) (envelope-from aka@veco.ru)
Received: from [193.125.120.100] (HELO aka-ppp.veco.ru) by mi.veco.ru (CommuniGate Pro SMTP 4.2.7) with ESMTP id 59957; Thu, 31 Mar 2005 00:24:44 +0400
Date: Thu, 31 Mar 2005 00:24:42 +0400
From: Andrey Koklin
X-Priority: 3 (Normal)
Message-ID: <1831036333.20050331002442@veco.ru>
To: Doug White
In-Reply-To: <20050330090813.B64732@carver.gumbysoft.com>
References: <20050330191824.4c08acc6.aka@veco.ru> <20050330090813.B64732@carver.gumbysoft.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
cc: freebsd-current@freebsd.org
Subject: Re: ciss(4): speed degradation for Compaq Smart Array [edited]
X-BeenThere: freebsd-current@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
Reply-To: Andrey Koklin
List-Id: Discussions about the use of FreeBSD-current
X-List-Received-Date: Wed, 30 Mar 2005 20:24:53 -0000

Doug White wrote:

> You've still omitted the array setup, including RAID type and stripe size.

Yes, sorry. I omitted it in the second letter, but said in the first that
the disk systems are configured identically, as RAID5 (5 disks per array).
The standard HP array BIOS offers no knobs for tuning array parameters, so
the arrays were configured with the default options. As I remember, the
default stripe size should be 64K, but I'm not quite sure -- checking it
would require the additional array configuration utility.
do:~ $ grep da0 /var/run/dmesg.boot
da0 at ciss0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-0 device
da0: 135.168MB/s transfers
da0: 69443MB (142220640 512 byte sectors: 255H 32S/T 17429C)

re:~ $ grep da0 /var/run/dmesg.boot
da0 at ciss0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-0 device
da0: 135.168MB/s transfers
da0: 347295MB (711261810 512 byte sectors: 255H 63S/T 44274C)

do-test:~ $ grep da0 /var/run/dmesg.boot
da0 at ciss0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-0 device
da0: 135.168MB/s transfers
da0: 138911MB (284490240 512 byte sectors: 255H 32S/T 34864C)

> I'd also suggest using a tool like iozone to run your tests instead of dd.
> Unless your workload consists of entirely sequential writes this perf test
> is worthless.

Yes, perhaps more thorough testing is indeed needed. Still, I think that if
linear transfer performance is already bad, all other tests would be
pointless. My old system had a linear read rate near the theoretical
controller bus limit, while the new one is slower by a factor of two or
more. And I hadn't intended to do "parrot" measurements. I just noticed a
substantial performance drop while upgrading my network statistics server,
which has to process Cisco flows. Linear transfer is simply an illustrative
example: the test can use block sizes of 1k, 64k, or 1m, and the result is
the same. The same holds for other, non-linear workloads, like a simple tar
of /usr/src.

Well, if it can't be fixed, perhaps I should think about downgrading the
system back to 4.11.

--
Regards,
Andrey
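[The linear-transfer test discussed in this thread can be sketched as a
small block-size sweep. The script below is illustrative only, not the
poster's actual command: it reads from /dev/zero so it is runnable without
the Smart Array hardware; substitute if=/dev/da0 to measure the real
device. The 1m block size from the message is written as 1M here, which
both GNU and BSD dd variants generally accept.]

```shell
#!/bin/sh
# Sequential-read throughput at several block sizes, as described
# in the message above. /dev/zero stands in for the array device
# (e.g. /dev/da0) so the sketch runs anywhere.
for bs in 1k 64k 1M; do
    echo "block size: $bs"
    # dd prints its transfer summary on stderr; keep only the last line.
    dd if=/dev/zero of=/dev/null bs=$bs count=10000 2>&1 | tail -1
done
```

If the reported rate is roughly constant across block sizes (as the poster
observed), the bottleneck is below the block layer rather than in the test
parameters.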