From owner-freebsd-current@FreeBSD.ORG Tue Apr 12 09:50:33 2005
Date: Tue, 12 Apr 2005 13:50:29 +0400
From: Andrey Koklin <aka@veco.ru>
To: current@FreeBSD.org
Cc: Julian Elischer
Message-Id: <20050412135029.2d81a216.aka@veco.ru>
In-Reply-To: <424AFDAA.8010607@elischer.org>
References: <20050330191824.4c08acc6.aka@veco.ru> <424AFDAA.8010607@elischer.org>
Subject: ciss(4): speed degradation for Compaq Smart Array [3rd edition]

On Wed, 30 Mar 2005 11:27:38 -0800
Julian Elischer wrote:

[snip]
> Thanks for giving more info..
> this shows up some problems though..
> I'm not saying that there is no problem (I actually think there is a
> slowdown in 5/6, but it should be amenable to tuning as we get time to
> look at it.. The new disk code is a lot more dependent on the scheduler
> than the old disk code). What I AM saying is that the test environment
> doesn't eliminate some of the possible reasons for speed differences..
> For example, you don't say if the raid controllers are set up the same..
> And the disks do not match.. the 74GB drives may be newer and faster..
>
> Maybe you should reinstall the 6.0 machine to have a 4.11 partition as
> well so that you can dual boot on the exact same hardware.. THAT would
> show it if you used the same partition for both tests.. (The testing
> partition should be a UFS1 filesystem that both can read.)

Sorry, I fell ill, so this reply comes a little late.

To recap: there was a large difference in overall transfer rates between
FreeBSD 4.11 and 6.0-CURRENT (5.4 gave results similar to 6.0, so I've
omitted it for brevity).

This time I use a single server, so the hardware is identical:

HP ProLiant DL380 G2, 2 x P3 1.133GHz, RAM 1280 MB, SmartArray 5i,
5 x 36GB Ultra320 10K HDDs configured as RAID5 with the default
stripe size (16K?)

do-test # bsdlabel da0s1
# /dev/da0s1:
8 partitions:
#          size     offset    fstype   [fsize bsize bps/cpg]
  a:    4194304          0    4.2BSD        0     0     0
  b:    4194304    4194304      swap
  c:  284490208          0    unused        0     0  # "raw" part, don't edit
  d:    4194304    8388608    4.2BSD     2048 16384    89
  e:   16777216   12582912    4.2BSD        0     0     0
  f:  188743680   29360128    4.2BSD        0     0     0
  g:   66386400  218103808    4.2BSD     2048 16384 28552

do-test # df -lh
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0s1a    1.9G     53M    1.7G     3%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da0s1e    7.7G    1.6G    5.5G    22%    /usr
/dev/da0s1f     87G     14G     66G    17%    /var
/dev/da0s1g     31G    3.4G     25G    12%    /mnt

da0s1a - FreeBSD 6.0-CURRENT
da0s1d - FreeBSD 4.11

Both OSes run custom SMP kernels.
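
For anyone who wants to repeat this: each run below boils down to the
same few commands. A rough sketch follows (in the actual runs I typed
the commands by hand and set bonnie's -m label per test; the remount
step and the `uname -r` label are my additions here, not part of the
transcripts below):

#!/bin/sh
# disk-test.sh -- sketch of the per-OS test sequence
MNT=/mnt                                # test partition (da0s1g here)

# sequential write: 1 GB of zeroes in 1m blocks
dd if=/dev/zero of=$MNT/1Gb-1 bs=1m count=1024

# remount so the read below hits the disk, not the buffer cache
umount $MNT && mount /dev/da0s1g $MNT

# sequential read back
dd if=$MNT/1Gb-1 of=/dev/null bs=1m

# bonnie with a 4096 MB working set (about 3x RAM, to defeat caching)
bonnie -d $MNT -m "`uname -r`" -s 4096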
The 6.0 kernel is stripped of debugging and uses the 4BSD scheduler
(I tried ULE too; it made a 5-10% difference in transfer rate and CPU
load, so I've omitted those results).

As geometry was not a big factor, all tests use the same partition,
da0s1g, formatted as UFS1 and UFS2.

6.0-CURRENT, UFS2
-----------------
# newfs -O2 -U -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 48.481901 secs (22147272 bytes/sec)
...
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 23.303288 secs (46076838 bytes/sec)
#
# bonnie -d /mnt -m '6.0(1)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(1)   4096 15810 35.8 19404 15.3 12366 11.0 30682 68.9 50639 23.5 1084.9 5.7

6.0-CURRENT, UFS1
-----------------
# newfs -O1 -U -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 44.986316 secs (23868187 bytes/sec)
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 21.702390 secs (49475741 bytes/sec)
#
# bonnie -d /mnt -m '6.0(2)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(2)   4096 17107 39.8 23879 16.9 13289 11.8 33849 75.9 50417 23.5 1116.5 5.9

6.0-CURRENT, UFS1 (no snap)
---------------------------
# newfs -O1 -U -n -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 39.034020 secs (27507846 bytes/sec)
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 22.023556 secs (48754244 bytes/sec)
#
# bonnie -d /mnt -m '6.0(3)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(3)   4096 20402 45.2 20903 15.8 12674 11.0 32834 73.5 53088 22.3 1072.1 6.4

6.0-CURRENT, UFS1, partition formatted under 4.11
-------------------------------------------------
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 25.460762 secs (42172415 bytes/sec)
# dd if=/dev/zero of=/mnt/1Gb-3 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 26.140447 secs (41075879 bytes/sec)
#
# bonnie -d /mnt -m '6.0(4)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(4)   4096 27343 59.8 36447 27.6 17517 15.1 39665 90.4 45941 19.3 1086.4 5.7
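
Interesting that the 4.11-formatted partition behaves noticeably better
under 6.0 than the 6.0-formatted one, so the two newfs versions probably
pick different filesystem parameters. I haven't checked which ones yet;
something like this would show it (the output file names are just for
illustration):

# dumpfs /dev/da0s1g | head -20 > /var/tmp/fsparams-`uname -r`
  (run once after each newfs, then compare fragment/block sizes,
   maxcontig, cylinders per group, etc.:)
# diff /var/tmp/fsparams-4.11-STABLE /var/tmp/fsparams-6.0-CURRENT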
4.11-STABLE
-----------
# newfs -U -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 24.076042 secs (44597938 bytes/sec)
...
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 12.619832 secs (85083686 bytes/sec)
#
# bonnie -d /mnt -m '4.11' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
4.11     4096 45359 74.4 47120 24.7 21104 16.2 45216 97.9 85723 31.8 1503.2 5.3

Putting the bonnie results together:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(1)   4096 15810 35.8 19404 15.3 12366 11.0 30682 68.9 50639 23.5 1084.9 5.7
6.0(2)   4096 17107 39.8 23879 16.9 13289 11.8 33849 75.9 50417 23.5 1116.5 5.9
6.0(3)   4096 20402 45.2 20903 15.8 12674 11.0 32834 73.5 53088 22.3 1072.1 6.4
6.0(4)   4096 27343 59.8 36447 27.6 17517 15.1 39665 90.4 45941 19.3 1086.4 5.7
4.11     4096 45359 74.4 47120 24.7 21104 16.2 45216 97.9 85723 31.8 1503.2 5.3

Where:
(1) - 6.0, UFS2
(2) - 6.0, UFS1
(3) - 6.0, UFS1, no snap
(4) - 6.0, UFS1, partition formatted under 4.11

Again, these simple benchmarks put disk throughput under 6.0 at roughly
50-70% of 4.11's. Not fatal yet, as long as it doesn't drop any further.

Andrey
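
P.S. The 50-70% figure is just the ratio of the 6.0 numbers to the 4.11
ones in the table above; for example, for the UFS1 run (2):

# echo "scale=2; 23879 / 47120" | bc    # sequential block write vs 4.11
.50
# echo "scale=2; 50417 / 85723" | bc    # sequential block read vs 4.11
.58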