Date:      Thu, 23 Jul 1998 00:47:19 -0400 (EDT)
From:      Mikhail Teterin <mi@aldan.algebra.com>
To:        grog@lemis.com (Greg Lehey)
Cc:        questions@FreeBSD.ORG
Subject:   Re: ccd questions
Message-ID:  <199807230447.AAA05997@rtfm.ziplink.net>
In-Reply-To: <19980723133831.P8993@freebie.lemis.com> from "Greg Lehey" at "Jul 23, 98 01:38:31 pm"

Greg Lehey once stated:

=> I'm trying to do this under a day-old -stable. There are four disks
=> involved -- two 2Gb and two 4Gb. I set up two mirroring ccds, one
=> 2Gb and one 4Gb, each tried with different ileave numbers (from 2 to
=> 6000, using powers of 2 and primes). Here are my problems:

=> The 4 disks are on an ahc of their own with an idle tape-drive. I
=> never tested both arrays at the same time. The 3 system disks are
=> on a separate ahc. The machine has a single PPro-200 with 256Kb of
=> cache, 128Mb of RAM, 192Mb of swap split evenly among three system
=> drives.

=> 	All four disks are different, so I do not expect "optimum"
=> 	performance, but my results were still disappointing :(
=>
=> 	According to iozone benchmark, the write speed went down 50%
=> 	compared to when using the disks by themselves -- without
=> 	ccd. I would expect it to stay the same, really -- it is
=> 	about 3.5Mb/sec and is far from saturating the 10Mb/s top
=> 	of this SCSI interface. The ileave number does not seem to
=> 	matter once it is above 32.

=Yes, I'd consider this result disappointing as well.  May I assume
=that the performance improved with increasing the ileave factor?  My
=investigations suggest that 128 is about the optimum, though there's
=not much improvement beyond 32.

Yes, it grows, but only a little... and it is always 50-60% of the
single-disk speed.
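
(For what it's worth, a rough sequential-write sanity check can be done
with plain dd instead of iozone -- just an illustrative sketch, assuming
the ccd-backed file system is mounted on /mnt; adjust paths to taste:

	# write a 512Mb file sequentially and time it
	time dd if=/dev/zero of=/mnt/testfile bs=64k count=8192
	rm /mnt/testfile

If this lands in the same ballpark as the iozone figure, the benchmark
itself is probably not the bottleneck.)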

=> 	The read speed is about the same -- according to `systat 1
=> 	-iostat' the data is read only from the first disk of an
=> 	array -- I'd expect it to double, since the data could be
=> 	read in parallel from both drives. Again, the ileave number
=> 	does not seem to matter once it is above 32.

=This would only happen if you're running multiple processes.
=Otherwise you'll be single-threaded by the test.

Khmmm... Why isn't it reading, say, 10% of a file from one disk and
the next 10% from the other, in parallel? No buffers, I guess... Oh,
well...
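
(A quick way to see Greg's point in practice -- this is only a sketch,
using the raw device of the 2Gb mirror from the config below -- is to
start two sequential readers at once and watch systat in another
terminal:

	# two concurrent readers against the same mirror
	dd if=/dev/rccd0c of=/dev/null bs=64k count=16384 &
	dd if=/dev/rccd0c of=/dev/null bs=64k count=16384 skip=16384 &
	wait

With a single reader there is never a second outstanding request to hand
to the other component, so only one disk shows activity.)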

=> Features/stability:
=>
=> 	I tried to create the third ccd to concatenate the two
=> 	mirroring disks into one 6Gb big chunk. It "did not work"
=> 	most of the time, and crashed the system once when it seemed
=> 	to succeed and I started to
=> 		newfs /dev/rccd2c
=> 	Is this combination supposed to work at all?

=I'm not sure what you're trying to do here. What does your
=/etc/ccdconfig look like? Are you trying to join ccds together into
=second-level ccds? I don't see any reason to want to do this, and I'm
=pretty sure nobody expects it to work. In any case, when you have such
=problems, there's little anybody can do without a kernel dump.

Yes, it is like this:
	ccd0	2047	0x05	/dev/sd3s1e /dev/sd5s1e
	ccd1	6000	0x05	/dev/sd4s1e /dev/sd6s1e
	ccd2	0	none	/dev/ccd0c /dev/ccd1c

The reason is to have a system which can be quickly brought back up
in case of a drive failure (any one of the 4 disks can fail, or even
two, as long as they belong to different mirror pairs), while still
providing one big partition for a big file-system.
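
(For reference, a sketch of the equivalent manual setup, assuming the
usual ccdconfig argument order of unit, ileave, flags, components --
i.e. the same values as in the config file above:

	ccdconfig ccd0 2047 0x05 /dev/sd3s1e /dev/sd5s1e
	ccdconfig ccd1 6000 0x05 /dev/sd4s1e /dev/sd6s1e
	ccdconfig ccd2 0 none /dev/ccd0c /dev/ccd1c
	newfs /dev/rccd2c

Whether stacking ccds like this is supported at all is exactly the open
question here, so treat it only as a record of what was attempted.)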

=You might like to try vinum (http://www.lemis.com/vinum.html) and see
=how that compares.  Bear in mind, though, that this is still
=pre-release software.  You shouldn't use it on production systems.

This is a production system. The array(s) will replace several disks
scattered across several aging SGIs...
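
(For comparison with Greg's suggestion: a rough sketch of what a vinum
layout with the same goal might look like -- one volume with two
concatenated plexes, each plex holding a full copy of the data on a
2Gb + 4Gb pair. The drive and volume names here are made up, and the
partitions would first have to be set aside for vinum:

	drive d1 device /dev/sd3s1e
	drive d2 device /dev/sd4s1e
	drive d3 device /dev/sd5s1e
	drive d4 device /dev/sd6s1e
	volume big
	  plex org concat
	    sd length 0 drive d1
	    sd length 0 drive d2
	  plex org concat
	    sd length 0 drive d3
	    sd length 0 drive d4

Such a file would be fed to `vinum create', giving a single ~6Gb volume
that survives the loss of any one disk, much like the stacked-ccd layout
above.)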

Thanks!

	-mi



