Date:        Wed, 22 Jun 2005 03:08:58 -0700
From:        Sandy Rutherford <sandy@krvarr.bc.ca>
To:          "Ted Mittelstaedt" <tedm@toybox.placo.com>
Cc:          freebsd-questions@freebsd.org, Alex Zbyslaw <xfb52@dial.pipex.com>
Subject:     RE: Yet another RAID Question (YARQ)
Message-ID:  <17081.14522.350761.161301@szamoca.krvarr.bc.ca>
In-Reply-To: <LOBBIFDAGNMAMLGJJCKNMEMHFBAA.tedm@toybox.placo.com>
References:  <17080.48071.705585.35147@szamoca.krvarr.bc.ca> <LOBBIFDAGNMAMLGJJCKNMEMHFBAA.tedm@toybox.placo.com>
>>>>> On Wed, 22 Jun 2005 01:00:09 -0700,
>>>>> "Ted Mittelstaedt" <tedm@toybox.placo.com> said:

 > With a RAID-1 card, mirroring, there are two ways to set up reads.
 > The first way makes the assumption that you are mirroring purely
 > for fault tolerance.  In that case you would NOT see ANY reads from
 > the second disk.  The reason is that every time you read you move
 > the heads, and the more head movement, the quicker the disk wears
 > out.

OK.  I wasn't aware that some RAID cards allow you to tune reads in
this way.  Mine, which is a Mylex DAC1100, does not.

 > Placing exactly the same amount of head movement on both disks
 > means that if you set up a mirror with new disks of the same model,
 > which is pretty much how most people do it, the MTBF on both disks
 > is the same, and if you put equal activity on both disks you're
 > making a very good chance that they will fail at the same time, or
 > very close to the same time.

This assumes a small standard deviation --- much smaller than I would
think is reasonable.  I don't think that I have ever seen standard
deviation data quoted by a manufacturer, which of course makes any
MTBF data that they provide worthless.  Seagate quotes an MTBF of 1.4
million hours for their 10K Cheetah.  That's 160 years!  (The
arithmetic is sketched in a postscript below.)  Assuming you actually
believe that number, there is no way the standard deviation on it is
less than a month; I would imagine that ~10 years would be more
reasonable.  Unless you have better numbers, I would say that setting
up RAID 1 as you describe above is just plain silly.  BTW, since
Seagate offers a 5-year warranty, I don't think that even they believe
their own MTBF numbers.  Or perhaps they do know the standard
deviation and it's 155 years?

 > The second way on a mirror is to try to set up reads to enhance
 > speed in addition to fault tolerance.  With this setup you
 > interleave reads.  You read a few blocks from the first disk, then
 > a few blocks from the second, then a few blocks from the first,
 > etc., etc.  However, the kicker is that you do this AT THE SAME
 > TIME.  Both disk heads are continuously reading: the read speed of
 > the heads is so much slower than the rate at which data can be
 > moved out of the drive and into main memory that each disk 'runs
 > dry' quickly, so by alternating the reads you're giving each drive
 > a chance to catch up.  There is never a time when a head isn't
 > either reading or seeking for the next read, and thus both disk
 > drive lights are going to be on solid at the same time.  They will
 > not be "alternate blinking".  Indeed, if they really are
 > alternating back and forth, then your read throughput will be no
 > higher than a continuous read from a single disk.

I agree with all of this (both read policies are sketched in a
postscript below).  However, I do indeed see alternate flickering, and
the RAID array is sitting right in front of me.  I expect this has to
do with how the intensity of the activity lights is tied to seeking
versus reading.  If it matters, the drives are Cheetahs and they are
in a Sun Multipack hot-swap box.

Anyway, this is all minutiae...  I think it is fair to say that the
main point of this thread is that if the behaviour of the drives'
activity lights is not consistent with your RAID setup, then you
should investigate --- regardless of what your RAID admin tool is
saying.  Would you agree with this?

Sandy
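P.S.  To make the two read policies concrete, here is a rough sketch
in Python.  The Mirror class, the policy names, and the 64-block chunk
size are all invented for illustration; this is not any real
controller's firmware (certainly not my Mylex's), just the general
idea.

#!/usr/bin/env python
# Toy model of the two RAID-1 read policies discussed above.
# Nothing here corresponds to a real driver or firmware interface.

CHUNK = 64  # blocks handed to one disk before switching (arbitrary)

class Mirror:
    def __init__(self, policy):
        self.policy = policy    # "fault_tolerant" or "interleaved"
        self.reads = [0, 0]     # blocks read from disk 0 and disk 1

    def read(self, start_block, nblocks):
        if self.policy == "fault_tolerant":
            # Every read goes to the primary; the secondary only sees
            # writes, so its heads move less and it wears more slowly.
            self.reads[0] += nblocks
        else:
            # Interleaved: hand out fixed-size chunks alternately so
            # both spindles stream data at the same time.
            block, left = start_block, nblocks
            while left > 0:
                disk = (block // CHUNK) % 2
                n = min(CHUNK - block % CHUNK, left)
                self.reads[disk] += n
                block += n
                left -= n

m = Mirror("interleaved")
m.read(0, 1000)
print(m.reads)    # [512, 488]: both disks share the read load

With "fault_tolerant" the same call leaves the second counter at zero,
which matches Ted's point that you would not see any reads from the
second disk.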
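P.P.S.  The 160-year figure is nothing more than a unit conversion.
For anyone who wants to check it, here are the numbers from the body
(1.4 million hours MTBF, 5-year warranty) in a couple of lines of
Python:

#!/usr/bin/env python
# Back-of-the-envelope check on the MTBF figures quoted above.

HOURS_PER_YEAR = 24 * 365            # 8760

mtbf_hours = 1.4e6                   # Seagate's quoted 10K Cheetah MTBF
print(mtbf_hours / HOURS_PER_YEAR)   # ~159.8, i.e. the "160 years" above

warranty_hours = 5 * HOURS_PER_YEAR
print(warranty_hours)                # 43800 hours, nowhere near 1.4 million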