Date:      Thu, 16 Dec 1999 19:43:09 +0100
From:      Brad Knowles <blk@skynet.be>
To:        David Gilbert <dgilbert@velocet.ca>, Brad Knowles <brad@shub-internet.org>
Cc:        David Gilbert <dgilbert@velocet.ca>, freebsd-current@FreeBSD.ORG
Subject:   Re: AMI MegaRAID datapoint.
Message-ID:  <v0422081bb47ee1f84029@[195.238.1.121]>
In-Reply-To: <14425.6118.899093.621159@trooper.velocet.net>
References:  <14425.2778.943367.365945@trooper.velocet.net> <v0422081ab47ebe7ceb22@[195.238.1.121]> <14425.6118.899093.621159@trooper.velocet.net>

At 11:48 AM -0500 1999/12/16, David Gilbert wrote:

>  It's a really long thread. I'm not going to repeat it here.
>  Basically, under "enough" load, vinum trashes the kernel stack in such
>  a way that debugging is very tough.

	It sounds like the second RAID-5 bug listed on the page I mentioned:

>>  28 September 1999: We have seen hangs when performing heavy I/O to
>>  RAID-5 plexes. The symptoms are that processes hang waiting on
>>  vrlock and flswai. Use ps lax to display this information.
>>
>>  Technical explanation: A deadlock arose between code locking stripes
>>  on a RAID-5 plex (vrlock) and code waiting for buffers to be freed
>>  (flswai).
>>
>>  Status: Being fixed.

	I believe that I have seen this bug myself, but in my only 
serious attempt to replicate it, I managed to create what appears to 
be a new, third bug which Greg had never seen before.  Yes, I've 
already given all the debugging information to Greg.
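The deadlock Greg describes is the classic lock-order inversion: one path holds the stripe lock (vrlock) and waits on the buffer pool, while the flushing path holds the buffer pool and waits on the stripe. A minimal sketch of the standard fix -- imposing a single global lock order -- follows. This is a hypothetical illustration, not the actual vinum code; all names here are made up:

```python
import threading

# Hypothetical stand-ins (NOT vinum internals): stripe_lock plays the role
# of the RAID-5 stripe lock (vrlock), buffer_lock the buffer-freeing path
# (flswai).  The original hang came from two code paths acquiring these in
# opposite orders; the fix sketched here is to take them in ONE fixed order
# everywhere, so circular wait is impossible.

stripe_lock = threading.Lock()
buffer_lock = threading.Lock()
log = []

def io_path():
    # Write path: stripe lock first, then buffer lock.
    with stripe_lock:
        with buffer_lock:
            log.append("io_path: stripe, then buffer")

def flush_path():
    # Flush path: the SAME order (stripe first).  Had it taken buffer_lock
    # first, the two threads could each hold one lock and wait forever on
    # the other -- the vrlock/flswai hang.
    with stripe_lock:
        with buffer_lock:
            log.append("flush_path: stripe, then buffer")

t1 = threading.Thread(target=io_path)
t2 = threading.Thread(target=flush_path)
t1.start(); t2.start()
t1.join(); t2.join()
print(log)
```

With a consistent order both threads always run to completion; reverse the nesting in one of them and the same two threads can hang exactly as the `ps lax` output above shows.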

>  I got the MegaRAID 1400 because the DPT V drivers weren't available.

	Understandable.  I didn't know that the AMI MegaRAID controller 
was even an option, otherwise I would have looked at it.

>  The MegaRAID should be roughly equivalent to the DPT V.

	I'd really like to see these two benchmarked head-to-head, or at 
least under sufficiently similar circumstances that we can be 
reasonably comfortable with how well one performs relative to the 
other.

>                                                           Do go with
>  LVD if you can.

	The drives use SCA attachment mechanisms, but I believe 
that electrically they are LVD.

>  I have done benchmarking with bonnie instead of rawIO.  The output is
>  as follows:

	I have never been impressed with the benchmarking that bonnie is 
capable of.  In my experience, rawio is a much better tool: it 
coordinates large numbers of child processes, doesn't lose 
information in communications between the parent and the child 
processes, bypasses all the filesystem overhead, etc.

	If you want to do filesystem level benchmarking, I've been more 
impressed by what I've seen out of Postmark.
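The coordination property attributed to rawio above -- a parent spawning many I/O children and collecting every result over a structured channel rather than scraping text -- can be sketched as follows. This is a generic illustration of that pattern under my own assumptions, not rawio itself, and the file-writing "workload" here is a trivial stand-in:

```python
import multiprocessing as mp
import os
import tempfile
import time

# Hypothetical sketch (not rawio): each child performs its own I/O and
# reports a structured (index, bytes, elapsed) tuple back on a queue, so
# no per-child results are lost in parent/child communication.

def worker(q, idx, nbytes):
    # Trivial stand-in workload: write, fsync, and remove a temp file.
    path = os.path.join(tempfile.gettempdir(), "bench_%d.tmp" % idx)
    start = time.time()
    with open(path, "wb") as f:
        f.write(b"\0" * nbytes)
        f.flush()
        os.fsync(f.fileno())
    os.unlink(path)
    q.put((idx, nbytes, time.time() - start))

def run_bench(nworkers=4, nbytes=1 << 20):
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(q, i, nbytes))
             for i in range(nworkers)]
    for p in procs:
        p.start()
    # Exactly one result per child is collected -- none dropped.
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(len(run_bench()))
```

Passing results as tuples over a queue, instead of having children print timings for the parent to parse, is what avoids the lost-information problem mentioned above.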

-- 
   These are my opinions -- not to be taken as official Skynet policy
  ____________________________________________________________________
|o| Brad Knowles, <blk@skynet.be>            Belgacom Skynet NV/SA |o|
|o| Systems Architect, News & FTP Admin      Rue Col. Bourg, 124   |o|
|o| Phone/Fax: +32-2-706.11.11/12.49         B-1140 Brussels       |o|
|o| http://www.skynet.be                     Belgium               |o|
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
  Unix is like a wigwam -- no Gates, no Windows, and an Apache inside.
   Unix is very user-friendly.  It's just picky who its friends are.


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-current" in the body of the message



