Date: Thu, 30 Jun 2005 17:18:15 -0400
From: "Simon" <simon@optinet.com>
To: "Danny Howard" <dannyman@toldme.com>
Cc: Bob Bomar <bob@ibsd.us>, "questions@freebsd.org" <questions@freebsd.org>, "hardware@freebsd.org" <hardware@freebsd.org>
Subject: Re: RAID Cards
Message-ID: <20050630211444.19C6343D48@mx1.FreeBSD.org>
In-Reply-To: <20050630205744.GN33728@ratchet.nebcorp.com>
It's not only the CPU factor; I don't trust software RAID. As for monitoring,
I can tell whether or not a drive is dead via the SAFTE chip. All SCSI RAID
cards support SAFTE, and a proper SCSI server would have SAFTE support. As for
SATA, the 3ware cards have the 3dm tool to monitor the array.

-Simon

On Thu, 30 Jun 2005 13:57:44 -0700, Danny Howard wrote:
>On Thu, Jun 30, 2005 at 04:48:18PM -0400, Simon wrote:
>
>> Just because there is no monitoring tool available due to lack of
>> support, doesn't mean the card itself is bad. I much prefer a hardware
>> implementation to software. True hardware RAID frees up a lot of
>> CPU time if you have heavy IO, and software just can't keep up if you
>> utilize CPU-intensive apps.
>
>When you have a dual Xeon setup, you are more likely to be bound by disk
>than CPU.
>
>And a RAID that you can not monitor is a BAD RAID.
>
>The biggest thing that bothers me about my current environment is that I
>have remotely-deployed machines with RAIDs and I can't tell when a disk
>goes bad unless I visit the datacenter. Last time I was there I had a
>RAID card throwing an audible alarm, even though nothing was wrong. I
>had to reboot a critical system to fix that.
>
>If you can implement it in software, then it's worth the headaches you'll
>avoid with hardware dependencies. If you're concerned about CPU overhead,
>spend the cash you would have spent on a RAID card and upgrade your CPU.
>
>Sincerely,
>-danny
>
>--
>http://dannyman.toldme.com/
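
[Editor's illustration, not part of the original thread: a minimal sketch of the kind of
unattended monitoring Danny is arguing for, assuming a FreeBSD software-RAID setup using
gmirror(8), whose `gmirror status` output reports COMPLETE for a healthy mirror. The mirror
name gm0, the recipient address, and the use of the system mail(1) command are placeholders.]

    #!/usr/bin/env python
    # raid_check.py -- hypothetical cron job: alert when a gmirror array degrades.
    # Assumes FreeBSD with a GEOM mirror; adjust MIRROR and ALERT_TO for your site.

    import subprocess

    MIRROR = "gm0"                 # placeholder mirror name
    ALERT_TO = "root@localhost"    # placeholder alert address

    def mirror_status(name):
        """Return the raw `gmirror status` output for one mirror."""
        result = subprocess.run(
            ["gmirror", "status", name],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def is_healthy(status_text):
        """A healthy mirror reports COMPLETE; anything else is suspect."""
        return "COMPLETE" in status_text

    def send_alert(body):
        """Pipe the degraded-array report through the system mail(1) command."""
        subprocess.run(
            ["mail", "-s", "RAID degraded on this host", ALERT_TO],
            input=body, text=True, check=False,
        )

    if __name__ == "__main__":
        status = mirror_status(MIRROR)
        if not is_healthy(status):
            send_alert(status)

[Run from cron every few minutes, something like this fills roughly the role 3dm plays for
the 3ware hardware mentioned above, without depending on a particular card's tooling.]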