From: Michael Powell
Reply-To: nightrecon@verizon.net
To: freebsd-questions@freebsd.org
Subject: Re: to scsi or not to scsi
Date: Fri, 27 Jun 2008 07:43:58 -0400
References: <20080626092558.1a17d7d2@gom.home> <4863E58C.2060602@webrz.net>

Jos Chrispijn wrote:

>> prad wrote:
>> i've heard scsi hard drives are really good.
>> i've also seen at least one site which claims that ide easily
>> outperform scsi.
>> [snip]
>
> Prad,
>
> Have a look at this URL: http://www.pugetsystems.com/articles.php?id=19
>

While I found this interesting, I also felt some data points could have
been added. I believe these have some bearing on decision making, as they
better define the choice based upon what task, or purpose, the system is
being called upon to perform.

The notion that SATA subsystems will tend to consume more CPU cycles
because SCSI controllers have onboard processors is somewhat nullified
when you consider controllers such as the Areca 1210 and the 3Ware type
of products.

One historical difference, with respect to the desktop-type machines that
manufacturers stuck RAID controllers on in order to have marketing
buzzwords, is that those controllers were essentially useless for
performance purposes. They were all hung off the Southbridge and were
hamstrung by the maximum bus throughput between the South and North
Bridges. The PCI-X bus was designed for server boards so this kind of
bottleneck would not hamper performance. With the advent of PCI-E x8
slots and controllers the same relief has come to the SATA arena (some
rough numbers below).

The next consideration will be purpose: is the box going to be used as an
inexpensive disk-to-disk NAS, a file server, or in some other generic
role where capacity and high sequential throughput are the primary
concerns? Or is it called upon to perform lots of quick random selections
of data, as with a multithreaded database server?
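To put some rough figures on that Southbridge bottleneck, here is a quick
back-of-the-envelope sketch in Python. The bus rates are the usual
theoretical peaks and the per-drive rate is just an assumed round number
for a circa-2008 SATA disk, not a measurement of any particular model:

  # Rough, illustrative numbers only.
  PCI_LEGACY_MBS = 133     # 32-bit/33 MHz PCI, shared by everything on the bus
  PCIE_X8_MBS = 8 * 250    # PCI-E 1.x x8 slot, per direction
  DRIVE_SEQ_MBS = 75       # assumed sustained sequential rate per drive

  def drives_before_bottleneck(bus_mbs, drive_mbs=DRIVE_SEQ_MBS):
      """How many drives can stream flat-out before the bus saturates."""
      return bus_mbs // drive_mbs

  print("Legacy PCI:", drives_before_bottleneck(PCI_LEGACY_MBS), "drive(s)")
  print("PCI-E x8  :", drives_before_bottleneck(PCIE_X8_MBS), "drive(s)")

On legacy PCI a single modern drive can already saturate the shared bus,
which is why those onboard "RAID" ports never scaled. On a PCI-E x8 slot
the bus stops being the limit long before the controller does.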
One item that gets lost in the RAID discussion is that, while sequential
r/w performance generally goes up as you add more drives to the array,
latency also increases. The additional latency may not matter much in the
sequential throughput scenario, but it will have more impact on the
database server one.

So with sequential file serving it is OK to use 8-9 ms seek time drives,
since we are more interested in sequential throughput and not as
concerned with latency. Here SATA is probably a good match. For the
high-performance database server application you are going to want
3-4 ms seek time drives to keep latencies under control while adding
drives to the array (some rough IOPS numbers in the PS below). These are
going to be the more expensive high-RPM SAS and Fibre Channel drives. If
you're already spending $40K/CPU for Oracle, what's a few more dollars
for Fibre Channel? :-) Can't wait for SSD devices to replace all this.

Just my $.02 here - I thought I'd toss this out in case anyone might find
it interesting.

-Mike
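PS: for anyone curious why the seek-time difference matters so much in
the random-I/O case, here is a quick per-drive IOPS estimate. The seek
times and spindle speeds are just assumed round figures for a generic
7200 RPM SATA drive and a 15K SAS/FC drive, not specs for any particular
model:

  # One random op costs roughly an average seek plus half a rotation.
  def iops(avg_seek_ms, rpm):
      rotational_latency_ms = 0.5 * 60000.0 / rpm   # half a revolution, in ms
      service_time_ms = avg_seek_ms + rotational_latency_ms
      return 1000.0 / service_time_ms

  print("7200 RPM SATA : %3.0f IOPS" % iops(8.5, 7200))
  print("15K RPM SAS/FC: %3.0f IOPS" % iops(3.5, 15000))

That works out to roughly 79 vs. 182 random operations per second per
drive, which is why the database box wants the expensive spindles while
the file server does not.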