From owner-freebsd-scsi Sat Jun 21 10:49:08 1997
Return-Path: 
Received: (from root@localhost) by hub.freebsd.org (8.8.5/8.8.5) id KAA11503 for freebsd-scsi-outgoing; Sat, 21 Jun 1997 10:49:08 -0700 (PDT)
Received: from sendero-ppp.i-connect.net (sendero-ppp.i-Connect.Net [206.190.143.100]) by hub.freebsd.org (8.8.5/8.8.5) with SMTP id KAA11465 for ; Sat, 21 Jun 1997 10:48:55 -0700 (PDT)
Received: (qmail 11652 invoked by uid 1000); 21 Jun 1997 17:49:01 -0000
Message-ID: 
X-Mailer: XFMail 1.2-alpha [p0] on FreeBSD
Content-Type: text/plain; charset=iso-8859-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
In-Reply-To: <27545.866803027@time.cdrom.com>
Date: Sat, 21 Jun 1997 10:49:01 -0700 (PDT)
Organization: Atlas Telecom
From: Simon Shapiro
To: "Jordan K. Hubbard"
Subject: Re: Announcement: New DPT RAID Controller Driver Available
Cc: Brian Tao, FREEBSD-SCSI, FREEBSD-HACKERS, "Justin T. Gibbs", dg@root.com
Sender: owner-freebsd-scsi@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

Hi "Jordan K. Hubbard";  On 20-Jun-97 you wrote:
...
> I would also be suspicious of the cooling properties of a plastic
> drive enclosure (it seems like packing it in a mini-Igloo ice chest
> would be no worse to me ;-), but given a much different racking
> scenario, say 80 drives in a single free-standing rack, I'm more than
> willing to believe that vibration becomes a significant problem
> requiring creative solutions. If it were my fingers signing the P.O.
> on a true drive-array-from-hell, I'd probably favor the vendor
> providing the best combination of all-metal construction, air-flow,
> power supply quality and vibration isolation.

We needed to put 3,000 drives in an array. We ended up with 200 drives
per cabinet and 15 cabinets per system. Made finding the CPU array a
bit difficult :-)

We snickered at the DEC solution and jibed about it a lot worse than
you do :-).
The DEC engineers were cool about it and kept on saying ``just try it,
will you?''  ``Sure,'' we said, ``this thing will be out of here in no
time. No way these things will ever work.''

We put a recording thermometer in every disk, in every tray, in every
rack. We put the rack in an oven and cooked it at 110 F for a week.
Ambient + 10 said the spec; ambient + 10 it was. In all spots, all the
time. Ambient + 12 with one fan off. The spec said do not physically
remove a fan for more than 2 minutes. In 5 minutes the drives
overheated.

Then we pulled out our heaviest gun: no way will a plastic carrier in
a plastic rack pass EMI/RFI. Guess again. I think what happened there
was that these kids at DEC, knowing nothing about system engineering,
and absolutely nothing about mechanical design or disk drives, just
got lucky.

> Unless, of course, they had something like the AMES wind tunnel
> providing forced airflow past the drives, then I suppose the plastic
> sled construction wouldn't really matter much, would it? :-)

Actually, of the 5-6 designs we evaluated here, the DEC solution is
also the quietest.

Just to trigger another round of heated and energetic discussion: our
hardware engineers computed, with great precision, that to run 6
drives one needs at least 300W. DEC engineers obviously skipped school
that day, as their P/S are only 150W each. No way will you ever be
able to hot plug a P/S or replace a disk. Sure. I took a rack, put in
one power supply and SEVEN (not 6) drives, powered it up, booted
FreeBSD, and formatted all the drives at the same time. Then I started
random I/O with read-write cycles on all drives, 256 instances in all.
Worked fine.

Oh, BTW, the in-house design, using TWO 300W power supplies, crashes
EVERY TIME you unplug or plug a disk.

As I said, DEC has no clue how to build a disk system. They make lousy
CPU's too.

Simon