Subject: Re: How to recover data from dead hard drive.
From: "Frank Leonhardt (m)" <frank2@fjl.co.uk>
Date: Fri, 20 Oct 2017 09:40:44 +0100
To: galtsev@kicp.uchicago.edu
Cc: FreeBSD <freebsd-questions@freebsd.org>, Carmel NY
Message-ID: <4D114C69-A005-492B-B3A4-99A19CDF92E9@fjl.co.uk>
In-Reply-To: <43621.128.135.52.6.1508425321.squirrel@cosmo.uchicago.edu>

On 19 October 2017 16:02:01 BST, Valeri Galtsev wrote:

>>> Personally, I fail to understand why anyone with any "mission critical"
>>> system would not be using some form of RAID. It doesn't make any sense
>>> to me. Even my laptop is configured to automatically back up data to a
>>> cloud service. Even if the drive went south, I could restore all of my
>>> data.
>>
>> I can explain why people aren't using RAID... IME it's because they
>> think they are. But they do it wrong, and only find out when things go
>> wrong.
>>
>> Most of the disasters I deal with involve "hardware" RAID cards. I won't
>> single out PERC or MegaRAID because that wouldn't be fair.
>
> Hm... My mileage is different. I use hardware RAIDs a lot, with great
> success, and not a single disaster has happened to me. The statistics in
> my case: between one and two dozen hardware RAIDs over at least a decade
> and a half. Some that are still in production are over 10 years old. My
> favorite, 3ware, was alas eradicated by competitors; my second favorite
> is Areca, and next would be LSI, which is not a favorite as it has a
> horrible (confusing!) command-line client interface.
>
> Sometimes people come from other places and tell "hardware RAID horror"
> stories. After detailed review, all of them boil down to one or more of:
>
> 1. The RAID was not set up correctly. Namely: no surface scan (scrub, or
> similar) was scheduled. Monthly would be enough; I usually schedule it
> weekly. I won't go into detail about how this leads to problems, as it
> has been described many times.
>
> 2. Notification to the sysadmin about a failed drive and the lost RAID
> redundancy was never arranged (which likewise amounts to an incorrectly
> configured RAID).
>
> 3. Inappropriate drives are used. The worst for RAID are "green" drives
> that spin down to conserve power. They do spin up when a request comes
> from the RAID card, but not before the request has timed out...
>
> 4. The cache was enabled without a battery backup to preserve the cache
> RAM and its data in case of a power outage.

Hi Valeri,

My rant wasn't referring to people like you who know what they're doing. I know the type I was referring to exists because I get called in to try to recover the mess, and the problem is very often that they believe that, just because they spent a lot on RAID hardware, they are indestructible. It's not a substitute for an administrator with brains!

I put "hardware RAID" in quotes because it's all really software. As you point out, the difference is where the software is run. Fifteen years ago ZFS wasn't an option, so the choice was moot.

The environment has also changed a lot. Fifteen years ago it was still reasonable to keep tape backups, and if you didn't keep some form of offline backup you were a fool. I struggle to justify a tape backup now. But if you keep your data on-line on one array you're still a fool. Replication to another array (on another site) seems to be the way forward.
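For what it's worth, on a plain FreeBSD box running ZFS (no controller CLI involved), the basics above - a regular scrub, failure notification, and replication to a second machine - only take a few lines. This is just a rough sketch: the pool names, hostname and mail address are made up, and the exact knobs are documented in periodic.conf(5) and smartd.conf(5):

  # /etc/periodic.conf -- have periodic(8) scrub any pool that hasn't
  # been scrubbed within the threshold (in days)
  daily_scrub_zfs_enable="YES"
  daily_scrub_zfs_default_threshold="7"

  # /usr/local/etc/smartd.conf (sysutils/smartmontools) -- mail the
  # admin when a member drive starts going bad
  DEVICESCAN -a -m admin@example.org

  # off-site replication: snapshot, then send the increment to another box
  zfs snapshot tank/data@2017-10-20
  zfs send -i tank/data@2017-10-19 tank/data@2017-10-20 | \
      ssh backuphost zfs receive -F backup/data

None of which helps, of course, if nobody reads the mail it generates.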
I know exactly what you mean about early OS software RAID; I'd have done the same as you. Now, though, ZFS is robust and has a lot of advantages.

Especially with striped RAID5, it's amazingly common for a second disk to fail (or be found defective) shortly after the first. Their setups don't allow the array to be taken offline at the first sign of trouble, so they swap the failed drive, and this thrashes the hell out of those remaining as the array tries to rebuild as fast as it can.

I liked your list of common mistakes, and you're quite correct. I don't get many people calling up to say their hardware RAID is working fine; only people saying it's broken. Our mileage is probably the same; I was ranting about the people who DO lose critical data through bad practice.

Another example I didn't mention was a small company with a Windoze server running a three-way mirror. What could possibly go wrong? Three identical copies of a trashed NTFS root directory, of course...

Regards, Frank

--
Sent from my Cray X/MP with small fiddling keyboard.