From owner-freebsd-fs@freebsd.org Fri Feb 2 22:34:13 2018
From: Ben RUBSON <ben.rubson@gmail.com>
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Date: Fri, 2 Feb 2018 23:34:10 +0100
To: "freebsd-fs@freebsd.org"
In-Reply-To: <027070fb-f7b5-3862-3a52-c0f280ab46d1@sorbs.net>
Message-Id: <42C31457-1A84-4CCA-BF14-357F1F3177DA@gmail.com>
List-Id: Filesystems
On 02 Feb 2018 21:48, Michelle Sullivan wrote:
> Ben RUBSON wrote:
>
>> So disks died because of the carrier, as I assume the second unscathed
>> server was OK...
>
> Pretty much.
>
>> Heads must have scratched the platters, but they should have been
>> parked, so... Really strange.
>
> You'd have thought... though 2 of the drives look like it was wear and
> tear issues (the 2 not showing red lights), just not picked up on the
> periodic scrub... Could be that the recovery showed that one up... you
> know - how you can have an array working fine, but one disk dies, then
> others fail during the rebuild because of the extra workload.

Yes... To try to mitigate this, when I add a new vdev to a pool, I spread
the new disks I have among the existing vdevs, and construct the new vdev
with the remaining new disk(s) + other disks retrieved from the other
vdevs. Thus, when possible, I avoid vdevs whose disks all have the same
runtime.

However, I only use mirrors; applying this with raid-Z could be a little
trickier...

Ben
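The rotation described above can be sketched as a small planning helper.
This is only an illustration of the idea, not any ZFS tooling: each
existing mirror gives up one aged disk in exchange for a new one (in
practice via `zpool replace`), and the new vdev is assembled from the
displaced run-in disks plus the leftover new disk(s), so no vdev ends up
with all members at the same runtime. The function name, disk labels,
and the "keep at least one new disk for the new vdev" choice are all
assumptions for the sketch.

```python
def spread_new_disks(existing_vdevs, new_disks):
    """Plan the disk rotation: swap a new disk into each existing mirror
    (displacing one aged member) and build the new vdev from the leftover
    new disks plus the displaced, already-run-in disks.

    existing_vdevs: list of lists of disk labels, one per mirror vdev
    new_disks: labels of the disks just bought; the new vdev is as wide
               as this list
    Returns (updated_vdevs, new_vdev).
    """
    new_disks = list(new_disks)
    width = len(new_disks)
    # keep at least one brand-new disk for the new vdev itself
    swaps = min(len(existing_vdevs), width - 1)
    updated, displaced = [], []
    for i, vdev in enumerate(existing_vdevs):
        vdev = list(vdev)
        if i < swaps:
            displaced.append(vdev.pop(0))   # retire one aged disk...
            vdev.append(new_disks.pop(0))   # ...and run a new one in its place
        updated.append(vdev)
    new_vdev = new_disks + displaced        # remaining new + retrieved old disks
    return updated, new_vdev


# Two existing 2-way mirrors, two new disks bought for a third mirror:
updated, new_vdev = spread_new_disks([["a1", "a2"], ["b1", "b2"]],
                                     ["n1", "n2"])
print(updated)   # one mirror now mixes an old and a new disk
print(new_vdev)  # the new mirror mixes a new and a run-in disk
```

The resulting plan mixes runtimes in both the touched existing vdev and
the new one, which is the point: a rebuild triggered by one failure is
less likely to hit a second same-age disk in the same vdev.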