Date:      Fri, 2 Feb 2018 23:34:10 +0100
From:      Ben RUBSON <ben.rubson@gmail.com>
To:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-ID:  <42C31457-1A84-4CCA-BF14-357F1F3177DA@gmail.com>
In-Reply-To: <027070fb-f7b5-3862-3a52-c0f280ab46d1@sorbs.net>
References:  <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net> <54D47F94.9020404@freebsd.org> <54D4A552.7050502@sorbs.net> <54D4BB5A.30409@freebsd.org> <54D8B3D8.6000804@sorbs.net> <54D8CECE.60909@freebsd.org> <54D8D4A1.9090106@sorbs.net> <54D8D5DE.4040906@sentex.net> <54D8D92C.6030705@sorbs.net> <54D8E189.40201@sorbs.net> <54D924DD.4000205@sorbs.net> <54DCAC29.8000301@sorbs.net> <9c995251-45f1-cf27-c4c8-30a4bd0f163c@sorbs.net> <8282375D-5DDC-4294-A69C-03E9450D9575@gmail.com> <73dd7026-534e-7212-a037-0cbf62a61acd@sorbs.net> <FAB7C3BA-057F-4AB4-96E1-5C3208BABBA7@gmail.com> <027070fb-f7b5-3862-3a52-c0f280ab46d1@sorbs.net>

On 02 Feb 2018 21:48, Michelle Sullivan wrote:

> Ben RUBSON wrote:
>
>> So disks died because of the carrier, as I assume the second unscathed  
>> server was OK...
>
> Pretty much.
>
>> Heads must have scratched the platters, but they should have been  
>> parked, so... Really strange.
>
> You'd have thought... though 2 of the drives look like it was wear and  
> tear issues (the 2 not showing red lights), just not picked up by the  
> periodic scrub....  Could be that the recovery showed that one up... you  
> know - how you can have an array working fine, but one disk dies, then  
> others fail during the rebuild because of the extra workload.

Yes... To try to mitigate this, when I add a new vdev to a pool, I spread  
the new disks among the existing vdevs, and build the new vdev from the  
remaining new disk(s) plus the older disks freed from those vdevs. Thus,  
where possible, no vdev ends up with all its disks at the same runtime.  
However, I only use mirrors; applying this with RAID-Z could be a little  
trickier...
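
For illustration, a minimal sketch of that shuffle with mirrors. The pool  
name "tank" and the daN device names are hypothetical, assuming an existing  
pool of mirror-0 (da0 da1) and mirror-1 (da2 da3), plus two new disks da4  
and da5:

  # Swap a new disk into an existing vdev (hypothetical names):
  zpool attach tank da0 da4    # da4 resilvers into mirror-0
  # ...wait for resilver to complete (check "zpool status tank")...
  zpool detach tank da1        # free the older disk from mirror-0
  # Build the new vdev from the freed old disk + the remaining new disk:
  zpool add tank mirror da1 da5

The detach must of course wait until da4 has fully resilvered; the freed  
older disk then pairs with the remaining new one, so neither mirror holds  
two disks of the same age.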

Ben



