Date:      Sun, 20 Nov 2016 21:03:30 +0100
From:      Marek Salwerowicz <marek.salwerowicz@misal.pl>
To:        Gary Palmer <gpalmer@freebsd.org>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: zpool raidz2 stopped working after failure of one drive
Message-ID:  <d5380ad9-8ae9-19f3-2559-81b454ea63f3@misal.pl>
In-Reply-To: <20161120150235.GB99344@in-addr.com>
References:  <aa638ae8-4664-c45f-25af-f9e9337498de@misal.pl> <20161120150235.GB99344@in-addr.com>

On 2016-11-20 at 16:02, Gary Palmer wrote:
>
>> However, I am concerned by the fact that one drive's failure has
>> completely blocked the zpool.
>> Is this normal behaviour for zpools?
>
> What is the setting in
>
> zpool get failmode <poolname>
>
> By default it is "wait", which I suspect is what caused your issues.
> See the man page for zpool for more.

Indeed, it's "wait".
However, I tried to reproduce the problem in a VM by removing a SATA
disk on the fly from the raidz2 (while the pool was under I/O from
bonnie++), and it behaved correctly - the drive's state shows as
"REMOVED" in "zpool status", and the pool keeps working.
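For reference, this is roughly how I check and change the property
(the pool name "tank" below is just an example, not my real pool):

  # show the current failmode (default is "wait")
  zpool get failmode tank

  # return EIO to new writes instead of blocking all I/O
  # when the pool loses device connectivity
  zpool set failmode=continue tank

According to zpool(8), "wait" blocks all I/O until the device is
restored, which would match the hang I saw when the drive failed.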


Taking this PR into account, I am wondering whether it might be a hardware issue:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191348


> zfsd in 11.0 and later is the current path to hot spare management
> in FreeBSD.  FreeBSD 10.x does not have the ability to automatically use
> hot spares to replace failing drives.

Thanks - I will try it out.
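If I understand it correctly, on 11.0 this should boil down to roughly
the following (pool and device names are only examples):

  # enable zfsd at boot, then start it now
  sysrc zfsd_enable=YES
  service zfsd start

  # add a hot spare that zfsd can activate when a drive fails
  zpool add tank spare da6

  # confirm the spare shows up
  zpool status tank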

Cheers

Marek


-- 
Marek Salwerowicz
MISAL-SYSTEM
tel. + 48 222 198891



