Date: Thu, 07 May 2015 14:05:11 +0100
From: Steven Hartland <killing@multiplay.co.uk>
To: Slawa Olhovchenkov <slw@zxy.spb.ru>
Cc: freebsd-stable@freebsd.org
Subject: Re: zfs, cam sticking on failed disk
Message-ID: <554B6307.9020309@multiplay.co.uk>
In-Reply-To: <20150507125129.GY62239@zxy.spb.ru>
References: <20150507080749.GB1394@zxy.spb.ru> <554B2547.1090307@multiplay.co.uk> <20150507095048.GC1394@zxy.spb.ru> <554B40B6.6060902@multiplay.co.uk> <20150507104655.GT62239@zxy.spb.ru> <554B53E8.4000508@multiplay.co.uk> <20150507120508.GX62239@zxy.spb.ru> <554B5BF9.8020709@multiplay.co.uk> <20150507124416.GD1394@zxy.spb.ru> <554B5EB0.1080208@multiplay.co.uk> <20150507125129.GY62239@zxy.spb.ru>
On 07/05/2015 13:51, Slawa Olhovchenkov wrote:
> On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:
>
>>>> Yes, in theory new requests should go to the other vdev, but there could
>>>> be some dependency issues preventing that, such as a syncing TXG.
>>> Currently this pool should have no write activity (from the application).
>>> What about going to the other (mirror) device in the same vdev?
>>> Same dependency?
>> Yes, if there's an outstanding TXG, then I believe all IO will stall.
> Where is this TXG released? When all devices in all vdevs report
> 'completed'? When at least one device in all vdevs reports
> 'completed'? When at least one device in at least one vdev reports
> 'completed'?
When all devices have reported completed or failed.
Hence if you pull the disk, things should continue as normal, with the
failed device being marked as such.
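
To make that rule concrete, here is a minimal standalone sketch (not the
real zio/TXG code; the device names and the txg_can_complete() helper are
purely illustrative) of "completed or failed": a device that stays pending
blocks completion, while one that gets marked failed does not.

/*
 * Minimal sketch (not the actual ZFS code) of the completion rule:
 * a TXG sync is only considered finished once every device has
 * reported either "completed" or "failed".  A device that reports
 * neither (e.g. a disk that hangs instead of erroring out) keeps
 * the whole sync waiting.  All names here are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

enum dev_state { DEV_PENDING, DEV_COMPLETED, DEV_FAILED };

struct dev {
	const char	*name;
	enum dev_state	 state;
};

/* The TXG can close only when no device is still pending. */
static bool
txg_can_complete(const struct dev *devs, int ndevs)
{
	for (int i = 0; i < ndevs; i++)
		if (devs[i].state == DEV_PENDING)
			return (false);
	return (true);
}

int
main(void)
{
	struct dev devs[] = {
		{ "mirror-0/da0", DEV_COMPLETED },
		{ "mirror-0/da1", DEV_PENDING   },	/* hung disk: no answer */
		{ "mirror-1/da2", DEV_COMPLETED },
		{ "mirror-1/da3", DEV_COMPLETED },
	};
	int ndevs = sizeof(devs) / sizeof(devs[0]);

	printf("sync can complete: %s\n",
	    txg_can_complete(devs, ndevs) ? "yes" : "no");

	/*
	 * Pulling the disk turns the hang into a definite failure,
	 * which satisfies the "completed or failed" rule and lets
	 * the sync (and new I/O) move on.
	 */
	devs[1].state = DEV_FAILED;
	printf("after marking da1 failed: %s\n",
	    txg_can_complete(devs, ndevs) ? "yes" : "no");

	return (0);
}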
Regards
Steve
