Date: Thu, 7 May 2015 16:10:07 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Steven Hartland <killing@multiplay.co.uk>
Cc: freebsd-stable@freebsd.org
Subject: Re: zfs, cam sticking on failed disk
Message-ID: <20150507131007.GZ62239@zxy.spb.ru>
In-Reply-To: <554B6307.9020309@multiplay.co.uk>
References: <20150507095048.GC1394@zxy.spb.ru> <554B40B6.6060902@multiplay.co.uk> <20150507104655.GT62239@zxy.spb.ru> <554B53E8.4000508@multiplay.co.uk> <20150507120508.GX62239@zxy.spb.ru> <554B5BF9.8020709@multiplay.co.uk> <20150507124416.GD1394@zxy.spb.ru> <554B5EB0.1080208@multiplay.co.uk> <20150507125129.GY62239@zxy.spb.ru> <554B6307.9020309@multiplay.co.uk>
On Thu, May 07, 2015 at 02:05:11PM +0100, Steven Hartland wrote:
>
> On 07/05/2015 13:51, Slawa Olhovchenkov wrote:
> > On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:
> >
> >>>> Yes, in theory new requests should go to the other vdev, but there
> >>>> could be some dependency issues preventing that, such as a syncing TXG.
> >>> Currently this pool should have no write activity (from the application).
> >>> What about going to the other (mirror) device in the same vdev?
> >>> Same dependency?
> >> Yes, if there's an outstanding TXG, then I believe all IO will stall.
> > When is this TXG released? When all devices in all vdevs report
> > 'completed'? When at least one device in all vdevs reports
> > 'completed'? When at least one device in at least one vdev reports
> > 'completed'?
> When all devices have reported completed or failed.

Thanks for the explanation.

> Hence if you pull the disk, things should continue as normal, with the
> failed device being marked as such.

I have trouble getting physical access. Maybe someone can suggest a
software method to force-detach the device from the system.
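For reference, a minimal sketch of the software-only approach being asked
about, assuming a mirrored pool named "tank" with a stuck member "da12"
(both names are placeholders, not taken from this thread): take the member
offline at the ZFS level first, then prod the device at the CAM level if
its outstanding commands are still wedged.

    # Placeholders: pool "tank" and disk "da12" are hypothetical examples.
    # Ask ZFS to stop issuing I/O to the suspect mirror member.
    zpool offline tank da12

    # Find the bus:target:lun of the stuck device.
    camcontrol devlist

    # If outstanding commands never complete, a device reset may force them
    # to be returned as errors so the pool can mark the disk as failed.
    camcontrol reset 0:4:0    # bus:target:lun is a placeholder

Whether this actually unwedges anything depends on the controller and
driver; if the commands are stuck below CAM (for example in the HBA
firmware), only pulling the disk or resetting the controller may help.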