Date:      Tue, 06 Jan 2015 10:00:39 +1000
From:      Da Rock <freebsd-questions@herveybayaustralia.com.au>
To:        freebsd-questions@freebsd.org
Subject:   Re: ZFS replacing drive issues
Message-ID:  <54AB25A7.4040901@herveybayaustralia.com.au>
In-Reply-To: <54A9E3CC.1010009@hiwaay.net>
References:  <54A9D9E6.2010008@herveybayaustralia.com.au> <54A9E3CC.1010009@hiwaay.net>

On 05/01/2015 11:07, William A. Mahaffey III wrote:
> On 01/04/15 18:25, Da Rock wrote:
>> I haven't seen anything specifically on this when googling, but I'm 
>> having a strange issue in replacing a degraded drive in ZFS.
>>
>> The drive has been marked REMOVED in the ZFS pool, so I ran 'zpool replace 
>> <pool> <old device> <new device>'. This normally just works, and I 
>> checked via serial number that I pulled the correct drive.
>>
>> After resilvering, it still shows that it is in a degraded state, and 
>> that the old and the new drive have been REMOVED.
>>
>> No matter what I do, I can't seem to get the zfs system online and in 
>> a good state.
>>
>> I'm running a raidz1 on 9.1 and zfs is v28.
>>
>> Cheers
>
> Someone posted a similar problem a few weeks ago; rebooting fixed it 
> for them (as opposed to trying to get zfs to fix itself w/ management 
> commands); might try that if feasible .... $0.02, no more, no less ....
>
Sorry, that didn't work unfortunately. I had to wait a while before I could 
reboot, fitting it in between resilver attempts and the workload. The drive 
came online at first, but went back to REMOVED when I checked again later.
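
For reference, this is roughly the sequence I've been through (the pool and 
device names below are placeholders, not my actual ones):

    # ada3 is the drive showing REMOVED, ada4 is its replacement
    zpool status tank
    zpool replace tank ada3 ada4
    # wait for the resilver to complete, then check the state again
    zpool status tank

After the resilver (and again after the reboot) the status still lists both 
the old and the new drive as REMOVED.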

Are there any other diagnostics I can run? I've already run smartctl 
self-tests on all the drives (5hrs+) and they've all come back clean. There's 
not much to go on in the logs either. Do some drives just naturally throw 
errors when placed in a RAID, or something like that?
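
In case it helps pinpoint a next step, the checks so far look like this 
(again, device and pool names are just examples):

    # long SMART self-test on each member disk (5+ hours per drive), all clean
    smartctl -t long /dev/ada3
    smartctl -a /dev/ada3
    # pool state and per-device error counters after the resilver
    zpool status -v tank
    # nothing relevant shows up in the system log
    grep -i ada3 /var/log/messages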


