Date:      Wed, 13 Dec 2017 13:25:20 +0200
From:      Daniel Kalchev <daniel@digsys.bg>
To:        Freddie Cash <fjwcash@gmail.com>
Cc:        "Rodney W. Grimes" <freebsd-rwg@pdx.rh.cn85.dnsmgr.net>, "O. Hartmann" <ohartmann@walstatt.org>, FreeBSD CURRENT <freebsd-current@freebsd.org>, Alan Somers <asomers@freebsd.org>
Subject:   Re: SMART: disk problems on RAIDZ1 pool: (ada6:ahcich6:0:0:0): CAM status: ATA Status Error
Message-ID:  <B925DA94-D0D2-43FE-AB13-7B0A3F9E9613@digsys.bg>
In-Reply-To: <CAOjFWZ5QePzNoh6=_Wv2yVL2LbmuKkuNGqhMWWcehMnFnJFwWQ@mail.gmail.com>
References:  <20171212231858.294a2cb5@thor.intern.walstatt.dynvpn.de> <201712122255.vBCMtnfZ088889@pdx.rh.CN85.dnsmgr.net> <CAOjFWZ5QePzNoh6=_Wv2yVL2LbmuKkuNGqhMWWcehMnFnJFwWQ@mail.gmail.com>



> On 13 Dec 2017, at 1:26, Freddie Cash <fjwcash@gmail.com> wrote:
> 
> On Tue, Dec 12, 2017 at 2:55 PM, Rodney W. Grimes <
> freebsd-rwg@pdx.rh.cn85.dnsmgr.net> wrote:
> 
>> Hum, just noticed this.  25k hours power on, 2M load cycles, this is
>> very hard on a hard drive.  Your drive is going into power save mode
>> and unloading the heads.  In fact at a rate of 81 times per hour?
>> Oh, I cannot believe that.  Either way we need to get this stopped,
>> it will wear your drives out.
>> 
> 
> Believe it.  :)  The WD Green drives have a head parking timeout of 15
> seconds, and no way to disable that anymore.  You used to be able to boot
> into DOS and run the tler.exe program from WD to disable the auto-parking
> feature, but they removed that ability fairly quickly.
> 
> The Green drives are meant to be used in systems that spend most of their
> time idle.  Trying to use them in an always-on RAID array is just asking
> for trouble.  They are only warrantied for a couple hundred thousand head
> parkings or something ridiculous like that.  2 million puts it way outside
> the warranty coverage.  :(
> 
> We had 24 of them in a ZFS pool back when they were first released, as they
> were very inexpensive.  They led to more downtime and replacement costs
> than any other drive we've used since (or even before).  Just don't use
> them in any kind of RAID array or always-on system.
> 

In order to handle drives like this, and in general to get rid of load
cycles, I use smartd on all my ZFS pools with this piece of config:

DEVICESCAN -a -o off -e apm,off

It might not be the best solution, but since it is activated during boot,
S.M.A.R.T. attribute 193 Load_Cycle_Count does not increase anymore. I am
not a fan of WD drives, but I have a few tens of them… all of them
“behave” in some way or another.
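
For completeness, here is roughly how that is wired up on a FreeBSD box (a
sketch only: smartd comes from the sysutils/smartmontools port, and
/dev/ada6 below is just an example device name):

# /usr/local/etc/smartd.conf
# -a          run the default set of health/attribute checks
# -o off      do not schedule the automatic offline test
# -e apm,off  disable APM so the drive stops parking its heads
DEVICESCAN -a -o off -e apm,off

# enable and start the daemon
sysrc smartd_enable=YES
service smartd start

# check that attribute 193 has stopped climbing
smartctl -A /dev/ada6 | grep Load_Cycle_Count

If I remember correctly, the same setting can also be applied one-off with
"smartctl -s apm,off /dev/ada6", but smartd has the advantage of reapplying
it on every boot.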

For the original question: if I do not have a spare disk to replace it
with, on a raidz1/raidz2 pool I would typically do:

zpool offline poolname disk
dd if=/dev/zero of=/dev/disk bs=1m
zpool replace poolname disk

This effectively fills the disk with zeros, forcing any suspected
unreadable blocks to be remapped. After this operation there are no more
pending blocks, etc. On large drives/pools, though, it takes a few days to
complete (the last part in particular). Over the years I have used this
procedure on many drives, sometimes more than once on the same drive, and
it postponed having to replace the drive and silenced the annoying
S.M.A.R.T. message: which by itself might not be a major problem, but it is
better not to have the logs filled with warnings all the time.

I feel more confident doing this on raidz2 vdevs, anyway.
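
Spelled out with hypothetical names (a pool called tank, the bad disk at
ada6), the whole thing looks like:

zpool offline tank ada6
dd if=/dev/zero of=/dev/ada6 bs=1m   # runs for hours on a big drive
zpool replace tank ada6              # same device, now blank, rejoins as "new"
zpool status tank                    # watch the resilver progress

The pool keeps running degraded while the dd and the resilver do their work.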

If I had a spare disk and a spare port, it would just be:

zpool replace poolname disk new-disk
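
Again with hypothetical names, assuming the spare shows up as ada8:

zpool replace tank ada6 ada8   # resilver onto the spare; redundancy is kept
zpool status tank              # old disk is detached once the resilver ends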

Daniel


