Date:      Sat, 23 Dec 2017 06:29:15 -0600
From:      Karl Denninger <karl@denninger.net>
To:        freebsd-current@freebsd.org
Subject:   Re: SMART: disk problems on RAIDZ1 pool: (ada6:ahcich6:0:0:0): CAM status: ATA Status Error
Message-ID:  <c494a4f4-2a7f-66a5-70e9-534da2e87d9e@denninger.net>
In-Reply-To: <20171223122608.4ea4f097@thor.intern.walstatt.dynvpn.de>
References:  <201712131647.vBDGlrf2092528@pdx.rh.CN85.dnsmgr.net> <4d58b06a-0dbe-af05-1bd2-e87929e3b7a5@digiware.nl> <20171223122608.4ea4f097@thor.intern.walstatt.dynvpn.de>


On 12/23/2017 05:25, O. Hartmann wrote:
> Am Thu, 14 Dec 2017 12:05:20 +0100
> Willem Jan Withagen <wjw@digiware.nl> schrieb:
>
>> On 13/12/2017 17:47, Rodney W. Grimes wrote:
>>>> On Tue, 12 Dec 2017 14:58:28 -0800
>>>> Cy Schubert <Cy.Schubert@komquats.com> wrote:
>>>> I think people responding to my thread made it clear that the WD Green
>>>> isn't the first-choice solution for a 20/6 (not 24/7) duty drive, and
>>>> given that they have now served more than 25,000 hours, it would
>>>> be wise to replace them with alternatives.  
>>> I think someone had an apm command that turns off the head park;
>>> that would do wonders for drive life.   On the other hand, I think
>>> if it was my data and I saw that the drive had 2M head load cycles,
>>> I would be looking to get any data I could not easily replace off
>>> that drive.  If it was well backed up or easily replaced,
>>> my worries would be less.  
>> WD made their first series of Green disks green by aggressively putting 
>> them into a sleep state: when there was no activity for a few seconds, 
>> they would park the heads, spin the disk down, and put it to sleep...
>> The next access would then need to undo that whole series of commands.
>>
>> This could be reset by writing to one of the disk's registers. I remember 
>> doing that for my 1.5TB WDs (WD15EADS from 2009). That saved a lot of 
>> start-ups. I still have them around, but only use them for things that are 
>> not valuable at all. Some have died over time, but about half of them 
>> still seem to work without much trouble.
>>
>> WD used to have a .exe program to actually do this. But that did not
>> work on later disks, and turning the behaviour off on those disks was 
>> impossible, or a lot more complex.
>>
>> This type of disk worked quite a long time in my ZFS setup - a few 
>> years - but I turned parking off as soon as there was a lot of turmoil 
>> about this in the community.
>> Now I use WD Reds for small ZFS systems, and WD Red Pros for large 
>> private storage servers. Professional servers get HGST He disks: a bit 
>> more expensive, but with very few failures.
>>
>> --WjW
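[Editor's note: the register tweak Willem describes is the ATA Advanced Power Management (APM) feature, and on FreeBSD it can usually be poked without WD's DOS-era .exe. A minimal sketch, assuming the drive is ada0 and actually honours APM - later Greens and some Reds reportedly ignore it:]

```shell
# Watch the head-park tally; attribute 193 climbing fast is the symptom:
smartctl -A /dev/ada0 | grep -E 'Power_On_Hours|Load_Cycle_Count'

# Disable APM outright so the drive stops parking its heads
# (the setting is lost on power cycle, so reapply it from rc.local):
camcontrol apm ada0

# Or keep APM enabled at its least aggressive level:
camcontrol apm ada0 -l 254

# smartmontools offers an equivalent knob:
smartctl -s apm,254 /dev/ada0
```

[Drives that simply ignore the SET FEATURES command will keep parking regardless; for those, WD's idle-timer tool (wdidle3.exe) was the only lever, as discussed above.]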
> Hello fellows.
>
> First of all, I managed over the past week+ to replace all(!) drives with new ones. I
> decided this time to use HGST 4TB Deskstar NAS (HGST HDN726040ALE614) instead of WD RED
> 4TB (WDC WD40EFRX-68N32N0). The one remaining WD RED is about to be replaced in the next days.
>
> Apart from the very long resilvering times (the first drive, the Western Digital WD RED
> 4TB with 64 MB cache and 5400 rpm, took 11 h, while the HGST drives, although considered
> faster (7200 rpm, 128 MB cache), each took 15 - 16 h), everything ran smoothly - except,
> as mentioned, the exorbitant recovery times.
>
> A very interesting point in this story: as you could see, the WD Caviar Green 3TB
> drives suffered from a high "193 Load_Cycle_Count" - almost 85 per hour. When replacing
> the drives, I discovered that one of the four drives was already a Western Digital RED
> 3TB NAS drive, but investigating its "193 Load_Cycle_Count" revealed that this drive
> also had an unusually high load count - see "smartctl -x" output attached. It seems, as
> you already stated, that the APM feature responsible for this isn't available. The drive
> was purchased in Q4/2013.
>
> The HGST drives are very(!) noisy - the head movement induces a notable ringing - while the
> WD drive(s) are/were really silent. The power consumption of the HGST drives is also higher.
> But apart from that, I'm disappointed that WD has implemented this
> "timebomb" Load_Cycle_Count behaviour as well.
>
> Thanks a lot for your help and considerations!
>
> Kind regards,
> Oliver
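[Editor's note: Oliver's "almost 85 per hour" figure is simply the raw value of attribute 193 divided by attribute 9 (Power_On_Hours). A quick sketch of the arithmetic, using the round numbers quoted earlier in the thread (2M cycles over 25,000 hours - illustrative values, not the attached smartctl output):]

```shell
# Head parks per powered-on hour, from two raw SMART attribute values.
# Substitute the raw values from your own "smartctl -A" output.
load_cycle_count=2000000     # attribute 193, Load_Cycle_Count (raw)
power_on_hours=25000         # attribute 9, Power_On_Hours (raw)

echo "$(( load_cycle_count / power_on_hours )) load cycles/hour"
```

[WD rated the Greens for on the order of 300,000 load/unload cycles, so a drive parking ~80 times an hour chews through that entire budget in under 4,000 hours of operation - which is why the counts above look like a "timebomb".]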
I have a fairly large number of HGST "NAS" drives in service across
multiple locations (several dozen units total.)  I don't like their 5TB
models much (they're comparatively slow for an unknown reason) but the
4TB and 6TB models I have in the field, while noisy and somewhat more
power-hungry (the latter comes from the 7200 rpm spindle speed), have yet to
suffer a failure.


-- 
Karl Denninger
karl@denninger.net <mailto:karl@denninger.net>
/The Market Ticker/
/[S/MIME encrypted email preferred]/

