Date:      Tue, 28 Jun 2022 20:42:42 -1000
From:      "parv/FreeBSD" <parv.0zero9+freebsd@gmail.com>
To:        Robert Huff <roberthuff@rcn.com>
Cc:        questions@freebsd.org
Subject:   Re: hardware recommendation
Message-ID:  <CABObuOp9KSddnzq6Etubagjbut8RQZJ7uwDHNVTm=Fo_ACZXJQ@mail.gmail.com>
In-Reply-To: <25275.14490.634554.58598@jerusalem.litteratus.org>
References:  <25275.14490.634554.58598@jerusalem.litteratus.org>


On Tue, Jun 28, 2022 at 7:23 AM Robert Huff wrote:

>
>         A disk drive on one of my machines is dying.
>         I'd like to replace it with a _reliable_ (the old one lasted 10+
> years at moderate loads) consumer-grade SATA II or higher drive of at
> least 500 gbytes.
>         Any particular product lines getting consistently good reviews?
>         And what should I avoid like flesh-eating bacteria?
>

Some anecdotes ...

- My laptops ...
-- ThinkPad X260, c. 2017, has been running its original 2.5" Seagate
500 GB disk (ST500LM021);

-- the other laptop has a 500 GB M.2 WD Blue <something>570 (or 750)
NVMe PCIe 3 SSD. For unrelated reasons, I do not use it much.

- At work ...
-- boot+OS SSD ...
---- using mostly Samsung 8x0 EVO SATA III SSDs as boot+OS disks in
mostly CentOS [68]/Rocky Linux machines & a few FreeBSD 13 machines;

---- recently got 3 machines with 500 GB Samsung 980 M.2 NVMe PCIe 3 SSDs
as boot+OS disks (one machine has 2x M.2 slots, so that got ZFS mirror
with FreeBSD 14; other 2 are single-SSD Rocky Linux ones);
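For what it's worth, a 2-SSD boot mirror like the FreeBSD 14 machine
above can be set up roughly as below. This is a minimal sketch, not
the exact commands used on that machine; the pool name "tank" and the
device names nda0/nda1 are assumptions, and the command needs root
and will destroy data on the named devices:

```shell
# Create a ZFS mirror from the two M.2 NVMe SSDs; FreeBSD 14 exposes
# NVMe namespaces as nda(4) devices by default.
zpool create -o ashift=12 tank mirror /dev/nda0 /dev/nda1

# Verify both SSDs appear under the mirror vdev and the pool is ONLINE.
zpool status tank
```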

-- Data archive with 4 separate ZFS RAID-Z[23] arrays (all disks were|are
CMR; FreeBSD 1[0-3]) ...
---- about 8 of 32 4TB WD Red NAS disks failed in ~2 years
while in use in 4 separate ZFS RAID-Z2 arrays (8 disks/array);

---- ~1-3 of 24 6TB WD Red Pro NAS disks failed in ~2 years in
2 separate -Z3 arrays (12 disks/array);

---- currently using 6TB WD Red Pro NAS disks in 2 separate -Z2
arrays (6 disks/array, taken from above -Z3 arrays), and 14TB Seagate
Exos16 disks in 2 separate -Z3 arrays (7 disks/vdev, 2 vdevs/array) for
just shy of 2 years;

---- One 14TB Seagate disk was dead on arrival, but I did not find
that out within the retailer's return period; I had to settle for a
likely refurbished disk from Seagate rather than the much-preferred
brand-new replacement from the retailer.
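For reference, one of the Exos16 arrays above (2 raidz3 vdevs, 7
disks/vdev) could be created roughly as below. A hedged sketch only:
the pool name "archive" and the da0-da13 device names are
assumptions, and the command needs root:

```shell
# One pool, two RAID-Z3 vdevs of 7 disks each; each vdev tolerates
# the loss of any 3 of its 7 disks.
zpool create archive \
    raidz3 da0 da1 da2 da3 da4 da5 da6 \
    raidz3 da7 da8 da9 da10 da11 da12 da13
```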


FWIW; YMMV; etc.


- parv



