Date:      Thu, 24 Mar 2016 08:16:45 -0400
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        Warner Losh <imp@bsdimp.com>
Cc:        Oliver Psotta <oliver.psotta@posteo.de>, freebsd-arm@freebsd.org
Subject:   Re: Effect of partitioning on wear-leveling
Message-ID:  <D7533887-E13D-4134-A982-F8158FA96CE0@gromit.dlib.vt.edu>
In-Reply-To: <CANCZdfoefzvP5g5hJjiRnUuzB1K_bhKEHeyMJU-V2B0MyeAUgA@mail.gmail.com>
References:  <20160321175952.GA83908@www.zefox.net> <1458586884.68920.96.camel@freebsd.org> <20160321221153.GB83908@www.zefox.net> <1458600070.68920.107.camel@freebsd.org> <1973487B-0AA7-468D-A9CC-319FBE2122F0@netgate.com> <CANCZdfrCWXAswe02Qd3tTiDL8O_4TGEWbhFqgft4Q9aKj7ixvg@mail.gmail.com> <20160322033417.GD83908@www.zefox.net> <201603230349.VAA20311@mail.lariat.net> <CANCZdfp5jffpHcnoDJg24stUydEssASeC4owmz7n-fmY=evGzQ@mail.gmail.com> <AC03ED3F-D113-4640-9DFD-DCAC193A5517@posteo.de> <CANCZdfoefzvP5g5hJjiRnUuzB1K_bhKEHeyMJU-V2B0MyeAUgA@mail.gmail.com>

On Mar 24, 2016, at 12:05 AM, Warner Losh <imp@bsdimp.com> wrote:

> When you have a fleet of thousands of ssds, you'll get failures no matter
> the quality...


Pertinent to the discussion of SSD failures is this article, which summarises a FAST 2016 paper on the subject:
http://www.zdnet.com/article/ssd-reliability-in-the-real-world-googles-experience/

Here are the "key conclusions" from the ZDNet article (and I quote):

"=E2=80=A2 Ignore Uncorrectable Bit Error Rate (UBER) specs. A =
meaningless number.
=E2=80=A2 Good news: Raw Bit Error Rate (RBER) increases slower than =
expected from wearout and is not correlated with UBER or other failures.
=E2=80=A2 High-end SLC drives are no more reliable that MLC drives.
=E2=80=A2 Bad news: SSDs fail at a lower rate than disks, but UBER rate =
is higher (see below for what this means).
=E2=80=A2 SSD age, not usage, affects reliability.
=E2=80=A2 Bad blocks in new SSDs are common, and drives with a large =
number of bad blocks are much more likely to lose hundreds of other =
blocks, most likely due to die or chip failure.
=E2=80=A2 30-80 percent of SSDs develop at least one bad block and 2-7 =
percent develop at least one bad chip in the first four years of =
deployment."

Cheers,

Paul.

>
> Warner
> On Mar 23, 2016 1:55 AM, "Oliver Psotta" <oliver.psotta@posteo.de> wrote:
>
>> Which SSDs failed on you, Warner? There sure are some rotten apples,
>> but the Samsung 840 pro, for example, were (are) quite reliable.
>>
>> -Oliver
>>
>>> On 23 Mar 2016, at 07:45, Warner Losh <imp@bsdimp.com> wrote:
>>>
>>> Hope your SSDs are better at reporting things than ours. We've seen some
>>> SSDs just fail even though the previous SMART data said we've used maybe
>>> 20% of the drive's write ability....
>>
>> _______________________________________________
>> freebsd-arm@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-arm
>> To unsubscribe, send any mail to "freebsd-arm-unsubscribe@freebsd.org"
>>
> _______________________________________________
> freebsd-arm@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-arm
> To unsubscribe, send any mail to "freebsd-arm-unsubscribe@freebsd.org"
>
