Date:      Mon, 5 Jul 2021 08:51:14 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        Pete French <petefrench@ingresso.co.uk>
Cc:        Stefan Esser <se@freebsd.org>,  FreeBSD Stable Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: ZFS + mysql appears to be killing my SSD's
Message-ID:  <CAOtMX2gWnEEKHHaDf0GmfC1wKAUqHK3ptNUxm23UFBsD_KHwKg@mail.gmail.com>
In-Reply-To: <9c71d627-55b8-2464-6cc9-489e4ce98049@ingresso.co.uk>
References:  <89c37c3e-22e8-006e-5826-33bd7db7739e@ingresso.co.uk> <2fd9b7e4-dc75-fedc-28d7-b98191167e6b@freebsd.org> <9c71d627-55b8-2464-6cc9-489e4ce98049@ingresso.co.uk>


On Mon, Jul 5, 2021 at 8:31 AM Pete French <petefrench@ingresso.co.uk>
wrote:

>
>
> On 05/07/2021 14:37, Stefan Esser wrote:
> > Hi Pete,
> >
> > have you checked the drive state and statistics with smartctl?
>
> Hi, thanks for the reply - yes, I did check the statistics, and they
> don't make a lot of sense. I was just looking at them again in fact.
>
> So, here's one of the machines where we changed a drive when this
> first started, four weeks ago:
>
> root@telehouse04:/home/webadmin # smartctl -a /dev/ada0 | grep Perc
> 169 Remaining_Lifetime_Perc 0x0000   082   082   000    Old_age   Offline      -       82
> root@telehouse04:/home/webadmin # smartctl -a /dev/ada1 | grep Perc
> 202 Percent_Lifetime_Remain 0x0030   100   100   001    Old_age   Offline      -       0
>
> Now, from that you might think the second drive was the one we
> changed, but no - it was the first one, which is now at 82% lifetime
> remaining! The other drive, still at 100%, has been in there a year.
> The drives are from different manufacturers, which unfortunately makes
> comparing most of the numbers tricky.
>
>
> I am now even more worried than when I sent the first email - if that
> 18% is accurate then I am going to be doing this again in another 4
> months, and that's not sustainable. It also looks as if this problem
> has got a lot worse recently, though I wasn't looking at the numbers
> before, only noticing the failures. If I look at the 'Percentage Used
> Endurance Indicator' instead of the 'Percent_Lifetime_Remain' value
> then I see some of those well over 200%. On the newer drives that
> value is 100 minus 'Percent_Lifetime_Remain', so I guess they have the
> same underlying metric.
>
> I didn't mention it in my original email, but I am encrypting these
> with geli. Does geli do any write amplification at all? That might
> explain the high write volumes...
>
> -pete.
>

If you're using 4K sectors anyway, then GELI does not create any extra
write amplification.   But if you're extra paranoid and you use the "-T"
option to "geli attach", then GELI will block TRIM commands.  That could
hurt SSD lifetime.  But I don't think "-T" is the default.  You are using
4K sectors, right?  ZFS's ashift is set to 12?
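
If you want to double-check, something like this is what I'd run - a
rough sketch, assuming a pool named "tank" and a GELI provider named
"ada0p3.eli" (substitute your own names), and an OpenZFS 2.x userland
for the TRIM bits:

# GELI provider sector size - you want 4096 here, not 512
geli list ada0p3.eli | grep Sectorsize

# ZFS ashift - 12 means 4K-aligned writes
zdb -C tank | grep ashift

# Is TRIM enabled? (autotrim is OpenZFS 2.x; older releases
# expose TRIM through the vfs.zfs.trim.* sysctls instead)
zpool get autotrim tank
zpool trim tank        # kick off a one-shot manual TRIM

# Rough write-amplification sanity check: compare what ZFS thinks
# it wrote (zpool iostat) against the drive's own counter, on
# drives that report one (often attribute 241, Total_LBAs_Written)
smartctl -a /dev/ada0 | grep -i lbas

And check your rc.conf / attach command line to make sure "-T" isn't
being passed to geli anywhere.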
-Alan



