Date:      Sat, 31 Mar 2018 09:13:23 -0600
From:      Warner Losh <imp@bsdimp.com>
To:        Michael Dexter <editor@callfortesting.org>
Cc:        Lev Serebryakov <lev@freebsd.org>, Chuck Tuffli <chuck@tuffli.net>,  Tom Evans via freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: smart(8) Call for Testing
Message-ID:  <CANCZdfo2z7NfcGCB9jgR4m0AaDu6GFnG4TpiRWwbzvxKGeQHbA@mail.gmail.com>
In-Reply-To: <4ac57e03-f5c5-d6f1-d7a8-595398f49015@callfortesting.org>
References:  <4754cb2f-76bb-a69b-0cf5-eff4d621eb29@callfortesting.org> <CAMXt9NbdN119RrHnZHOJD1T+HNLLpzgkKVStyTm=49dopBMoAQ@mail.gmail.com> <CAM0tzX1oTWTa0Nes11yXg5x4c30MmxdUyT6M1_c4-PWv2+Qbhw@mail.gmail.com> <CAMXt9NYMrtTNqNSx256mcYsPo48xnsa+CCYSoeFLzRsc+fQWMw@mail.gmail.com> <CAM0tzX32v2-=saT5iB4WVcsoVOtH+XE0OQoP7hEDB1xE+xk+sg@mail.gmail.com> <1d3f2cef-4c37-782e-7938-e0a2eebc8842@quip.cz> <A548BC90-815C-4C66-8E27-9A6F7480741D@bway.net> <7ED27465-1BC2-4522-873E-9ECE192EB7A2@ultra-secure.de> <e54ab9a7-835d-16c7-1fdd-9f8285c0642b@FreeBSD.org> <CAM0tzX3RanY=vZbCXTAHB3=kv6aVkuzO5pmwr9g+ZQoe+N1hVg@mail.gmail.com> <be4d85ef-1bd4-d666-42cb-41ad1bc67dd8@FreeBSD.org> <4ac57e03-f5c5-d6f1-d7a8-595398f49015@callfortesting.org>

On Fri, Mar 30, 2018 at 11:16 PM, Michael Dexter <editor@callfortesting.org>
wrote:

> On 3/29/18 6:43 AM, Lev Serebryakov wrote:
>
>>    Monitoring of values and alerting is VERY important (the number of
>> reallocated sectors is the main indicator of spinning HDD health, and
>> when it rises it must be known ASAP).
>>
>
> Another metric that frequently came up during outreach was any sudden
> increase in disk latency, usually indicating that you have between one and
> 24 hours to replace the device. I am curious what people are doing now to
> determine such changes in latency and where they feel such monitoring
> should exist in the stack.
>

Netflix has a monitoring program that uses gstat to gather average latency
stats and send them to our centralized data store. It's really only by
looking at the long-term trend that you'll see the spike in retries that
manifests itself as higher latencies. One problem with gstat, though, is
that it includes software queueing time. For many purposes that's fine, but
it becomes bothersome when you're trying to determine whether a spike is
due to extra load on the device or to a hardware problem. The CAM I/O
scheduler, when the dynamic scheduler is enabled, keeps all kinds of stats
about device latency, including a cumulative latency histogram. Those are
also useful things to look at.
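
To make that concrete, a collector along these lines would do it. This is a
minimal sketch, not our actual tooling; the interval, history depth, and
spike threshold are made-up numbers, and note that the ms/r column it
watches still includes the queueing time mentioned above:

    #!/usr/bin/env python3
    """Sample per-device read latency with gstat(8) and flag spikes
    against a moving baseline. Illustrative sketch only."""

    import subprocess
    from collections import defaultdict, deque

    INTERVAL = "5s"      # gstat sampling window (assumed; tune to taste)
    HISTORY = 120        # samples kept per device for the baseline
    SPIKE_FACTOR = 3.0   # alert when latency exceeds 3x the baseline

    history = defaultdict(lambda: deque(maxlen=HISTORY))

    def sample():
        """Run one batch-mode gstat pass (-b: collect once and exit,
        -p: physical providers only, -I: interval) and yield
        (device, ms_per_read) pairs."""
        out = subprocess.run(["gstat", "-bp", "-I", INTERVAL],
                             capture_output=True, text=True,
                             check=True).stdout
        for line in out.splitlines():
            f = line.split()
            # Data rows have 10 columns:
            # L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
            if len(f) == 10 and f[0] != "L(q)":
                try:
                    yield f[9], float(f[4])   # Name, ms/r
                except ValueError:
                    continue

    while True:  # each sample() call blocks for one INTERVAL
        for dev, ms_r in sample():
            past = history[dev]
            if past:
                baseline = sum(past) / len(past)
                if baseline > 0 and ms_r > SPIKE_FACTOR * baseline:
                    print(f"latency spike on {dev}: {ms_r:.1f} ms/r "
                          f"(baseline {baseline:.1f})")
            past.append(ms_r)

Swapping the print for a push to your data store, and watching ms/w as
well, is the obvious extension.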

At Netflix, though, we let the disk fail and then mark it as disabled. We
don't look for trends to predict possible failure, because we have a
fail-in-place model that doesn't care if there's data loss: all the data on
the machine is replicated from a central source of truth and can easily be
replaced.
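
For what it's worth, the reaction side of fail-in-place can be almost
trivially simple. The sketch below is hypothetical, not what we actually
run: it polls dmesg for CAM media errors, where a real deployment would
subscribe to devd(8) events instead, and "marking disabled" is just a
print:

    import re
    import subprocess

    def failed_disks():
        """Scan the kernel message buffer for I/O errors on ada/da
        devices. The error pattern is a rough placeholder."""
        dmesg = subprocess.run(["dmesg"], capture_output=True,
                               text=True).stdout
        pat = re.compile(r"\((a?da\d+):.*?(?:UNRECOVERED READ ERROR|"
                         r"Medium Error)")
        return sorted({m.group(1) for m in pat.finditer(dmesg)})

    # Run periodically (e.g. from cron). A real system would pull the
    # disk from service and let re-replication restore its data from
    # the source of truth.
    for dev in failed_disks():
        print(f"{dev} has failed; taking it out of service (fail in place)")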


> As for SNMP and friends, I consider those way up the stack with tools like
> smart(8) simply providing a building block.


In many ways, our data collection system at work is an alternative to SNMP.

Warner


