From owner-freebsd-fs@freebsd.org Thu Jan 12 03:19:57 2017
From: Ultima <ultima1252@gmail.com>
Date: Wed, 11 Jan 2017 22:19:56 -0500
Subject: Re: Sluggish performance on head r311648
To: Shane Ambler, Karli Sjöberg
Cc: freebsd-current@freebsd.org, freebsd-fs@freebsd.org
List-Id: Filesystems <freebsd-fs@freebsd.org>

> One thing I keep a look out for, is in gstat, if one drive is busier
> than the others, it's a clear sign of a drive dying. Offlining and
> replacing the drive usually makes a huge difference.

> Also, in all of these cases, SMART data shows no sign of a problem and
> no errors in 'zpool status'. Just watching to see if any of the drives
> is working harder than the others has been the surest way to
> troubleshoot performance issues, in my experience.

Staring at gstat for about 5 minutes, I'm not really sure any one drive
stands out. They all seem to vary in activity. Something that does stand
out is that on occasion a few will spike into the red, sometimes 1 or 2,
other times around 8 (24 drives in the pool total). I also noticed that
once, instead of all drives working at the same time, the activity seemed
to move like a wave: red activity hit at the top of gstat and worked its
way down. Not all drives hit red during this wave, around 16 over about 5
seconds. Not sure if this is out of the ordinary, though.

I did look at SMART before posting. One thing I thought about is the
corrected-errors count for each drive.
When I get some time I'll create a graph and try to determine the
possible bad drive(s, hopefully without the s) based on this information.

> Just to eliminate the simple - is the zpool capacity high? When a pool
> gets into the 80-90% capacity, performance drops.

The pool is at 28% capacity at the moment, according to zpool list.

On Wed, Jan 11, 2017 at 9:03 PM, Shane Ambler wrote:

> On 11/01/2017 15:32, Ultima wrote:
>
>> I've been noticing sluggish performance lately, maybe zfs? I first
>> noticed this a few days ago, right after upgrading on Jan 7th to
>> r311648; the last upgrade before that was around Dec 30-Jan 1 (not
>> sure of the rev). I decided to upgrade again today. I usually build
>> and install head every week or two, but I have been extremely busy
>> the past couple of months.
>>
>> FreeBSD U1 12.0-CURRENT FreeBSD 12.0-CURRENT #16 r311903: Tue Jan 10
>> 17:20:11 EST 2017 amd64
>>
>> Normally when one of my services scans a few directories it takes
>> about 15 seconds tops; it has been taking several minutes. I want to
>> note that this
>
> Just to eliminate the simple - is the zpool capacity high? When a pool
> gets into the 80-90% capacity, performance drops.
>
> --
> FreeBSD - the place to B...Storing Data
>
> Shane Ambler
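[A possible sketch of the corrected-error comparison discussed above. This
assumes SAS drives whose smartctl output (from sysutils/smartmontools)
includes the "Error counter log" table; the helper name corrected_total and
the daN device names are illustrative, not from the thread.]

```shell
#!/bin/sh
# Sum the "total errors corrected" column (field 5) of the read: and
# write: rows in smartctl's SAS error counter log, read from stdin.
corrected_total() {
  awk '/^read:/ || /^write:/ { sum += $5 } END { print sum + 0 }'
}

# Print one "<device> <corrected-total>" line per drive, suitable for
# graphing. Adjust the device list to match the pool's 24 members.
for d in /dev/da0 /dev/da1; do
  printf '%s\t%s\n' "$d" "$(smartctl -a "$d" | corrected_total)"
done
```

A drive whose corrected-error total keeps climbing relative to its peers
would be a candidate for the offline-and-replace approach Karli described,
even while 'zpool status' still shows no errors.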