From owner-freebsd-fs@freebsd.org Mon Feb 15 01:20:01 2016
From: Tom Curry <thomasrcurry@gmail.com>
To: David Adam
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Date: Sun, 14 Feb 2016 20:20:00 -0500
Subject: Re: Poor ZFS+NFSv3 read/write performance and panic

On Sun, Feb 14, 2016 at 3:40 AM, David Adam wrote:

> On Mon, 8 Feb 2016, Tom Curry wrote:
> > On Sun, Feb 7, 2016 at 11:58 AM, David Adam wrote:
> > >
> > > Just wondering if anyone has any idea how to identify which devices
> > > are implicated in ZFS' vdev_deadman(). I have updated the firmware
> > > on the mps(4) card that has our disks attached, but that hasn't
> > > helped.
> >
> > I too ran into this problem and spent quite some time troubleshooting
> > hardware. For me it turned out not to be hardware at all, but
> > software: specifically, the ZFS ARC. Looking at your stack I see some
> > arc reclaim up top, so it's possible you're running into the same
> > issue.
There is a monster > > of a PR that details this here > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 > > > > If you would like to test this theory out, the fastest way is to limit > the > > ARC by adding the following to /boot/loader.conf and rebooting > > vfs.zfs.arc_max="24G" > > > > Replacing 24G with what makes sense for your system, aim for 3/4 of total > > memory for starters. If this solves the problem there are more scientific > > methods to a permanent fix, one would be applying the patch in the PR > > above, another would be a more finely tuned arc_max value. > > Thanks Tom - this certainly did sound promising, but setting the ARC to > 11G of our 16G of RAM didn't help. `zfs-stats` confirmed that the ARC was > the expected size and that there was still 461 MB of RAM free. > > We'll keep looking! > > David Adam > zanchey@ucc.gu.uwa.edu.au > Did the system still panic or did it merely degrade in performance? When performance heads south are you swapping?