From: Freddie Cash <fjwcash@gmail.com>
To: Gary Palmer
Cc: Steven Hartland, stable@freebsd.org, Garrett Wollman
Date: Tue, 5 Mar 2013 10:17:50 -0800
Subject: Re: ZFS "stalls" -- and maybe we should be talking about defaults?

On Tue, Mar 5, 2013 at 7:22 AM, Gary Palmer wrote:

> Just as a note that there was a page I read in the past few months
> that pointed out that having a huge ARC may not always be in the best
> interests of the system.  Some operation on the filesystem (I forget
> what, apologies) caused the system to churn through the ARC and discard
> most of it, while regular I/O was blocked.

Huh.  What timing.

I've been fighting with our largest ZFS box (128 GB of RAM, 16 CPU cores,
2x SSD for SLOG, 2x SSD for L2ARC, 45x 2 TB HD for the pool in 6-drive
raidz2 vdevs) for the past week, trying to figure out why ZFS send/recv
just hangs after a while.  Everything is stuck in "D" state in "ps ax"
output, and top shows the l2arc_feed_ thread using 100% of one CPU.  Even
removing the L2ARC devices from the pool doesn't help; it just delays the
"hang".

The ARC was configured for 120 GB, with arc_meta_limit set to 90 GB.  Yes,
dedup and compression are enabled (it's a backups storage box, and we get
over a 5x combined dedup/compress ratio).  After several hours of running,
the ARC and wired memory would climb to 100+ GB, and the box would spend
most of its time "spinning", with almost zero I/O to the pool (only a few
KB/s of reads in "zpool iostat 1" or "gstat").  ZFS send/recv would
eventually complete, but what used to take 15-20 minutes was taking 6-8
hours.
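
(For context, those limits are just the usual ARC loader tunables plus the
arcstats sysctls I watch while the box is "spinning".  The snippet below is
an illustrative sketch of that configuration, not a paste from our actual
loader.conf; whether you set these at boot or at runtime is up to you.)

    # /boot/loader.conf -- illustrative values matching the setup described above
    vfs.zfs.arc_max="120G"
    vfs.zfs.arc_meta_limit="90G"

    # Runtime checks while the box is busy:
    sysctl kstat.zfs.misc.arcstats.size           # current ARC size in bytes
    sysctl kstat.zfs.misc.arcstats.arc_meta_used  # metadata portion of the ARC
    sysctl vfs.zfs.arc_meta_limit                 # effective metadata cap

Watching arcstats.size against wired memory in top is how I noticed the ARC
creeping up past 100 GB before the stalls set in.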
I've reduced the ARC to only 32 GB, with arc_meta_limit set to 28 GB, and
things are running much more smoothly now (50-200 MB/s of writes for 3-5
seconds every 10 s), and send/recv is back down to 10-15 minutes.

Who would have thought "too much RAM" would be an issue?  I'll play with
different ARC max settings over the next couple of days to see where the
problems start.  All of our ZFS boxes until this one had under 64 GB of
RAM.  (And we had issues with dedupe enabled on boxes with too little RAM,
as in under 32 GB.)

--
Freddie Cash
fjwcash@gmail.com