From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 08:18:00 2014
From: Matthias Gamsjager
Date: Thu, 6 Feb 2014 09:17:29 +0100
Subject: Re: ZFS and Wired memory, again
To: Anton Sayetsky
Cc: freebsd-fs@freebsd.org
List-Id: Filesystems
What are your vfs.zfs.zio.use_uma settings? Setting this back to 0 keeps the
wired memory in line with the ARC.

On Wed, Jan 29, 2014 at 10:30 AM, Matthias Gamsjager wrote:
> Found it: in the FreeBSD Current list, the thread with the subject "ARC
> 'pressured out', how to control/stabilize" looks quite similar.
>
>
> On Wed, Jan 29, 2014 at 10:28 AM, Matthias Gamsjager wrote:
>
>> I remember reading something similar a couple of days ago but can't find
>> the thread.
>>
>>
>> On Tue, Jan 28, 2014 at 7:50 PM, Anton Sayetsky wrote:
>>
>>> 2013-11-22 Anton Sayetsky:
>>> > Hello,
>>> >
>>> > I'm planning to deploy a ~150 TiB ZFS pool, and while playing with ZFS
>>> > I noticed that the amount of wired memory is MUCH bigger than the ARC
>>> > size (in the absence of other hungry memory consumers, of course). I'm
>>> > afraid this strange behavior may become even worse on a machine with a
>>> > big pool and some hundreds of gibibytes of RAM.
>>> >
>>> > So let me explain what happened.
>>> >
>>> > Immediately after booting, top says the following:
>>> > =====
>>> > Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
>>> > ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
>>> > =====
>>> > OK, wired mem - ARC = 92 MiB
>>> >
>>> > Then I started to read the pool (tar cpf /dev/null /).
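For reference, a minimal sketch of inspecting that knob. The sysctl name `vfs.zfs.zio.use_uma` is taken from the message above; the reading of the values (1 = ZIO buffers come from UMA zones, 0 = from the kernel malloc arena) and the use of /boot/loader.conf to persist it are assumptions based on how FreeBSD loader tunables normally work, not something stated in the thread:

```shell
# Report the current setting, if running on FreeBSD with ZFS loaded.
# (Assumed meaning: 1 = ZIO buffers allocated from UMA zones,
#  0 = allocated directly via kernel malloc.)
sysctl vfs.zfs.zio.use_uma 2>/dev/null \
    || echo "vfs.zfs.zio.use_uma not available on this system"

# On 9.x this appears to be a boot-time tunable, so to change it,
# add the following line to /boot/loader.conf and reboot:
#   vfs.zfs.zio.use_uma=0
```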
>>> > Memory usage when the ARC size is ~1 GiB:
>>> > =====
>>> > Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
>>> > ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
>>> > =====
>>> > 1410-1114=296 MiB
>>> >
>>> > Memory usage when the ARC size reaches its maximum of 2 GiB:
>>> > =====
>>> > Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
>>> > ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
>>> > =====
>>> > 2523-2067=456 MiB
>>> >
>>> > Memory usage a few minutes later:
>>> > =====
>>> > Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
>>> > ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
>>> > =====
>>> > 2721-2002=719 MiB
>>> >
>>> > So why has the wired RAM on a machine running only a minimal set of
>>> > services grown from 92 to 719 MiB? Sometimes I even see about a gig!
>>> > I'm using 9.2-RELEASE-p1 amd64. The test machine has a T5450 C2D CPU
>>> > and 4 G of RAM (the actual available amount is 3 G). The ZFS pool is
>>> > configured on a GPT partition of a single 1 TB HDD.
>>> > Disabling/enabling prefetch doesn't help. Limiting the ARC to 1 gig
>>> > doesn't help either.
>>> > When reading the pool, evict skips can increment very fast, and
>>> > sometimes the ARC metadata exceeds its limit (2x-5x).
>>> >
>>> > I've attached logs with the system configuration and outputs from top,
>>> > ps, zfs-stats, and vmstat:
>>> > conf.log = system configuration, also uploaded to
>>> > http://pastebin.com/NYBcJPeT
>>> > top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after
>>> > booting the system, http://pastebin.com/mudmEyG5
>>> > top_ps_zfs-stats_vmstat_1g-arc = after the ARC grew to 1 gig,
>>> > http://pastebin.com/4AC8dn5C
>>> > top_ps_zfs-stats_vmstat_fullmem = when the ARC reached its limit of 2
>>> > gigs, http://pastebin.com/bx7svEP0
>>> > top_ps_zfs-stats_vmstat_fullmem_2 = a few minutes later,
>>> > http://pastebin.com/qYWFaNeA
>>> >
>>> > What should I do next?
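As a quick sanity check on the figures quoted above, the wired-minus-ARC deltas can be recomputed directly; the gap grows from 296 to 719 MiB across the three snapshots. A small sketch, using only the MiB values copied from the top output in the message:

```shell
#!/bin/sh
# Wired-memory and ARC totals (MiB) from the three top snapshots above.
wired="1410 2523 2721"
arc="1114 2067 2002"

# Print wired - ARC for each snapshot: the non-ARC wired overhead.
set -- $arc
for w in $wired; do
    echo "$((w - $1)) MiB"   # 296, 456, 719 MiB respectively
    shift
done
```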
>>> BUMP
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"