From: Anton Sayetsky
Date: Tue, 28 Jan 2014 20:50:34 +0200
Subject: Re: ZFS and Wired memory, again
To: freebsd-fs@freebsd.org

2013-11-22 Anton Sayetsky:
> Hello,
>
> I'm planning to deploy a ~150 TiB ZFS pool, and while playing with ZFS
> I noticed that the amount of wired memory is MUCH bigger than the ARC
> size (in the absence of other hungry memory consumers, of course). I'm
> afraid this strange behavior may become even worse on a machine with a
> big pool and some hundreds of gibibytes of RAM.
>
> So let me explain what happened.
>
> Immediately after booting the system, top says the following:
> =====
> Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
> ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
> =====
> OK, wired mem - ARC = 92 MiB
>
> Then I started to read the pool (tar cpf /dev/null /).
> Memory usage when the ARC size is ~1 GiB:
> =====
> Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
> ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
> =====
> 1410 - 1114 = 296 MiB
>
> Memory usage when the ARC size reaches its maximum of 2 GiB:
> =====
> Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
> ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
> =====
> 2523 - 2067 = 456 MiB
>
> Memory usage a few minutes later:
> =====
> Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
> ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
> =====
> 2721 - 2002 = 719 MiB
>
> So why has wired memory on a machine running only a minimal set of
> services grown from 92 to 719 MiB? Sometimes I can even see about a
> gig! I'm using 9.2-RELEASE-p1 amd64. The test machine has a T5450 C2D
> CPU and 4 GiB of RAM (the actual available amount is 3 GiB). The ZFS
> pool is configured on a GPT partition of a single 1 TB HDD.
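
As a side note for anyone who wants to reproduce this: the same
wired/ARC numbers can be pulled from sysctl instead of eyeballing
top(1). A minimal sh sketch, assuming the stock sysctl names on 9.x
with the ZFS module loaded:

=====
#!/bin/sh
# Compute the "wired minus ARC" gap in MiB from sysctl values.
# (kstat.zfs.misc.arcstats.* only exists once zfs.ko is loaded.)
pagesize=$(sysctl -n hw.pagesize)
wired_pages=$(sysctl -n vm.stats.vm.v_wire_count)
arc_bytes=$(sysctl -n kstat.zfs.misc.arcstats.size)
wired_bytes=$((wired_pages * pagesize))
echo "wired: $((wired_bytes / 1048576)) MiB"
echo "arc:   $((arc_bytes / 1048576)) MiB"
echo "diff:  $(((wired_bytes - arc_bytes) / 1048576)) MiB"
=====
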
> Disabling/enabling prefetch doesn't help. Limiting the ARC to 1 GiB
> doesn't help either.
> When reading the pool, evict skips can increment very fast, and
> sometimes the ARC metadata exceeds its limit (2x-5x).
>
> I've attached logs with the system configuration and the output of
> top, ps, zfs-stats and vmstat:
> conf.log = system configuration, also uploaded to
> http://pastebin.com/NYBcJPeT
> top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after
> booting the system, http://pastebin.com/mudmEyG5
> top_ps_zfs-stats_vmstat_1g-arc = after the ARC has grown to 1 GiB,
> http://pastebin.com/4AC8dn5C
> top_ps_zfs-stats_vmstat_fullmem = when the ARC reached its limit of
> 2 GiB, http://pastebin.com/bx7svEP0
> top_ps_zfs-stats_vmstat_fullmem_2 = a few minutes later,
> http://pastebin.com/qYWFaNeA
>
> What should I do next?

BUMP
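
P.S. In case it helps anyone reproduce the test: the ARC cap and
prefetch toggle mentioned above are ordinary loader tunables, and the
evict-skip / metadata counters can be read from the arcstats sysctls.
A rough sketch using the stock names on 9.x (the 1 GiB cap is just the
value I experimented with):

=====
# /boot/loader.conf -- cap the ARC and disable file-level prefetch
# (illustrative values; remove prefetch_disable to re-enable prefetch)
vfs.zfs.arc_max="1G"
vfs.zfs.prefetch_disable="1"

# At runtime, watch the counters while reading the pool:
sysctl kstat.zfs.misc.arcstats.evict_skip
sysctl kstat.zfs.misc.arcstats.arc_meta_used
sysctl kstat.zfs.misc.arcstats.arc_meta_limit
=====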