From owner-freebsd-fs@FreeBSD.ORG Thu Feb 6 16:54:48 2014
Message-ID: <52F3BE38.6050103@platinum.linux.pl>
Date: Thu, 06 Feb 2014 17:54:16 +0100
From: Adam Nowacki
To: freebsd-fs@freebsd.org
Subject: Re: ZFS and Wired memory, again

So what exactly is the problem here? Free memory is essentially wasted
memory, and there is still plenty of it available, so there is no point
in shrinking wired memory yet. Once free memory drops below
vm.v_free_target you should see things change: that is when the page
daemon starts reclaiming, which is also what puts pressure on the ARC.
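
If you want to watch for that point, something like the following should
do (a rough sketch, nothing official; the sysctl names are as I find them
on 9.x, and the wired-minus-ARC arithmetic just reproduces the numbers
you computed from top):

  # free pages still available vs. the page daemon's wakeup target
  sysctl vm.stats.vm.v_free_count vm.v_free_target

  # the "mystery" gap in MiB: wired memory minus ARC size
  # (v_wire_count is in pages, arcstats.size is in bytes)
  wired_b=$(( $(sysctl -n vm.stats.vm.v_wire_count) * $(sysctl -n hw.pagesize) ))
  arc_b=$(sysctl -n kstat.zfs.misc.arcstats.size)
  echo "wired - ARC = $(( (wired_b - arc_b) / 1048576 )) MiB"

As long as v_free_count stays well above v_free_target, nothing asks ZFS
(or anything else holding wired pages) to give memory back, so the gap
simply sits there.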
On 2014-01-28 19:50, Anton Sayetsky wrote:
> 2013-11-22 Anton Sayetsky :
>> Hello,
>>
>> I'm planning to deploy a ~150 TiB ZFS pool, and while playing with ZFS
>> I noticed that the amount of wired memory is MUCH bigger than the ARC
>> size (in the absence of other hungry memory consumers, of course). I'm
>> afraid this strange behavior may become even worse on a machine with a
>> big pool and some hundreds of gibibytes of RAM.
>>
>> So let me explain what happened.
>>
>> Immediately after booting, top says the following:
>> =====
>> Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
>> ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
>> =====
>> OK, wired mem - ARC = 92 MiB
>>
>> Then I started to read the pool (tar cpf /dev/null /).
>> Memory usage when the ARC size is ~1 GiB:
>> =====
>> Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
>> ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
>> =====
>> 1410-1114=296 MiB
>>
>> Memory usage when the ARC size reaches its maximum of 2 GiB:
>> =====
>> Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
>> ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
>> =====
>> 2523-2067=456 MiB
>>
>> Memory usage a few minutes later:
>> =====
>> Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
>> ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
>> =====
>> 2721-2002=719 MiB
>>
>> So why has the wired RAM on a machine running only a minimal set of
>> services grown from 92 to 719 MiB? Sometimes I can even see about a gig!
>> I'm using 9.2-RELEASE-p1 amd64. The test machine has a T5450 C2D CPU and
>> 4 G of RAM (actual available amount is 3 G). The ZFS pool is configured
>> on a GPT partition of a single 1 TB HDD.
>> Disabling/enabling prefetch doesn't help. Limiting the ARC to 1 gig
>> doesn't help either.
>> When reading the pool, evict skips can increment very fast, and sometimes
>> ARC metadata exceeds its limit (by 2x-5x).
>>
>> I've attached logs with the system configuration and outputs from top,
>> ps, zfs-stats and vmstat:
>> conf.log = system configuration, also uploaded to http://pastebin.com/NYBcJPeT
>> top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after
>> booting the system, http://pastebin.com/mudmEyG5
>> top_ps_zfs-stats_vmstat_1g-arc = after the ARC has grown to 1 gig,
>> http://pastebin.com/4AC8dn5C
>> top_ps_zfs-stats_vmstat_fullmem = when the ARC reached its limit of 2 gigs,
>> http://pastebin.com/bx7svEP0
>> top_ps_zfs-stats_vmstat_fullmem_2 = a few minutes later,
>> http://pastebin.com/qYWFaNeA
>>
>> What should I do next?
> BUMP
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"