From: Steven Hartland
Date: Tue, 04 Nov 2014 12:55:18 +0000
To: freebsd-current@freebsd.org
Subject: Re: r273165. ZFS ARC: possible memory leak to Inact

This is likely caused by spikes in the UMA zones used by ARC. The VM never cleans UMA zones unless it hits a low-memory condition, which explains why your little script helps. Check the output of vmstat -z to confirm.
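
For a quick check, something along these lines (only a rough sketch, assuming the usual "name: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP" layout of vmstat -z output) will total up roughly how much memory is parked in free UMA items:

#!/usr/local/bin/python2.7
# Sketch only: sum SIZE * FREE over all UMA zones reported by vmstat -z.
# The comma-separated column layout is an assumption, not taken from the
# original message.
import subprocess

total = 0
for line in subprocess.check_output(["vmstat", "-z"]).splitlines():
    if ":" not in line:
        continue                      # skip the header line and blanks
    stats = line.split(":", 1)[1]
    fields = [f.strip() for f in stats.split(",")]
    try:
        size = int(fields[0])         # item size in bytes
        free = int(fields[3])         # items cached in the zone, unused
    except (ValueError, IndexError):
        continue
    total += size * free

print "approx. %d MiB held in free UMA items" % (total / (1024 * 1024))

If that total roughly matches the memory your script frees up, the zone caches are where it is sitting.
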
On 04/11/2014 11:47, Dmitriy Makarov wrote:
> Hi Current,
>
> It seems like there is a constant flow (leak) of memory from ARC to Inact in FreeBSD 11.0-CURRENT #0 r273165.
>
> Normally, our system (FreeBSD 11.0-CURRENT #5 r260625) keeps the ARC size very close to vfs.zfs.arc_max:
>
> Mem: 16G Active, 324M Inact, 105G Wired, 1612M Cache, 3308M Buf, 1094M Free
> ARC: 88G Total, 2100M MFU, 78G MRU, 39M Anon, 2283M Header, 6162M Other
>
> But after an upgrade (to FreeBSD 11.0-CURRENT #0 r273165) we observe an enormous amount of Inact memory in top:
>
> Mem: 21G Active, 45G Inact, 56G Wired, 357M Cache, 3308M Buf, 1654M Free
> ARC: 42G Total, 6025M MFU, 30G MRU, 30M Anon, 819M Header, 5214M Other
>
> The funny thing is that when we manually allocate and release memory, using a simple Python script:
>
> #!/usr/local/bin/python2.7
>
> import sys
> import time
>
> if len(sys.argv) != 2:
>     print "usage: fillmem "
>     sys.exit()
>
> count = int(sys.argv[1])
>
> megabyte = (0,) * (1024 * 1024 / 8)
>
> data = megabyte * count
>
> as:
>
> # ./simple_script 10000
>
> all those allocated megabytes 'migrate' from Inact to Free, and afterwards they are 'eaten' by ARC with no problem.
> Until Inact slowly grows back to the number it was before we ran the script.
>
> The current workaround is to periodically invoke this Python script from cron.
> This is an ugly workaround and we really don't like it in production.
>
> To answer possible questions about ARC efficiency:
> cache efficiency drops dramatically with every GiB pushed off the ARC.
>
> Before upgrade:
> Cache Hit Ratio: 99.38%
>
> After upgrade:
> Cache Hit Ratio: 81.95%
>
> We believe that ARC misbehaves and we ask for your assistance.
>
> ----------------------------------
>
> Some values from configs.
>
> HW: 128GB RAM, LSI HBA controller with 36 disks (stripe of mirrors).
>
> top output:
>
> In /boot/loader.conf :
> vm.kmem_size="110G"
> vfs.zfs.arc_max="90G"
> vfs.zfs.arc_min="42G"
> vfs.zfs.txg.timeout="10"
>
> -----------------------------------
>
> Thanks.
>
> Regards,
> Dmitriy
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"