Subject: Re: Sudden grow of memory in "Laundry" state
From: Robert <robert.ayrapetyan@gmail.com>
To: Mark Johnston
Cc: freebsd-hackers@freebsd.org
Date: Thu, 27 Sep 2018 16:04:15 -0700
Message-ID: <104be96a-c16b-7e7c-7d0d-00338ab5a106@gmail.com>
In-Reply-To: <20180911150849.GD92634@raichu>
References: <55b0dd7d-19a3-b566-0602-762b783e8ff3@gmail.com>
 <20180911005411.GF2849@raichu> <20180911150849.GD92634@raichu>

Is there a way to force pages to move back from laundry to Free or
Inactive? Also, what's the best way to identify the addresses of these
pages and "look" inside them?

Thanks.

On 09/11/18 08:08, Mark Johnston wrote:
> On Mon, Sep 10, 2018 at 10:18:31PM -0700, Robert wrote:
>> Hi, if I understood correctly, "written back to swap device" means
>> they came from swap at some point, but that's not the case (see
>> attached graph).
> Sorry, I didn't mean to imply that. Pages containing your
> application's shared memory, for instance, would simply be written to
> the swap device before being freed and reused for some other purpose.
>
> Your graph shows a sudden drop in free memory. Does that coincide
> with the sudden increase in the size of the laundry queue?
>
>> Swap was 16GB, and it shrank slightly when pages rapidly started to
>> move from free (or "Inactive"?) into the "Laundry" queue.
> Right. Specifically, the amount of free swap space decreased right at
> the time that the amount of free memory dropped, so what likely
> happened is that the system wrote some pages in "Laundry" to the swap
> device so that they could be freed, in response to the drop in free
> memory.
>
>> Here is the vmstat output:
>>
>> vmstat -s
>> [...]
>> 12 clean page reclamation shortfalls
> This line basically means that at some point we were writing pages to
> the swap device as fast as possible in order to reclaim some memory.
>
>> What do you think? How could pages be moved into "Laundry" without
>> being in swap?
> That's perfectly normal. Pages typically move from "Active" or
> "Inactive" to laundry.
>
>> On 09/10/18 17:54, Mark Johnston wrote:
>>> On Mon, Sep 10, 2018 at 11:44:52AM -0700, Robert wrote:
>>>> Hi, I have a server with FreeBSD 11.2 and 48 gigs of RAM where an
>>>> app that makes extensive use of shared memory (a 24GB allocation)
>>>> is running.
>>>>
>>>> After some random amount of time (usually a few days of running),
>>>> there is a sudden increase in "Laundry" memory (from zero to 24G
>>>> in a few minutes).
>>>>
>>>> Then it slowly shrinks.
>>>>
>>>> Are the described symptoms normal and expected? I've never noticed
>>>> anything like that on 11.1.
>>> The laundry queue contains dirty inactive pages, which need to be
>>> written back to the swap device or a filesystem before they can be
>>> reused. When the system is short of free pages, it will scan the
>>> inactive queue looking for clean pages, which can be freed cheaply.
>>> Dirty pages are moved to the laundry queue.
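The queue sizes discussed here can be watched directly. Below is a
minimal sketch, assuming the vm.stats.vm.v_free_count, v_inact_count,
and v_laundry_count sysctls exported on 11.x (these names are not from
this thread, so double-check them on your branch):

/*
 * Poll the VM page queue counters once a second.  All values are
 * page counts; multiply by the page size for bytes.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

static u_int
get_count(const char *name)
{
	u_int val;
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
		err(1, "sysctlbyname(%s)", name);
	return (val);
}

int
main(void)
{
	for (;;) {
		printf("free %u  inactive %u  laundry %u\n",
		    get_count("vm.stats.vm.v_free_count"),
		    get_count("vm.stats.vm.v_inact_count"),
		    get_count("vm.stats.vm.v_laundry_count"));
		sleep(1);
	}
}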
>>> My guess is that the system was running without a page shortage
>>> for a long time, and suddenly experienced some memory pressure.
>>> This caused lots of pages to move from the inactive queue to the
>>> laundry queue. Demand for free pages will then cause pages in the
>>> laundry queue to be written back and freed, or requeued if the page
>>> was referenced after being placed in the laundry queue. "vmstat -s"
>>> and "sysctl vm.stats" output might make things clearer.
>>>
>>> All this is to say that there's nothing particularly abnormal about
>>> what you're observing, though it's not clear what effects this
>>> behaviour has on your workload, if any. I can't think of any direct
>>> reason this would happen on 11.2 but not 11.1.
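On the original questions: there doesn't appear to be a direct knob
for moving pages out of the laundry queue, but as Mark notes above, a
page that is referenced while sitting in the queue is requeued rather
than written to swap, so simply touching the pages again has that
effect. To find the pages, "procstat -v <pid>" lists a process's
mappings, and mincore(2) reports per-page residency and dirty state
for a mapped range. A self-contained sketch (the anonymous test
mapping is only a stand-in for the real shared-memory segment):

/*
 * Map a small anonymous region, dirty every other page, then use
 * mincore(2) to report which pages are resident and modified.  For a
 * real workload you would pass the base address and length of the
 * shared-memory mapping (visible in "procstat -v") instead.
 */
#include <sys/mman.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	size_t pgsz = (size_t)getpagesize();
	size_t len = 16 * pgsz, npages = len / pgsz;
	char *base, *vec;

	base = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (base == MAP_FAILED)
		err(1, "mmap");

	/* Dirty every other page. */
	for (size_t i = 0; i < len; i += 2 * pgsz)
		base[i] = 1;

	if ((vec = malloc(npages)) == NULL)
		err(1, "malloc");
	if (mincore(base, len, vec) != 0)
		err(1, "mincore");
	for (size_t i = 0; i < npages; i++)
		printf("page %2zu: %s%s\n", i,
		    (vec[i] & MINCORE_INCORE) ? "resident" : "not resident",
		    (vec[i] & MINCORE_MODIFIED) ? ", dirty" : "");

	/*
	 * Reading a byte from each page re-references it, which is
	 * what causes a laundry-queue page to be requeued instead of
	 * laundered on the next scan.
	 */
	for (size_t i = 0; i < len; i += pgsz)
		(void)*(volatile char *)(base + i);

	return (0);
}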