From: Robert <robert.ayrapetyan@gmail.com>
Date: Mon, 10 Sep 2018 22:18:31 -0700
To: Mark Johnston
Cc: freebsd-hackers@freebsd.org
Subject: Re: Sudden grow of memory in "Laundry" state
List-Id: Technical Discussions relating to FreeBSD <freebsd-hackers@freebsd.org>

Hi, if I understood correctly, "written back to the swap device" means the pages came from swap at some point, but that is not the case (see the attached graph). Swap was 16GB, and it was only slightly reduced when pages rapidly started to move from the free (or "Inactive"?) queue into the "Laundry" queue.
Here is the vmstat output:

vmstat -s
 821885826 cpu context switches
3668809349 device interrupts
 470487370 software interrupts
 589970984 traps
3010410552 system calls
        25 kernel threads created
    378438 fork() calls
     21904 vfork() calls
         0 rfork() calls
      1762 swap pager pageins
      6367 swap pager pages paged in
     61678 swap pager pageouts
   1682860 swap pager pages paged out
      1782 vnode pager pageins
     16016 vnode pager pages paged in
         0 vnode pager pageouts
         0 vnode pager pages paged out
         3 page daemon wakeups
1535368624 pages examined by the page daemon
        12 clean page reclamation shortfalls
   2621520 pages reactivated by the page daemon
  18865126 copy-on-write faults
      5910 copy-on-write optimized faults
  36063024 zero fill pages zeroed
     21137 zero fill pages prezeroed
     12149 intransit blocking page faults
 704496861 total VM faults taken
      3164 page faults requiring I/O
         0 pages affected by kernel thread creation
  15318548 pages affected by fork()
    738228 pages affected by vfork()
         0 pages affected by rfork()
  61175662 pages freed
   1936826 pages freed by daemon
  24420300 pages freed by exiting processes
   3164850 pages active
   2203028 pages inactive
   6196772 pages in the laundry queue
    555637 pages wired down
    102762 pages free
      4096 bytes per page
2493686705 total name lookups
          cache hits (99% pos + 0% neg) system 0% per-directory
          deletions 0%, falsehits 0%, toolong 0%

What do you think? How could pages be moved into "Laundry" without ever being in swap?

Thanks.

On 09/10/18 17:54, Mark Johnston wrote:
> On Mon, Sep 10, 2018 at 11:44:52AM -0700, Robert wrote:
>> Hi, I have a server with FreeBSD 11.2 and 48 Gigs of RAM where an app
>> with extensive usage of shared memory (24GB allocation) is running.
>>
>> After some random amount of time (usually a few days of running),
>> there is a sudden increase in "Laundry" memory (it grows from zero to
>> 24G in a few minutes).
>>
>> Then it slowly reduces.
>>
>> Are the described symptoms normal and expected? I've never noticed
>> anything like that on 11.1.
> The laundry queue contains dirty inactive pages, which need to be
> written back to the swap device or a filesystem before they can be
> reused. When the system is short of free pages, it will scan the
> inactive queue looking for clean pages, which can be freed cheaply.
> Dirty pages are moved to the laundry queue instead. My guess is that
> the system was running without a page shortage for a long time, and
> then suddenly experienced some memory pressure. This caused lots of
> pages to move from the inactive queue to the laundry queue. Demand
> for free pages will then cause pages in the laundry queue to be
> written back and freed, or requeued if the page was referenced after
> being placed in the laundry queue. "vmstat -s" and "sysctl vm.stats"
> output might make things more clear.
>
> All this is to say that there's nothing particularly abnormal about
> what you're observing, though it's not clear what effect this
> behaviour has on your workload, if any. I can't think of any direct
> reason this would happen on 11.2 but not 11.1.
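For what it's worth, the queue sizes in the vmstat output above can be converted to bytes to sanity-check the numbers. A minimal sketch (the page counts and the 4096-byte page size are taken verbatim from the "vmstat -s" output in this thread; nothing else is assumed):

```python
# Convert the page-queue counts from the "vmstat -s" output above into GiB.
PAGE_SIZE = 4096  # bytes per page, per vmstat

queues = {
    "active": 3164850,
    "inactive": 2203028,
    "laundry": 6196772,
    "wired": 555637,
    "free": 102762,
}

for name, pages in queues.items():
    gib = pages * PAGE_SIZE / 2**30
    print(f"{name:>8}: {gib:6.2f} GiB")
```

The laundry queue alone works out to roughly 23.6 GiB, which is suspiciously close to the size of the 24GB shared memory segment mentioned in the original report, i.e. consistent with the dirty shared-memory pages having been pushed from inactive into laundry under memory pressure.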