Subject: Re: Sudden grow of memory in "Laundry" state
From: Robert <robert.ayrapetyan@gmail.com>
To: Mark Johnston
Cc: freebsd-hackers@freebsd.org
Date: Mon, 10 Sep 2018 22:23:01 -0700
Message-ID: <9587adab-2084-9fc1-df75-254d9f17fecb@gmail.com>
References: <55b0dd7d-19a3-b566-0602-762b783e8ff3@gmail.com> <20180911005411.GF2849@raichu>

sysctl vm.stats
vm.stats.object.bypasses: 44686
vm.stats.object.collapses: 1635786
vm.stats.misc.cnt_prezero: 0
vm.stats.misc.zero_page_count: 29511
vm.stats.vm.v_kthreadpages: 0
vm.stats.vm.v_rforkpages: 0
vm.stats.vm.v_vforkpages: 738592
vm.stats.vm.v_forkpages: 15331959
vm.stats.vm.v_kthreads: 25
vm.stats.vm.v_rforks: 0
vm.stats.vm.v_vforks: 21915
vm.stats.vm.v_forks: 378768
vm.stats.vm.v_interrupt_free_min: 2
vm.stats.vm.v_pageout_free_min: 34
vm.stats.vm.v_cache_count: 0
vm.stats.vm.v_laundry_count: 6196772
vm.stats.vm.v_inactive_count: 2205526
vm.stats.vm.v_inactive_target: 390661
vm.stats.vm.v_active_count: 3163069
vm.stats.vm.v_wire_count: 556447
vm.stats.vm.v_free_count: 101235
vm.stats.vm.v_free_min: 77096
vm.stats.vm.v_free_target: 260441
vm.stats.vm.v_free_reserved: 15981
vm.stats.vm.v_page_count: 12223372
vm.stats.vm.v_page_size: 4096
vm.stats.vm.v_tfree: 61213188
vm.stats.vm.v_pfree: 24438917
vm.stats.vm.v_dfree: 1936826
vm.stats.vm.v_tcached: 0
vm.stats.vm.v_pdshortfalls: 12
vm.stats.vm.v_pdpages: 1536983413
vm.stats.vm.v_pdwakeups: 3
vm.stats.vm.v_reactivated: 2621520
vm.stats.vm.v_intrans: 12150
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_vnodepgsin: 16016
vm.stats.vm.v_vnodeout: 0
vm.stats.vm.v_vnodein: 1782
vm.stats.vm.v_swappgsout: 1682860
vm.stats.vm.v_swappgsin: 6368
vm.stats.vm.v_swapout: 61678
vm.stats.vm.v_swapin: 1763
vm.stats.vm.v_ozfod: 21498
vm.stats.vm.v_zfod: 36072114
vm.stats.vm.v_cow_optim: 5912
vm.stats.vm.v_cow_faults: 18880051
vm.stats.vm.v_io_faults: 3165
vm.stats.vm.v_vm_faults: 705101188
vm.stats.sys.v_soft: 470906002
vm.stats.sys.v_intr: 3743337461
vm.stats.sys.v_syscall: 3134154383
vm.stats.sys.v_trap: 590473243
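For scale, a quick back-of-the-envelope check (my arithmetic, not part
of the sysctl output above): v_laundry_count is 6196772 pages, and at
v_page_size = 4096 bytes per page that is 6196772 * 4096 bytes, roughly
23.6 GiB, i.e. almost exactly the 24 GB shared memory allocation
described at the bottom of this thread.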
On 09/10/18 22:18, Robert wrote:
> Hi, if I understood correctly, "written back to the swap device" means
> they came from swap at some point, but that is not the case (see the
> attached graph).
>
> Swap was 16 GB, and it shrank only slightly when pages rapidly started
> to move from free (or "Inactive"?) into the "Laundry" queue.
>
> Here is the vmstat output:
>
> vmstat -s
>  821885826 cpu context switches
> 3668809349 device interrupts
>  470487370 software interrupts
>  589970984 traps
> 3010410552 system calls
>         25 kernel threads created
>     378438 fork() calls
>      21904 vfork() calls
>          0 rfork() calls
>       1762 swap pager pageins
>       6367 swap pager pages paged in
>      61678 swap pager pageouts
>    1682860 swap pager pages paged out
>       1782 vnode pager pageins
>      16016 vnode pager pages paged in
>          0 vnode pager pageouts
>          0 vnode pager pages paged out
>          3 page daemon wakeups
> 1535368624 pages examined by the page daemon
>         12 clean page reclamation shortfalls
>    2621520 pages reactivated by the page daemon
>   18865126 copy-on-write faults
>       5910 copy-on-write optimized faults
>   36063024 zero fill pages zeroed
>      21137 zero fill pages prezeroed
>      12149 intransit blocking page faults
>  704496861 total VM faults taken
>       3164 page faults requiring I/O
>          0 pages affected by kernel thread creation
>   15318548 pages affected by fork()
>     738228 pages affected by vfork()
>          0 pages affected by rfork()
>   61175662 pages freed
>    1936826 pages freed by daemon
>   24420300 pages freed by exiting processes
>    3164850 pages active
>    2203028 pages inactive
>    6196772 pages in the laundry queue
>     555637 pages wired down
>     102762 pages free
>       4096 bytes per page
> 2493686705 total name lookups
>            cache hits (99% pos + 0% neg) system 0% per-directory
>            deletions 0%, falsehits 0%, toolong 0%
>
> What do you think? How could pages move into "Laundry" without ever
> having been in swap?
>
> Thanks.
>
>
> On 09/10/18 17:54, Mark Johnston wrote:
>> On Mon, Sep 10, 2018 at 11:44:52AM -0700, Robert wrote:
>>> Hi, I have a server with FreeBSD 11.2 and 48 GB of RAM where an app
>>> that makes extensive use of shared memory (a 24 GB allocation) is
>>> running.
>>>
>>> After some random amount of time (usually a few days of running),
>>> there is a sudden burst of growth in "Laundry" memory (from zero to
>>> 24 GB in a few minutes).
>>>
>>> Then it slowly shrinks.
>>>
>>> Are the described symptoms normal and expected? I've never noticed
>>> anything like that on 11.1.
>> The laundry queue contains dirty inactive pages, which need to be
>> written back to the swap device or a filesystem before they can be
>> reused.  When the system is short of free pages, it will scan the
>> inactive queue looking for clean pages, which can be freed cheaply.
>> Dirty pages are moved to the laundry queue.  My guess is that the
>> system was running without a page shortage for a long time, and
>> suddenly experienced some memory pressure.  This caused lots of
>> pages to move from the inactive queue to the laundry queue.  Demand
>> for free pages will then cause pages in the laundry queue to be
>> written back and freed, or requeued if a page was referenced after
>> being placed in the laundry queue.  "vmstat -s" and "sysctl vm.stats"
>> output might make things clearer.
>>
>> All this is to say that there's nothing particularly abnormal about
>> what you're observing, though it's not clear what effects this
>> behaviour has on your workload, if any.  I can't think of any direct
>> reason this would happen on 11.2 but not 11.1.
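For anyone who wants to watch an episode like this as it happens, below
is a minimal monitoring sketch (not from the original thread) that polls
the page-queue counters discussed above via sysctlbyname(3). It assumes
FreeBSD 11.x, where the vm.stats.vm.* counters are exported as u_int;
the file name and output format are my own invention.

/*
 * laundrywatch.c -- watch the VM page queues once a second.
 * A rough sketch, assuming FreeBSD 11.x, where these counters are
 * exported as u_int.  Build with: cc -o laundrywatch laundrywatch.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Fetch one u_int counter by name; exit loudly on failure. */
static unsigned int
read_uint(const char *name)
{
        unsigned int val;
        size_t len = sizeof(val);

        if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
                err(1, "sysctlbyname(%s)", name);
        return (val);
}

int
main(void)
{
        unsigned int pgsz = read_uint("vm.stats.vm.v_page_size");

        for (;;) {
                uint64_t laundry = read_uint("vm.stats.vm.v_laundry_count");
                uint64_t inactive = read_uint("vm.stats.vm.v_inactive_count");
                uint64_t freecnt = read_uint("vm.stats.vm.v_free_count");

                /* Report queue sizes in MiB so sudden shifts stand out. */
                printf("laundry %7ju MiB  inactive %7ju MiB  free %7ju MiB\n",
                    (uintmax_t)(laundry * pgsz >> 20),
                    (uintmax_t)(inactive * pgsz >> 20),
                    (uintmax_t)(freecnt * pgsz >> 20));
                sleep(1);
        }
}

Running this in a spare terminal while the laundry episode plays out,
alongside swapinfo(8), would show whether laundered pages are actually
being written to the swap device as they are freed.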