From nobody Fri Aug 4 14:44:10 2023
Date: Fri, 4 Aug 2023 10:44:10 -0400
From: Mark Johnston
To: Shrikanth Kamath
Cc: freebsd-hackers@freebsd.org
Subject: Re: How to watch Active pagequeue transitions with DTrace in the vm layer
List-Id: Technical discussions relating to FreeBSD
List-Archive: https://lists.freebsd.org/archives/freebsd-hackers
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Aug 04, 2023 at 01:31:04AM -0700, Shrikanth Kamath wrote:
> Thanks and appreciate your response Mark, a follow up query, so the system
> was probably at some point in the state where
> there were no pages in the laundry or even had pages backed by swap
> (refer the top snapshot below). The two heavy applications with 12G
> resident + Wired + Buf already caused the Free to drop close to the
> minimum threshold, any further memory demand would have the pages of
> these applications move to laundry or swap, then would transition to
> Inactive or Laundry, later when these pages were referenced back the
> pagedaemon would move them back to the Active? Is that a correct
> understanding?

If there is a shortage of free pages, the page daemon will scan the
inactive queue, trying to reclaim clean pages.  Dirty pages go into the
laundry; once the laundry is "large enough", the page daemon will clean
pages in the laundry by writing them to swap.

If, while scanning a page in the inactive or laundry queues, the page
daemon notices that the page had been accessed since it was last
visited (e.g., the "accessed" bit is set on a page table entry mapping
the page), the page daemon will generally move it to the active queue.
This happens lazily: if there is no demand for free pages, an accessed
page can stay in the inactive/laundry queues indefinitely.

> last pid: 20494;  load averages: 0.38, 0.73, 0.80   up 0+01:49:05  21:14:49
> Mem: 9439M Active, 3638M Inact, 2644M Wired, 888M Buf, 413M Free
> Swap: 8192M Total, 8192M Free
>
>   PID USERNAME   THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
> 12043 root         5  22    0  9069M  7752M kqread   2  49:37   6.25% app1
> 12051 root         1  20    0  2704M  1964M select   3   0:41   0.00% app2
>
> So if I run DTrace probe on vm_page_enqueue I will probably see that
> pagedaemon might be the thread that moved all those pages to Active? Is
> there a way to associate these to the process which referenced these pages

The page daemon does not use vm_page_enqueue() to move pages back to
the active queue.  Instead, check the vmstat -s counter I had
mentioned, "pages reactivated by the page daemon".
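[Editor's note: as a concrete starting point, the following is a hedged
sketch of how one might observe reactivations on a FreeBSD box.  The
counter is read from vmstat -s as Mark suggests; the fbt probe on
vm_page_activate() is an assumption about this particular kernel build
(fbt probes attach to raw kernel function names, which can be inlined
or renamed across FreeBSD versions, and the page daemon's own
reactivation path may not pass through this function), so verify the
probe exists before relying on it.]

```sh
# Snapshot the reactivation counter before and after the workload runs;
# the delta is the number of pages the page daemon moved back to the
# active queue (run as root):
vmstat -s | grep -i 'reactivated'

# Check whether the assumed probe point exists on this kernel first:
dtrace -l -n 'fbt::vm_page_activate:entry'

# If it does, count activations by the thread performing them.  Note
# that execname will usually show the faulting process (or "pagedaemon"),
# not necessarily the process that owns the page:
dtrace -n 'fbt::vm_page_activate:entry { @[execname] = count(); }'
```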