From: Alan Cox
To: c0re dumped
Cc: freebsd-hackers@freebsd.org
Reply-To: alc@freebsd.org
Subject: Re: Problem with vm.pmap.shpgperproc and vm.pmap.pv_entry_max
Date: Sun, 5 Jul 2009 14:25:26 -0500

On Fri, Jul 3, 2009 at 8:18 AM, c0re dumped wrote:

> So, I never had a problem with this server, but recently it started
> giving me the following messages *every* minute:
>
> Jul 3 10:04:00 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:05:00 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:06:00 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:07:01 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:08:01 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:09:01 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:10:01 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
> Jul 3 10:11:01 squid kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable.
>
> This server is running Squid + DansGuardian. The users are complaining
> about slow navigation and they are driving me crazy!
>
> Has anyone faced this problem before?
>
> Some info:
>
> # uname -a
> FreeBSD squid 7.2-RELEASE FreeBSD 7.2-RELEASE #0: Fri May 1 08:49:13
> UTC 2009 root@walker.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386
>
> # sysctl vm
> vm.vmtotal:
> System wide totals computed every five seconds: (values in kilobytes)
> ===============================================
> Processes:             (RUNQ: 1 Disk Wait: 1 Page Wait: 0 Sleep: 230)
> Virtual Memory:        (Total: 19174412K, Active 9902152K)
> Real Memory:           (Total: 1908080K Active 1715908K)
> Shared Virtual Memory: (Total: 647372K Active: 10724K)
> Shared Real Memory:    (Total: 68092K Active: 4436K)
> Free Memory Pages:     88372K
>
> vm.loadavg: { 0.96 0.96 1.13 }
> vm.v_free_min: 4896
> vm.v_free_target: 20635
> vm.v_free_reserved: 1051
> vm.v_inactive_target: 30952
> vm.v_cache_min: 20635
> vm.v_cache_max: 41270
> vm.v_pageout_free_min: 34
> vm.pageout_algorithm: 0
> vm.swap_enabled: 1
> vm.kmem_size_scale: 3
> vm.kmem_size_max: 335544320
> vm.kmem_size_min: 0
> vm.kmem_size: 335544320
> vm.nswapdev: 1
> vm.dmmax: 32
> vm.swap_async_max: 4
> vm.zone_count: 84
> vm.swap_idle_threshold2: 10
> vm.swap_idle_threshold1: 2
> vm.exec_map_entries: 16
> vm.stats.misc.zero_page_count: 0
> vm.stats.misc.cnt_prezero: 0
> vm.stats.vm.v_kthreadpages: 0
> vm.stats.vm.v_rforkpages: 0
> vm.stats.vm.v_vforkpages: 340091
> vm.stats.vm.v_forkpages: 3604123
> vm.stats.vm.v_kthreads: 53
> vm.stats.vm.v_rforks: 0
> vm.stats.vm.v_vforks: 2251
> vm.stats.vm.v_forks: 19295
> vm.stats.vm.v_interrupt_free_min: 2
> vm.stats.vm.v_pageout_free_min: 34
> vm.stats.vm.v_cache_max: 41270
> vm.stats.vm.v_cache_min: 20635
> vm.stats.vm.v_cache_count: 5734
> vm.stats.vm.v_inactive_count: 242259
> vm.stats.vm.v_inactive_target: 30952
> vm.stats.vm.v_active_count: 445958
> vm.stats.vm.v_wire_count: 58879
> vm.stats.vm.v_free_count: 16335
> vm.stats.vm.v_free_min: 4896
> vm.stats.vm.v_free_target: 20635
> vm.stats.vm.v_free_reserved: 1051
> vm.stats.vm.v_page_count: 769244
> vm.stats.vm.v_page_size: 4096
> vm.stats.vm.v_tfree: 12442098
> vm.stats.vm.v_pfree: 1657776
> vm.stats.vm.v_dfree: 0
> vm.stats.vm.v_tcached: 253415
> vm.stats.vm.v_pdpages: 254373
> vm.stats.vm.v_pdwakeups: 14
> vm.stats.vm.v_reactivated: 414
> vm.stats.vm.v_intrans: 1912
> vm.stats.vm.v_vnodepgsout: 0
> vm.stats.vm.v_vnodepgsin: 6593
> vm.stats.vm.v_vnodeout: 0
> vm.stats.vm.v_vnodein: 891
> vm.stats.vm.v_swappgsout: 0
> vm.stats.vm.v_swappgsin: 0
> vm.stats.vm.v_swapout: 0
> vm.stats.vm.v_swapin: 0
> vm.stats.vm.v_ozfod: 56314
> vm.stats.vm.v_zfod: 2016628
> vm.stats.vm.v_cow_optim: 1959
> vm.stats.vm.v_cow_faults: 584331
> vm.stats.vm.v_vm_faults: 3661086
> vm.stats.sys.v_soft: 23280645
> vm.stats.sys.v_intr: 18528397
> vm.stats.sys.v_syscall: 1990471112
> vm.stats.sys.v_trap: 8079878
> vm.stats.sys.v_swtch: 105613021
> vm.stats.object.bypasses: 14893
> vm.stats.object.collapses: 55259
> vm.v_free_severe: 2973
> vm.max_proc_mmap: 49344
> vm.old_msync: 0
> vm.msync_flush_flags: 3
> vm.boot_pages: 48
> vm.max_wired: 255475
> vm.pageout_lock_miss: 0
> vm.disable_swapspace_pageouts: 0
> vm.defer_swapspace_pageouts: 0
> vm.swap_idle_enabled: 0
> vm.pageout_stats_interval: 5
> vm.pageout_full_stats_interval: 20
> vm.pageout_stats_max: 20635
> vm.max_launder: 32
> vm.phys_segs:
> SEGMENT 0:
>
> start:     0x1000
> end:       0x9a000
> free list: 0xc0cca168
>
> SEGMENT 1:
>
> start:     0x100000
> end:       0x400000
> free list: 0xc0cca168
>
> SEGMENT 2:
>
> start:     0x1025000
> end:       0xbc968000
> free list: 0xc0cca060
>
> vm.phys_free:
> FREE LIST 0:
>
>   ORDER (SIZE)  |  NUMBER
>                 |  POOL 0  |  POOL 1
>   ------------  --  ------  --  ------
>   10 ( 4096K)   |       0  |       0
>    9 ( 2048K)   |       0  |       0
>    8 ( 1024K)   |       0  |       0
>    7 (  512K)   |       0  |       0
>    6 (  256K)   |       0  |       0
>    5 (  128K)   |       0  |       0
>    4 (   64K)   |       0  |       0
>    3 (   32K)   |       0  |       0
>    2 (   16K)   |       0  |       0
>    1 (    8K)   |       0  |       0
>    0 (    4K)   |      24  |    3562
>
> FREE LIST 1:
>
>   ORDER (SIZE)  |  NUMBER
>                 |  POOL 0  |  POOL 1
>   ------------  --  ------  --  ------
>   10 ( 4096K)   |       0  |       0
>    9 ( 2048K)   |       0  |       0
>    8 ( 1024K)   |       0  |       0
>    7 (  512K)   |       0  |       0
>    6 (  256K)   |       0  |       0
>    5 (  128K)   |       0  |       2
>    4 (   64K)   |       0  |       3
>    3 (   32K)   |       6  |      11
>    2 (   16K)   |       6  |      21
>    1 (    8K)   |      14  |      35
>    0 (    4K)   |      20  |      70
>
> vm.reserv.reclaimed: 187
> vm.reserv.partpopq:
> LEVEL     SIZE   NUMBER
>
>    -1:  71756K,      19
>
> vm.reserv.freed: 35575
> vm.reserv.broken: 94
> vm.idlezero_enable: 0
> vm.kvm_free: 310374400
> vm.kvm_size: 1073737728
> vm.pmap.pmap_collect_active: 0
> vm.pmap.pmap_collect_inactive: 0
> vm.pmap.pv_entry_spare: 50408
> vm.pmap.pv_entry_allocs: 38854797
> vm.pmap.pv_entry_frees: 37052501
> vm.pmap.pc_chunk_tryfail: 0
> vm.pmap.pc_chunk_frees: 130705
> vm.pmap.pc_chunk_allocs: 136219
> vm.pmap.pc_chunk_count: 5514
> vm.pmap.pv_entry_count: 1802296
> vm.pmap.pde.promotions: 0
> vm.pmap.pde.p_failures: 0
> vm.pmap.pde.mappings: 0
> vm.pmap.pde.demotions: 0
> vm.pmap.shpgperproc: 200
> vm.pmap.pv_entry_max: 2002224
> vm.pmap.pg_ps_enabled: 0
>
> Both vm.pmap.shpgperproc and vm.pmap.pv_entry_max are at their default
> values. I read here
> (http://lists.freebsd.org/pipermail/freebsd-hackers/2003-May/000695.html)
> that it's not a good idea to increase these values arbitrarily.

There are two things that you can do:

(1) Enable superpages by setting vm.pmap.pg_ps_enabled to "1" in
/boot/loader.conf.  A 4MB superpage mapping on i386 consumes a single PV
entry instead of the 1024 entries that would be consumed by mapping 4MB
of 4KB pages.  Whether or not this will help depends on aspects of Squid
and DansGuardian that I can't predict.

(2) You shouldn't be afraid of increasing vm.pmap.pv_entry_max.  However,
you should watch vm.kvm_free as you do this.  It will decrease in
proportion to the increase in vm.pmap.pv_entry_max.  Don't let
vm.kvm_free drop too close to 0; I would consider anything on the order
of 25-50MB to be too close.

Regards,
Alan
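
For concreteness, a minimal sketch of how the two suggestions above are
typically applied (the loader.conf lines are illustrative only; in
particular, the pv_entry_max value shown is an example, not a
recommendation for this workload):

    # /boot/loader.conf -- both are loader tunables, so they take
    # effect at the next reboot
    vm.pmap.pg_ps_enabled="1"         # (1) enable superpages
    vm.pmap.pv_entry_max="3000000"    # (2) example value: raise the PV entry limit

    # After rebooting, compare PV entry usage against the new limit and
    # keep an eye on the remaining kernel virtual address space:
    # sysctl vm.pmap.pv_entry_count vm.pmap.pv_entry_max vm.kvm_free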