From: "Polyack, Steve"
To: Alan Cox, freebsd-stable@freebsd.org
Date: Wed, 20 Aug 2014 10:56:56 -0500
Subject: RE: vmdaemon CPU usage and poor performance in 10.0-RELEASE

> -----Original Message-----
> From: Alan Cox [mailto:alc@rice.edu]
> Sent: Wednesday, August 20, 2014 11:55 AM
> To: Polyack, Steve; freebsd-stable@freebsd.org
> Subject: Re: vmdaemon CPU usage and poor performance in 10.0-RELEASE
>
> On 08/20/2014 09:55, Polyack, Steve wrote:
> >> -----Original Message-----
> >> From: Polyack, Steve
> >> Sent: Wednesday, August 20, 2014 9:14 AM
> >> To: Polyack, Steve; Alan Cox; freebsd-stable@freebsd.org
> >> Subject: RE: vmdaemon CPU usage and poor performance in 10.0-RELEASE
> >>
> >>
> >>> -----Original Message-----
> >>> From: owner-freebsd-stable@freebsd.org
> >>> [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Polyack, Steve
> >>> Sent: Tuesday, August 19, 2014 12:37 PM
> >>> To: Alan Cox; freebsd-stable@freebsd.org
> >>> Subject: RE: vmdaemon CPU usage and poor performance in 10.0-RELEASE
> >>>
> >>>> -----Original Message-----
> >>>> From: owner-freebsd-stable@freebsd.org
> >>>> [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Alan Cox
> >>>> Sent: Monday, August 18, 2014 6:07 PM
> >>>> To: freebsd-stable@freebsd.org
> >>>> Subject: Re: vmdaemon CPU usage and poor performance in 10.0-RELEASE
> >>>> On 08/18/2014 16:29, Polyack, Steve wrote:
> >>>>>> -----Original Message-----
> >>>>>> From: owner-freebsd-stable@freebsd.org
> >>>>>> [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Alan Cox
> >>>>>> Sent: Monday, August 18, 2014 3:05 PM
> >>>>>> To: freebsd-stable@freebsd.org
> >>>>>> Subject: Re: vmdaemon CPU usage and poor performance in 10.0-RELEASE
> >>>>>> On 08/18/2014 13:42, Polyack, Steve wrote:
> >>>>>>> Excuse my poorly formatted reply at the moment, but this seems
> >>>>>>> to have fixed our problems. I'm going to update the bug report
> >>>>>>> with a note.
> >>>>>>> Thanks Alan!
> >>>>>> You're welcome. And, thanks for letting me know of the outcome.
> >>>>>>
> >>>>> Actually, I may have spoken too soon, as it looks like we're
> >>>>> seeing vmdaemon tying up the system again:
> >>>>> root   6 100.0  0.0  0  16  -  DL  Wed04PM  4:37.95 [vmdaemon]
> >>>>> Is there anything I can check to help narrow down what may be the
> >>>>> problem? KTrace/truss on the "process" doesn't give any
> >>>>> information, I suppose because it's actually a kernel thread.
> >>>>
> >>>> Can you provide the full output of top? Is there anything unusual
> >>>> about the hardware or software configuration?
> >>> This may have just been a fluke (maybe NFS caching the old
> >>> vm_pageout.c during the first source build). We've rebuilt and are
> >>> monitoring it now.
> >>>
> >>> The hardware consists of a few Dell PowerEdge R720xd servers with
> >>> 256GB of RAM and an array of SSDs (no ZFS). 64GB is dedicated to
> >>> postgres shared_buffers right now. FreeBSD 10, PostgreSQL 9.3,
> >>> Slony-I v2.2.2, and redis-2.8.11 are all in use here. I can't say
> >>> that anything is unusual about the configuration.
> >>>
> >> We are still seeing the issue. It seems to manifest once the "Free"
> >> memory gets under 10GB (of 256GB on the system), even though ~200GB
> >> of this is classified as Inactive. For us, this was about 7 hours of
> >> database activity (initial replication w/ slony). Right now vmdaemon
> >> is consuming 100% CPU and shows 671:34 of CPU time, when it showed
> >> 0:00 up until the problem manifested.
> >> The full top output (that fits on my screen) is below:
> >>
> >> last pid: 62309;  load averages: 4.05, 4.24, 4.10  up 0+22:34:31  09:08:43
> >> 159 processes: 8 running, 145 sleeping, 1 waiting, 5 lock
> >> CPU: 14.5% user, 0.0% nice, 4.9% system, 0.0% interrupt, 80.5% idle
> >> Mem: 26G Active, 216G Inact, 4122M Wired, 1178M Cache, 1632M Buf, 2136M Free
> >> Swap: 32G Total, 32G Free
> >>
> >>   PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME     WCPU COMMAND
> >>    11 root       32 155 ki31     0K   512K CPU31  31 669.6H 2934.23% idle
> >>     6 root        1 -16    -     0K    16K CPU19  19 678:57  100.00% vmdaemon
> >>  1963 pgsql       1  45    0 67538M   208M CPU0    0 121:46   17.38% postgres
> >>  2037 pgsql       1  77    0 67536M  2200K *vm ob 14   6:24   15.97% postgres
> >>  1864 pgsql       1  31    0 67536M  1290M semwai  4 174:41   15.19% postgres
> >>  1996 pgsql       1  38    0 67538M   202M semwai 16 120:27   15.09% postgres
> >>  1959 pgsql       1  39    0 67538M   204M CPU27  27 117:30   15.09% postgres
> >>  1849 pgsql       1  32    0 67536M  1272M semwai 23 126:22   13.96% postgres
> >>  1997 pgsql       1  31    0 67538M   206M CPU30  30 122:26   11.77% postgres
> >>  2002 pgsql       1  34    0 67538M   182M sbwait 11  55:20   11.28% postgres
> >>  1961 pgsql       1  32    0 67538M   206M CPU12  12 121:47   10.99% postgres
> >>  1964 pgsql       1  30    0 67538M   206M semwai 28 122:08    9.86% postgres
> >>  1962 pgsql       1  29    0 67538M  1286M sbwait  2  45:49    7.18% postgres
> >>  1752 root        1  22    0 78356K  8688K CPU2    2 175:46    6.88% snmpd
> >>  1965 pgsql       1  25    0 67538M   207M semwai  9 120:55    6.59% postgres
> >>  1960 pgsql       1  23    0 67538M   177M semwai  6  52:42    4.88% postgres
> >>  1863 pgsql       1  25    0 67542M   388M semwai 25   9:12    2.20% postgres
> >>  1859 pgsql       1  22    0 67538M  1453M *vm ob 20   6:13    2.10% postgres
> >>  1860 pgsql       1  22    0 67538M  1454M sbwait  8   6:08    1.95% postgres
> >>  1848 pgsql       1  21    0 67586M 66676M *vm ob 30 517:07    1.66% postgres
> >>  1856 pgsql       1  22    0 67538M   290M *vm ob 15   5:39    1.66% postgres
> >>  1846 pgsql       1  21    0 67538M   163M sbwait 15   5:46    1.46% postgres
> >>  1853 pgsql       1  21    0 67538M   110M sbwait 30   8:54    1.17% postgres
> >>  1989 pgsql       1  23    0 67536M  5180K sbwait 18   1:41    0.98% postgres
> >>     5 root        1 -16    -     0K    16K psleep  6   9:33    0.78% pagedaemon
> >>  1854 pgsql       1  20    0 67538M   338M sbwait 22   5:38    0.78% postgres
> >>  1861 pgsql       1  20    0 67538M   286M sbwait 15   6:13    0.68% postgres
> >>  1857 pgsql       1  20    0 67538M  1454M semwai 10   6:19    0.49% postgres
> >>  1999 pgsql       1  36    0 67538M   156M *vm ob 28 120:56    0.39% postgres
> >>  1851 pgsql       1  20    0 67538M   136M sbwait 22   5:48    0.39% postgres
> >>  1975 pgsql       1  20    0 67536M  5688K sbwait 25   1:40    0.29% postgres
> >>  1858 pgsql       1  20    0 67538M   417M sbwait  3   5:55    0.20% postgres
> >>  2031 pgsql       1  20    0 67536M  5664K sbwait  5   3:26    0.10% postgres
> >>  1834 root       12  20    0 71892K 12848K select 20  34:05    0.00% slon
> >>    12 root       78 -76    -     0K  1248K WAIT    0  25:47    0.00% intr
> >>  2041 pgsql       1  20    0 67536M  5932K sbwait 14  12:50    0.00% postgres
> >>  2039 pgsql       1  20    0 67536M  5960K sbwait 17   9:59    0.00% postgres
> >>  2038 pgsql       1  20    0 67536M  5956K sbwait  6   8:21    0.00% postgres
> >>  2040 pgsql       1  20    0 67536M  5996K sbwait  7   8:20    0.00% postgres
> >>  2032 pgsql       1  20    0 67536M  5800K sbwait 22   7:03    0.00% postgres
> >>  2036 pgsql       1  20    0 67536M  5748K sbwait 23   6:38    0.00% postgres
> >>  1812 pgsql       1  20    0 67538M 59185M select  1   5:46    0.00% postgres
> >>  2005 pgsql       1  20    0 67536M  5788K sbwait 23   5:14    0.00% postgres
> >>  2035 pgsql       1  20    0 67536M  4892K sbwait 18   4:52    0.00% postgres
> >>  1852 pgsql       1  21    0 67536M  1230M semwai  7   4:47    0.00% postgres
> >>    13 root        3  -8    -     0K    48K -      28   4:46    0.00% geom
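Since ktrace and truss can't usefully attach to a kernel thread like vmdaemon, procstat(1) is the tool that can still show where it is spinning, by dumping the thread's in-kernel call stack. A minimal sketch, assuming vmdaemon is PID 6 as shown in the top output above:

    #!/bin/sh
    # Sample vmdaemon's kernel stack once a second for ten samples.
    # PID 6 is taken from the top output above; adjust if it differs.
    for i in $(seq 1 10); do
        procstat -kk 6
        sleep 1
    done

Each sample prints one line per thread with its kernel call chain; a thread stuck busy-looping should show the same few functions in nearly every sample.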
> >>
> >>
> > Another thing I've noticed is that this sysctl vm.stats counter is
> > increasing fairly rapidly:
> > # sysctl vm.stats.vm.v_pdpages && sleep 1 && sysctl vm.stats.vm.v_pdpages
> > vm.stats.vm.v_pdpages: 3455264541
> > vm.stats.vm.v_pdpages: 3662158383
>
> I'm not sure what that tells us, because both the page daemon and the
> vm ("swap") daemon increment this counter.
>
> > Also, to demonstrate what kind of problems this seems to cause:
> > # time sleep 1
> >
> > real    0m18.288s
> > user    0m0.001s
> > sys     0m0.004s
>
> If you change the sysctl vm.swap_enabled to 0, how does your system
> behave?

Setting vm.swap_enabled to 0 made the problem clear up almost instantly. vmdaemon is back to 0.00% CPU usage and the system is responsive once again.
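For reference, the workaround amounts to a single sysctl; a minimal sketch of applying it immediately and keeping it across reboots (the /etc/sysctl.conf entry is standard FreeBSD practice, not something stated in this thread):

    # Disable process swapping at runtime; vmdaemon should go idle:
    sysctl vm.swap_enabled=0

    # Persist the setting across reboots:
    echo 'vm.swap_enabled=0' >> /etc/sysctl.conf

    # Optional check (Bourne sh syntax): sample the per-second growth of
    # the counter discussed above. Since the page daemon also increments
    # it, expect it to climb more slowly rather than stop entirely.
    a=$(sysctl -n vm.stats.vm.v_pdpages); sleep 1
    b=$(sysctl -n vm.stats.vm.v_pdpages); echo "pdpages/sec: $((b - a))"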