From owner-freebsd-hackers@FreeBSD.ORG Thu Nov  1 14:19:58 2012
Subject: Re: Threaded 6.4 code compiled under 9.0 uses a lot more memory?..
From: Ian Lepore
To: David Xu
Cc: Konstantin Belousov, freebsd-hackers@freebsd.org, Alfred Perlstein,
    Karl Pielorz
Date: Thu, 01 Nov 2012 08:19:54 -0600
Message-ID: <1351779594.1120.128.camel@revolution.hippie.lan>
In-Reply-To: <5091DA90.7050507@freebsd.org>
References: <20121030182727.48f5e649@X220.ovitrap.com>
    <20121030194307.57e5c5a3@X220.ovitrap.com>
    <615577FED019BCA31EC4211B@Octca64MkIV.tdx.co.uk>
    <509012D3.5060705@mu.org> <20121030175138.GA73505@kib.kiev.ua>
    <20121031140630.GE73505@kib.kiev.ua> <5091DA90.7050507@freebsd.org>
List-Id: Technical Discussions relating to FreeBSD

On Thu, 2012-11-01 at 10:12 +0800, David Xu wrote:
> On 2012/10/31 22:44, Karl Pielorz wrote:
> >
> > --On 31 October 2012 16:06 +0200 Konstantin Belousov wrote:
> >
> >> Since you neglected to provide the verbatim output of procstat,
> >> nothing conclusive can be said. Obviously, you can make an
> >> investigation on your own.
> >
> > Sorry - when I ran it this morning the output was several hundred
> > lines - I didn't want to post all of that to the list; 99% of the
> > lines are very similar. I can email it to you off-list if having the
> > whole lot will help.
> >
> >>> Then there's a bunch of 'large' blocks, e.g.:
> >>>
> >>>   PID           START             END PRT  RES PRES REF SHD FL TP PATH
> >>>  2010     0x801c00000     0x802800000 rw- 2869    0   4   0 ---- df
> >>>  2010     0x802800000     0x803400000 rw- 1880    0   1   0
> >>
> >> Most likely, these are malloc arenas.
> >
> > Ok, that's the heaviest usage.
> >
> >>> Then lots of 'little' blocks:
> >>>
> >>>  2010  0x7ffff0161000  0x7ffff0181000 rw-   16    0   1   0 ---D df
> >>
> >> And those are thread stacks.
> >
> > Ok, lots of those (lots of threads going on) - but they're all
> > pretty small.
>
> Note that libc_r's thread stack is 64K, while libthr has 1M bytes
> per-thread.

That would help explain the large increase in virtual size, but not the
increase in resident size, right?  In other words, there's nothing
inherent in libthr that makes it use more stack; it just allocates more
vmspace to allow greater potential growth.

Hmmm, actually the chunks said to be thread stacks above are neither 64K
nor 1M - they're 128K.  The malloc arenas are 12M, which seems like an
unusual value.  I haven't looked inside jemalloc at all; maybe that's
normal.

-- 
Ian