From: Joe Marcus Clarke <marcus@marcuscom.com>
To: Daniel Eischen
Cc: Alexander Nedotsukov, freebsd-threads@freebsd.org
Subject: Re: Question about our default pthread stack size
Date: Mon, 22 Nov 2004 01:28:39 -0500
Message-Id: <1101104919.95599.4.camel@gyros>
Organization: MarcusCom, Inc.
List-Id: Threading on FreeBSD

On Mon, 2004-11-22 at 01:15 -0500, Daniel Eischen wrote:
> On Mon, 22 Nov 2004, Alexander Nedotsukov wrote:
>
> > Daniel Eischen wrote:
> >
> > Heavy stack usage is not as evil as it may look. That "pig" will
> > have to allocate a big chunk of memory anyway, and the usual options
> > here are to grab physical memory pages on the heap through malloc()
> > or to do it on the stack. In neither case is the memory immediately
> > allocated.
> > But the malloc() route is slower, which is why multimedia
> > processing code may prefer on-stack allocation. Another
> > disadvantage of malloc()/free() for a C program is potential memory
> > leakage. People may feel safer when they get automatic memory
> > disposal instead of having to check every return path.
> > One more thing about "Do they know their own stack space
> > requirements": no, they don't. The whole idea, I believe, was to
> > let them not care at all. I very much doubt that such research for
> > something like OpenOffice would be worth the effort. So it is more
> > practical to follow a Bill-style "no one will ever need more than
> > xxx" barrier, where xxx is 1MB in our 32bit case :-)
>
> You're missing my point. There is a perfectly good POSIX standard for
> setting thread stack size -- you don't even need to allocate it
> yourself. Using pthread_attr_setstacksize() is more portable than
> relying on the OS to guess at what an application's stack size
> requirements are. We may increase it to 1MB now, but what happens
> when that is not enough? And you _know_ one day, perhaps sooner than
> you realize, that it won't be enough.

Okay, but I still don't see the reason for not increasing the stack
size now to be more in line with Solaris and Linux. We seem to be
adopting other Solaris-like threading attributes (based on some of your
previous emails), and this would help other popular software packages
"just work" on FreeBSD.

> I've searched the GTK archives and can see that the stack size was
> removed from the thread pool API, but not from creating other
> threads. The reason for removing it seems silly, but such is life...

Right. You can still create individual threads and specify a
per-thread stack size. However, this cannot be done with GThreadPools.
Joe

--
PGP Key : http://www.marcuscom.com/pgp.asc