Date:      Mon, 16 Jun 2003 18:20:00 -0400 (EDT)
From:      Daniel Eischen <eischen@pcnet.com>
To:        Julian Elischer <julian@elischer.org>
Cc:        Andy Ritger <ARitger@nvidia.com>
Subject:   RE: NVIDIA and TLS
Message-ID:  <Pine.GSO.4.10.10306161817380.11847-100000@pcnet5.pcnet.com>
In-Reply-To: <Pine.BSF.4.21.0306161450140.19977-100000@InterJet.elischer.org>

On Mon, 16 Jun 2003, Julian Elischer wrote:
> 
> On Mon, 16 Jun 2003, Gareth Hughes wrote:
> 
> > On Mon, 16 Jun 2003, Andy Ritger wrote:
> > > 
> > > So from an OpenGL point of view, here are several alternatives that
> > > I see for at least the near term:
> > > 
> > >     - make NVIDIA's OpenGL implementation not thread-safe (just
> > >       use global data rather than thread-local data)
> > > 
> > >     - accept the performance hit of using pthread_getspecific()
> > >       on FreeBSD.  From talking to other OpenGL engineers,
> > >       conservative estimates of the performance impact on
> > >       applications like viewperf range from 10% - 15%.  I'd like
> > >       to quantify that, but certainly there will be a performance
> > >       penalty.
> > 
> > And these are *very* conservative estimates -- you're essentially adding a
> > function call into a path that is, on average, less than ten instructions
> > per OpenGL API call, where the number of API calls per frame is upward of 3

I see this as a problem with the OpenGL API.  You're trying
to make something thread-safe that isn't by its nature.
I would rather see OpenGL-MT with new interfaces that
are by nature thread-safe.

-- 
Dan Eischen


