Date:      Mon, 26 Sep 2005 18:44:53 -0400
From:      Kris Kennaway <kris@obsecurity.org>
To:        David Xu <davidxu@freebsd.org>
Cc:        Emanuel Strobl <Emanuel.strobl@gmx.net>, freebsd-current@freebsd.org, Kris Kennaway <kris@obsecurity.org>
Subject:   Re: 4BSD/ULE numbers...
Message-ID:  <20050926224453.GB39901@xor.obsecurity.org>
In-Reply-To: <43387811.1090308@freebsd.org>
References:  <200509261847.35558@harrymail> <20050926174738.GA57284@xor.obsecurity.org> <43387811.1090308@freebsd.org>

On Tue, Sep 27, 2005 at 06:37:05AM +0800, David Xu wrote:
> Kris Kennaway wrote:
>=20
> >On Mon, Sep 26, 2005 at 06:47:27PM +0200, Emanuel Strobl wrote:
> >=20
> >
> >>Hello,
> >>
> >>I tried ULE with BETA5 and for me it felt a bit sluggish when making=20
> >>ports.
> >>So I did some "realworld" simulation and compared 4BSD/ULE to see what=
=20
> >>numbers tell me. And the prooved my feeling right.
> >>It seems that ULE is priorizing nice a little higher, but in general th=
e=20
> >>output of the 4 BSD machine is higher and finishing the tests took not =
so=20
> >>long as with ULE, especially the "make configure" differs horribly.
> >>  =20
> >>
> >
> >That's consistent with my testing.  ULE seems a bit more stable now in
> >6.0 (except on my large SMP machines, which reboot spontaneously under
> >moderate load), but it doesn't perform as well as 4BSD under real
> >application workloads.
> >
> >Kris
> >
> I am fiddling with it, although I don't know when I will finish.
> In fact, the ULE code in my perforce branch has the same performance
> as 4BSD, at least on my dual PIII machine. The real advantage is that
> ULE can be HTT-friendly if it is done correctly, for example
> physical/logical CPU balancing: if the system has two HTT-enabled
> physical CPUs and two CPU-hog threads, you definitely want the two
> threads to run on the two physical CPUs, not on the same physical
> CPU, but currently it does not do that. Another advantage will come
> when sched_lock is pushed down; the current sched_lock is a Giant-like
> lock shared between a large number of CPUs, and I don't know when it
> will be pushed down. sched_lock is abused in many places; those uses
> really could be replaced by another spin lock. :)

I'd love a way to measure sched_lock contention... I'm sure it's a
factor on e.g. my machines with >=10 CPUs, but mutex profiling doesn't
see it because it's a spinlock.

Kris




