Date:      Tue, 07 Aug 2001 01:58:21 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        Bosko Milekic <bmilekic@technokratis.com>
Cc:        Matt Dillon <dillon@earth.backplane.com>, Zhihui Zhang <zzhang@cs.binghamton.edu>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: Allocate a page at interrupt time
Message-ID:  <3B6FADAD.C8CC14C5@mindspring.com>
References:  <Pine.SOL.4.21.0108031432070.28997-100000@opal> <200108051955.f75Jtk882156@earth.backplane.com> <3B6F8A6C.B95966B7@mindspring.com> <20010807031832.A46112@technokratis.com>

Bosko Milekic wrote:
> > I keep wondering about the sagacity of running interrupts in
> > threads... it still seems like an incredibly bad idea to me.
> >
> > I guess my major problem with this is that by running in
> > threads, it becomes nearly impossible to avoid receiver
> > livelock situations using any of the classical techniques
> > (e.g. Mogul's work, etc.).
> 
>         References to published works?

Just do an NCSTRL search on "receiver livelock"; you will get
over 90 papers...

	http://ncstrl.mit.edu/

See also the list of participating institutions:

	http://ncstrl.mit.edu/Dienst/UI/2.0/ListPublishers

It won't be that hard to find... Mogul has "only" published 92
papers.  8-)
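
For anyone who doesn't want to dig up the papers: the core of the
classical fix (Mogul and Ramakrishnan, "Eliminating Receive Livelock
in an Interrupt-Driven Kernel") is to take one interrupt, mask the
source, and then drain the device from a polling loop with a bounded
per-pass quota, so that packet processing can never starve the rest
of the system.  A minimal C sketch of the idea follows; the nic_*
helpers, schedule_poll(), and struct nic_softc are hypothetical
stand-ins for a driver's own routines, not any real kernel API.

/*
 * Sketch of Mogul/Ramakrishnan-style livelock avoidance:
 * interrupt once, then poll with a quota until the queue drains.
 * All nic_* helpers below are hypothetical.
 */
#define RX_QUOTA	32	/* max packets handled per poll pass */

struct nic_softc;				/* hypothetical device state */
static void nic_disable_rx_intr(struct nic_softc *);
static void nic_enable_rx_intr(struct nic_softc *);
static int  nic_rx_ready(struct nic_softc *);
static void nic_process_one_packet(struct nic_softc *);
static void schedule_poll(struct nic_softc *);	/* defer to a poll pass */

static void
nic_rx_intr(void *arg)
{
	struct nic_softc *sc = arg;

	nic_disable_rx_intr(sc);	/* no more interrupts while polling */
	schedule_poll(sc);		/* run nic_rx_poll() soon */
}

static void
nic_rx_poll(struct nic_softc *sc)
{
	int handled = 0;

	while (handled < RX_QUOTA && nic_rx_ready(sc)) {
		nic_process_one_packet(sc);
		handled++;
	}

	if (nic_rx_ready(sc)) {
		/* More work queued: stay in polled mode, but yield first. */
		schedule_poll(sc);
	} else {
		/* Drained: go back to interrupt-driven operation. */
		nic_enable_rx_intr(sc);
	}
}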


> > It also has the unfortunate property of locking us into virtual
> > wire mode, when in fact Microsoft demonstrated that wiring down
> > interrupts to particular CPUs was good practice, in terms of
> > assuring best performance.  Specifically, running in virtual
> 
>         Can you point us at any concrete information that shows
> this?  Specifically, without being Microsoft-biased (as is most
> "data" published by Microsoft)? -- i.e. preferably third-party
> performance testing that identifies wiring down of interrupts to
> particular CPUs as _the_ performance advantage.

FreeBSD was tested, along with Linux and NT, by Ziff Davis
Labs in Foster City, with the participation of Jordan
Hubbard and Mike Smith.  You can ask either of them for the
results of the test; only the Linux and NT numbers were
actually released.  This was done to provide a non-biased
baseline, in reaction to the Mindcraft benchmarks, where
Linux showed so poorly.  They ran quad Ethernet cards with
quad CPUs; the NT drivers wired the cards down to separate
INT A/B/C/D interrupts, one per CPU.
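
As an aside, that kind of wiring-down is expressible from userland
on a FreeBSD new enough to have cpuset_setaffinity(2) -- an interface
that post-dates this thread -- and this is only a rough sketch, with
made-up IRQ and CPU numbers (see vmstat -i for real ones):

#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>
#include <stdlib.h>

/* Bind an IRQ to a single CPU, the same way "cpuset -x irq -l cpu" does. */
int
main(int argc, char **argv)
{
	cpuset_t mask;
	int irq = argc > 1 ? atoi(argv[1]) : 16;	/* placeholder IRQ */
	int cpu = argc > 2 ? atoi(argv[2]) : 2;		/* placeholder CPU */

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);

	if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_IRQ, irq,
	    sizeof(mask), &mask) != 0)
		err(1, "cpuset_setaffinity(irq %d)", irq);
	return (0);
}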


> > wire mode means that all your CPUs get hit with the interrupt,
> > whereas running with the interrupt bound to a particular CPU
> > reduces the overall overhead.  Even what we have today, with
> 
>         Obviously.

I mention it because this is the direction FreeBSD appears
to be moving in.  Right now, Intel is shipping with separate
PCI busses; there is one motherboard from their ServerWorks
division that has 16 separate PCI busses -- which means that
you can do simultaneous gigabit card DMA to and from memory,
without running into bus contention, so long as the memory is
logically separate.  NT can use this hardware to its full
potential; FreeBSD as it exists cannot, and FreeBSD as it
appears to be heading today (interrupt threads, etc.) seems
to be in the same boat as Linux, et al.  PCI-X will only
make things worse (8.4 gigabit burst rate).

-- Terry




