Date:      Fri, 14 Jul 2000 17:16:49 -0700 (PDT)
From:      "Rodney W. Grimes" <freebsd@gndrsh.dnsmgr.net>
To:        mckay@thehub.com.au (Stephen McKay)
Cc:        freebsd-current@FreeBSD.ORG
Subject:   Re: dc driver and underruns (was: Strangeness with 4.0-S)
Message-ID:  <200007150016.RAA18115@gndrsh.dnsmgr.net>
In-Reply-To: <200007140251.MAA07785@dungeon.home> from Stephen McKay at "Jul 14, 2000 12:51:14 pm"

[cc: trimmed to -current]

> >>>Does anyone here actually measure these latencies?  I know for a fact
> >>>that nothing I've ever done would or could be affected by extra latencies
> >>>that are as small as the ones we are discussing.  Does anybody at all
> >>>depend on the start-transmitting-before-DMA-completed feature we are
> >>>discussing?
> >> 
> >> I don't like the idea of removing that feature.  Perhaps it should be a
> >> sysctl or ifconfig option, but it should definitely remain available.
> >> Those minute latencies are critical to those of us who use MPI for
> >> complex parallel calculations.
> >
> >I have to agree here.  The store and forward adds an approximate
> >11uS (by theory under ideal conditions 1500bytes@132MB/s = 11uS,
> >practice actually makes this worse as typical PCI does something
> >less than 100MB/s or 15uS) to a 120uS packet time on the wire (again,
> >ideal, but here given that switches, and in fact often cut-through
> >switches, are used for these types of things, ideal and practice
> >are very close.)
> >
> >I don't think these folks, nor myself, are wanting^H^H^H^H^H^H^Hilling
> >to give up 12.5%.
> 
> OK.  It seems that repairing the feature, rather than disabling it, is
> the most popular option.  Still, I am quite interested in finding anyone
> who actually measures these things, and is affected by them.

As already pointed out, anyone running computational code on a compute
cluster that is passing data around is directly affected by this.  I know
of at least 3 sites whose ``operational'' status would be destroyed by
converting to store and forward.  They have gone the extra mile to
use cut-through ethernet switches, and I can assure you that an 11uS
delay per packet would have a significant impact on cluster performance.
They don't directly measure these values, but nonetheless the delay
would have an impact.

Also, those using dc21x4x cards in high-load router and/or firewall
situations would notice this, though it would be harder to measure (well,
actually a pps test should show it quite clearly: my 12.5% above was
based on full-size packets, and the percentage grows as packet size
is decreased).
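
For reference, the arithmetic behind that 12.5% works out like this as a
trivial stand-alone C program (illustrative only; it just restates the
full-size-frame numbers quoted above, for an ideal 132MB/s and a more
typical 100MB/s PCI rate against 100Mbit/s wire time):

#include <stdio.h>

int
main(void)
{
    double frame = 1500.0;                 /* bytes                  */
    double wire_us = frame * 8.0 / 100.0;  /* 100Mbit/s -> 120uS     */
    double pci[] = { 132.0, 100.0 };       /* MB/s: ideal, typical   */
    int i;

    for (i = 0; i < 2; i++) {
        double dma_us = frame / pci[i];    /* 1 MB/s == 1 byte/uS    */
        printf("PCI %3.0f MB/s: dma %5.2f uS vs %5.1f uS on the wire"
            " = %4.1f%% added latency\n",
            pci[i], dma_us, wire_us, 100.0 * dma_us / wire_us);
    }
    return (0);
}

That prints roughly 9.5% for the ideal case and 12.5% for the 100MB/s
case, which is where the figure above comes from.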

> These very
> same people might be able to trace why we get the underruns in the first
> place.

The sites I know of don't get these messages :-).
I have noticed that I see them more often with the dc driver than I
do with the de driver, i.e. now that I am upgrading more and more of
our systems from 3.x to 4.x I have started to see these on machines
that have never reported them before.  Now this may be the driver,
or it could be some other part of the system that has changed.

>  I suspect an interaction between the ATA driver and VIA chipsets,
> because other than the network, that's all that is operating when I see
> the underruns.  And my Celeron with a ZX chipset is immune.

I've seen them on just about everything; the chipset doesn't seem to
matter, and IDE or SCSI doesn't seem to matter.

> Back to the technical, for a moment.  I have verified that stopping the
> transmitter on the 21143 is both sufficient and necessary to enable the
> thresholds to be set.  I have code that works on my machine.  I intend
> to commit it when I think it looks neat enough.

Good.  That should help the folks with the major complaint of 2 to 3 second
network outages when one of these occurs.  It may also be possible to simply
start out one step further down on the FIFO level and eliminate the message
for most people.  (When I do see these it usually only happens once or
maybe twice, then the box is silent about it from then on.  I have never
seen a box back off to store-and-forward mode that didn't have some other
serious hardware-related problem.)
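
For the archives, the sort of recovery path being discussed looks
roughly like the sketch below.  This is illustrative only, not Stephen's
actual patch; the softc fields and the register/bit names (DC_NETCFG,
DC_NETCFG_TX_ON, DC_TXTHRESH_MAX and friends) are placeholders rather
than the real dc driver identifiers.  The point is simply: stop the
transmitter, wait for idle, bump the threshold one step (falling back to
store-and-forward only at the top), then restart.

static void
dc_tx_underrun(struct dc_softc *sc)
{
    u_int32_t netcfg;
    int i;

    netcfg = CSR_READ_4(sc, DC_NETCFG);

    /*
     * The 21143 only honours a threshold change while the
     * transmitter is stopped, so stop it and wait for idle.
     */
    CSR_WRITE_4(sc, DC_NETCFG, netcfg & ~DC_NETCFG_TX_ON);
    for (i = 0; i < DC_TIMEOUT; i++) {
        if (CSR_READ_4(sc, DC_STATUS) & DC_STATUS_TX_IDLE)
            break;
        DELAY(10);
    }

    /*
     * Back off one FIFO threshold step; only fall all the way
     * back to store-and-forward once the largest threshold has
     * also underrun.
     */
    if (sc->dc_txthresh < DC_TXTHRESH_MAX)
        sc->dc_txthresh += DC_TXTHRESH_INC;
    else
        netcfg |= DC_NETCFG_STORENFWD;

    netcfg &= ~DC_NETCFG_TX_THRESH;     /* threshold bit field */
    netcfg |= sc->dc_txthresh;

    /* Restart the transmitter with the new setting. */
    CSR_WRITE_4(sc, DC_NETCFG, netcfg | DC_NETCFG_TX_ON);
}

Starting sc->dc_txthresh one step higher at attach time would be the
"start out one step further down" idea above.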

> Getting even more technical, it appears to me that the current driver
> instructs the 21143 to poll for transmit packets (ie a small DMA)
> every 80us even if there are none to be sent.  I don't know what percentage
> of bus time this might be, or even how to calculate it (got some time Rod?)

I'll have to look at that.  If it is a simple 32-bit read every 80uS,
that's something like .1515% of the PCI bandwidth, which shouldn't
matter much.  (I assumed a simple 4-cycle PCI operation.)  Just how big
is this DMA operation every 80uS?
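
For reference, the .1515% figure falls out of this bit of arithmetic
(again illustrative only, assuming a 33MHz PCI clock and a single
4-cycle transaction per poll):

#include <stdio.h>

int
main(void)
{
    double cycle_us  = 1.0 / 33.0;      /* 33MHz PCI clock      */
    double poll_us   = 4.0 * cycle_us;  /* 4-cycle transaction  */
    double period_us = 80.0;            /* poll interval        */

    printf("%.4f uS every %.0f uS = %.4f%% of the PCI bus\n",
        poll_us, period_us, 100.0 * poll_us / period_us);
    return (0);
}

That comes out to about 0.12uS every 80uS, i.e. 0.1515% of the bus.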

-- 
Rod Grimes - KD7CAX @ CN85sl - (RWG25)               rgrimes@gndrsh.dnsmgr.net

