Date: Sun, 21 Mar 1999 10:35:14 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Terry Lambert <tlambert@primenet.com>
Cc: hasty@rah.star-gate.com, wes@softweyr.com, ckempf@enigami.com,
    wpaul@skynet.ctr.columbia.edu, freebsd-hackers@FreeBSD.ORG
Subject: Re: Gigabit ethernet -- what am I doing wrong?
Message-ID: <199903211835.KAA13904@apollo.backplane.com>
References: <199903211804.LAA11607@usr06.primenet.com>
:
:You mean "most recent network cards". Modern network cards have memory
:that can be DMA'ed into by other modern network cards.
:
:Moral: being of later manufacture makes you more recent, but being
: capable of higher data rates is what makes you modern.
:
: Terry Lambert
: terry@lambert.org
It's a nice idea, but there are lots of problems with card-to-card
DMA. If you have only two network ports in your system (note: I said
ports, not cards), I suppose you could get away with it. Otherwise you
need something significantly more sophisticated.
The problem is that you hit one of the most common situations that occurred
in early routers: DMA blockages to one destination screwing over others.
For example, say you have four network ports and you are receiving packets
which must be distributed to the other ports. Let's say network port #1
receives packets A, B, C, D, and E. Packet A must be distributed to
port #2, packets B-D must be distributed to port #3, and packet E
must be distributed to port #4.
What happens when the DMA to port #2 blocks due to a temporary overcommit
of packets being sent to port #2? Or due to a collision/retry situation
occurring on port #1? What happens is that packets B-E stick around
in port #1's input queue and don't get sent to ports 3 and 4, even if
ports 3 and 4 are idle.
Even worse, what happens to poor packet E, which can't be sent to port 4
until all the mess from packets A-D is dealt with? Major latency occurs
at best, packet loss occurs at worst.
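
To make the failure mode concrete, here is a tiny hypothetical C sketch
(illustration only, not driver code for any real card). It models port #1's
input queue as a single FIFO: the moment the head packet's destination cannot
accept a DMA, everything behind it waits, even though those destinations are
idle.

    /*
     * Hypothetical illustration only.  A single receive FIFO on port #1:
     * if the packet at the head cannot be DMA'd to its (busy) destination,
     * everything behind it waits, even when other destinations are idle.
     */
    #include <stdio.h>

    #define NPORTS  4

    struct pkt {
        char name;      /* A..E from the example above */
        int  dst;       /* destination port number */
    };

    int
    main(void)
    {
        /* port #2 is temporarily overcommitted and cannot accept DMA */
        int busy[NPORTS + 1] = { 0, 0, 1, 0, 0 };

        /* port #1's input queue, in arrival order */
        struct pkt fifo[] = {
            { 'A', 2 }, { 'B', 3 }, { 'C', 3 }, { 'D', 3 }, { 'E', 4 }
        };
        int n = sizeof(fifo) / sizeof(fifo[0]);
        int head = 0;

        /* one scheduling pass: the FIFO must drain strictly in order */
        while (head < n && !busy[fifo[head].dst]) {
            printf("sent %c to port %d\n", fifo[head].name, fifo[head].dst);
            head++;
        }
        if (head < n)
            printf("packet %c blocked on port %d; %d packets for idle "
                "ports stuck behind it\n",
                fifo[head].name, fifo[head].dst, n - head - 1);
        return (0);
    }
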
For each port in your system, you need a *minimum* per-port buffer size
sufficient to cover the maximum latency you wish to allow times the number
of ports in the router. If you have four 1 Gigabit ports and wish to allow
latencies of up to 20 ms, each port would require 8 MBytes of buffer space,
and you *still* don't solve the problem that occurs if one port backs up,
short of throwing away the packets destined to other ports even if the
other ports are idle.
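
(Roughly where the 8 MBytes figure comes from, using round numbers:

    1 Gbit/sec                  ~= 125 MBytes/sec of line rate
    125 MBytes/sec * 0.020 sec  ~= 2.5 MBytes buffered per destination
    2.5 MBytes * 3 other ports  ~= 7.5 MBytes, call it 8 MBytes per port)
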
Backups can also introduce additional latencies that are not the fault
of the destination port.
DEC Gigaswitch switches suffered from exactly this problem -- MAE-WEST
had serious problems for several years, in fact, due to overcommits on
a single port out of dozens.
There are solutions to this sort of problem, but all such solutions require
truly significant on-card buffer memory... 8 MBytes minimum with my
example above. In order to handle card-to-card DMA, cards must also
implement sophisticated DMA scheduling to prevent blockages from
interfering with other cards.
With the correct centralized scheduling, the amount of on-card buffer
memory can be reduced somewhat, though not by much.
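
One common shape for that kind of scheduling is a separate queue per
destination on each input port -- what the switch literature calls virtual
output queues -- so the scheduler can skip a backed-up destination and keep
the idle ones busy. Again a hypothetical sketch, not code from any real card:

    /*
     * Hypothetical sketch of per-destination (virtual output) queues on one
     * input port.  A backed-up destination only stalls its own queue; the
     * scheduler keeps servicing the queues whose destinations are free.
     */
    #include <stdio.h>

    #define NPORTS  4
    #define QDEPTH  16

    struct voq {
        char pkts[QDEPTH];      /* queued packet names, in order */
        int  count;
    };

    static struct voq oq[NPORTS + 1];   /* one queue per destination port */

    static void
    enqueue(int dst, char name)
    {
        oq[dst].pkts[oq[dst].count++] = name;
    }

    int
    main(void)
    {
        int busy[NPORTS + 1] = { 0, 0, 1, 0, 0 };  /* port #2 overcommitted */
        int dst, i;

        /* same arrivals as the example above, sorted by destination */
        enqueue(2, 'A');
        enqueue(3, 'B'); enqueue(3, 'C'); enqueue(3, 'D');
        enqueue(4, 'E');

        /* one scheduling pass: skip any destination that cannot accept DMA */
        for (dst = 2; dst <= NPORTS; dst++) {
            if (busy[dst])
                continue;
            for (i = 0; i < oq[dst].count; i++)
                printf("sent %c to port %d\n", oq[dst].pkts[i], dst);
            oq[dst].count = 0;
        }
        printf("%d packet(s) still queued for port 2\n", oq[2].count);
        return (0);
    }
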
Industrial-strength routers that implement crossbars or other high
speed switch matrices have to solve the ripple-effect-blockage problem.
It is not a trivial problem to solve. It *IS* a necessary problem to solve,
since direct card-to-card transfers are much more efficient than transfers
through a common shared-memory store. It is *NOT* a problem that PC
architectures can deal with well, though. It is definitely *NOT* a problem
that PCI cards are usually able to deal with, due to the lack of DMA
channels and the lack of a system-wide scheduling protocol.
-Matt
