Date:      Sat, 19 Aug 2000 14:33:28 +1000 (EST)
From:      Stanley Hopcroft <Stanley.Hopcroft@IPAustralia.Gov.AU>
To:        isp@FreeBSD.ORG
Subject:   Throughput & Availability: Does anyone have experience with Trunking products (eg EtherChannel) ... ?
Message-ID:  <Pine.BSF.4.21.0008191402560.353-100000@stan>

Dear Ladies and Gentlemen,

I am writing about getting better throughput and availability for
servers (by putting many NICs in each server), and would like to ask
whether anyone would care to compare their experience with

1 Trunking products (ie aggregated links between the switch ports
that connect to servers), such as Foundry's trunking products and
EtherChannel

2 Equal Cost Multi-path routing (with or without a routing core).

Writers to this list have commented favourably on the equal cost
multi-path option, but that option seems to be notably absent from
switch vendor literature; the vendors talk only about trunks (perhaps
so they can sell more switch ports; see, for example, the Foundry ISP
Co-Location and Co-Hosting "case study").

The theoretical pluses and minuses of each seem to me to be:

Factor                     Trunk                      Multi-Path

n x Throughput             No (<= 2 x)                ~ n x
(n = number of NICs)       eg 4 100TX NICs            eg 4 100TX NICs
                           => 200 Mbps                => 400 Mbps

Auto failover              Yes (by switch)            Yes (by routing
                                                      process)

Switch ports == NICs       No (dual or quad trunk:    Yes (any number
                           2 or 4 ports)              of ports)

Layer 2 (802.1q)           Yes                        No

Layer 3                    No                         Yes

Standard                   Only if 802.1q, not ISL    No

Available for FreeBSD      No                         Yes

Available for famous       Yes (Sun, NT, AIX),        Yes (maybe with
brand servers              given the right OS,        gated or RRAS)
                           driver etc

Works with all switches    No (needs the right        Yes
                           firmware etc)

Needs identical NICs       Yes (only NICs supported   No
in the server              by the trunking driver)

TCP reordering             No                         Yes

Needs an L3 core           No                         Yes

Needs a routing process    No                         Yes
per server (or many
default routes)

My conclusion is that trunking products provide something like
statistical load balancing (half the traffic uses one NIC, the
remainder the other) and do not increase the capacity of a single
client - server flow.

Trunking is therefore a feeble way of increasing server throughput,
even though it does improve link availability.
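
To illustrate the statistical balancing I mean, here is a rough
sketch of the sort of frame distribution an EtherChannel-style trunk
does. The XOR-of-low-bits hash is only an approximation of what
vendors actually implement, and the MAC addresses are made up.

TRUNK_LINKS = 2  # a dual trunk: two 100TX switch ports

def low_bits(mac: str) -> int:
    """Low-order bits of a MAC address, written aa:bb:cc:dd:ee:ff."""
    return int(mac.replace(":", ""), 16) & 0x3

def trunk_link(src_mac: str, dst_mac: str) -> int:
    """Pin a conversation to one trunk member by XORing the low
    bits of the two MAC addresses, roughly as EtherChannel does."""
    return (low_bits(src_mac) ^ low_bits(dst_mac)) % TRUNK_LINKS

# The same client-server pair always hashes to the same link, so
# one conversation tops out at 100 Mbps, not 200.
print(trunk_link("00:a0:c9:11:22:33", "00:e0:81:44:55:66"))
print(trunk_link("00:a0:c9:11:22:33", "00:e0:81:44:55:66"))  # same link

Because the hash is a pure function of the address pair, every frame
of a given client - server conversation lands on the same physical
link, which is why the trunk only balances statistically.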

The only practical disadvantages of equal cost multi-path are running
routing processes on the servers, having the receiving TCP re-order
segments, and needing an L3 switch core (so that client traffic is
forwarded to the server over all of the routes through the server
NICs).
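
To show why per-packet multi-path implies reordering, here is a toy
Python timeline; the 0.5 ms send gap and the path delays are invented
numbers.

PATH_DELAY_MS = [1.0, 1.6]   # path 0 is a little faster than path 1
SEND_GAP_MS = 0.5            # packets leave the server every 0.5 ms

def arrival_order(n_packets: int):
    """Stripe packets 0, 1, 2, ... round-robin over the two paths
    and return their sequence numbers in order of arrival."""
    arrivals = [(i * SEND_GAP_MS + PATH_DELAY_MS[i % 2], i)
                for i in range(n_packets)]
    return [seq for _, seq in sorted(arrivals)]

# Prints [0, 2, 1, 4, 3, 6, 5, 7]: each odd-numbered segment arrives
# after its successor, so the receiving TCP reorders constantly.
print(arrival_order(8))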

Would anyone like to comment on this, or better still, let me know
why trunking is a better proposal?

Alternatively, why would anyone use trunking (EtherChannel)?

I suppose Multi-link PPP is completely out of the question, because
no switch supports it?

Does anyone use the Foundry ServerIron solely to provide better
server throughput? (There the "Virtual IP" corresponds to many NICs
in one server.)

Thank you,

Yours sincerely.

S Hopcroft
Network Specialist
IP Australia

+61 2 6283 3189
+61 2 6281 1353 FAX



