Date: Thu, 15 Jun 2000 22:21:59 -0500
From: "Shawn Barnhart" <swb@grasslake.net>
To: "Thierry Herbelot" <herbelot@cybercable.fr>
Cc: <freebsd-hardware@FreeBSD.ORG>
Subject: Re: 4 x Network card
Message-ID: <005301bfd742$08e82c30$0102a8c0@k6>
References: <20000614173426.17183.qmail@hotmail.com> <61981.961008104@verdi.nethelp.no> <3948094C.2149CFEC@cybercable.fr>
----- Original Message -----
From: "Thierry Herbelot" <herbelot@cybercable.fr>

| PS : has someone any idea on how to use all ports "ganged" to get more
| bandwidth ? (I know I could use a 1-Gig Enet board, but I would like to
| use a 4-port-board to get a 400Mbps bandwidth to my file server)

We used to do that with Netware using multiple single-port NICs; it was
referred to as load balancing. The server answered "get nearest server"
broadcasts on its ports in round-robin fashion, which caused each client to
always transmit to the port it had been pointed at. Server transmit traffic
went out whichever port wasn't busy. All of the ports had to be plugged into
a switch for it to work right.

How do the Intel NICs that support "teaming" do it? I've always suspected
it's something similar, just moved from layer 4 down to the driver layer.

I imagine FreeBSD would complain if all four ports had the same IP address,
but that would accomplish, in a round-robin sort of way, what you're after.
It would also work if both the switch and the server supported multilink
trunking.
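
To make the round-robin idea concrete, here's a rough sketch in Python. It's
purely illustrative (not Netware or FreeBSD driver code), and the port names
and client IDs are made up: the server answers each client's discovery
broadcast from the next port in the rotation, so client-to-server traffic
ends up spread across all four links.

    # Conceptual sketch of round-robin load balancing across a quad-port NIC.
    # Port names and client IDs below are hypothetical.
    from itertools import cycle

    ports = ["fxp0", "fxp1", "fxp2", "fxp3"]   # the four ports on the card
    next_port = cycle(ports)                   # endless round-robin iterator

    assignments = {}

    def answer_discovery(client_id):
        """Answer a 'get nearest server' style broadcast on the next port
        in the rotation, so each client keeps sending to a different port."""
        port = next(next_port)
        assignments[client_id] = port
        return port

    for client in ["ws1", "ws2", "ws3", "ws4", "ws5"]:
        print(client, "->", answer_discovery(client))
    # ws1 -> fxp0, ws2 -> fxp1, ws3 -> fxp2, ws4 -> fxp3, ws5 -> fxp0

The same spreading could just as well be done with a hash of the client
address instead of a rotation; that's roughly how switch-side trunking
distributes frames across the member links.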