Date:      Tue, 30 May 2006 09:46:39 +0100 (BST)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        Brooks Davis <brooks@one-eyed-alien.net>
Cc:        gallatin@freebsd.org, Paul Allen <nospam@ugcs.caltech.edu>, "current@freebsd.org" <current@freebsd.org>
Subject:   Re: Importing iSCSI target from NetBSD
Message-ID:  <20060530094413.W79162@fledge.watson.org>
In-Reply-To: <20060530015234.GB26022@odin.ac.hmc.edu>
References:  <447AB34C.4030509@sippysoft.com> <11410450515.20060529225555@lacave.net> <447B77AF.9060309@samsco.org> <447B7A55.7040704@FreeBSD.org> <447B7CB7.5000000@FreeBSD.org> <447B8900.4050603@samsco.org> <20060530004328.GF28128@groat.ugcs.caltech.edu> <20060530015234.GB26022@odin.ac.hmc.edu>


On Mon, 29 May 2006, Brooks Davis wrote:

> On Mon, May 29, 2006 at 05:43:28PM -0700, Paul Allen wrote:
>>> From Scott Long <scottl@samsco.org>, Mon, May 29, 2006 at 05:51:28PM -0600:
>>>> P.S. Just to make it clear - just consider running iSCSI over a 100MBps
>>>> link or even slower WAN links, which I think covers a very large market
>>>> for this technology now. The performance constraint imposed by running in
>>>> userland is unlikely to be an issue at all.
>>>
>>> Every company and group that I've talked to about iSCSI is worried about 
>>> performance.  In any case, please follow the lead of Mr. Senault and look 
>>> at making this a port.
>>
>> And in particular the anticipation of low(er) cost 10Gb Ethernet is a 
>> driving factor behind iSCSI.
>>
>> AFAIK, the low-latency performer in this field (for NICs) is from Myricom. 
>> Andrew Gallatin (one of the FreeBSD alpha committers) was responsible for 
>> porting the myrinet drivers, so perhaps he can comment as to whether 
>> FreeBSD will be getting a driver for their 10GbE cards.  Ethernet at these 
>> speeds is a real stress test for many OSes; it should be interesting to see 
>> how FreeBSD holds up.
>
> There's a driver in current.  We don't perform nearly as well as we should 
> at the moment.

FYI, I recently received donated hardware from Yahoo! and Drew has kindly 
offered to send me a couple of 10gbps cards to work with, so I hope to have a 
chance to start doing some measurement and optimization work.  One of the 
problems we've been having is that it's hard to optimize the CPU use of the 
network stack when the CPU significantly outstrips available bus and network 
bandwidth.  It seems like hardware swings back and forth quite a bit -- for a 
few years gigabit was way-the-heck-faster-than-CPU, now it's the other way 
around again.  The best stack optimization work happens when you have to 
figure out how to get the network stack to perform well in near-infinite 
bandwidth scenarios with a CPU-bound stack, which is where we are with 10gbps 
currently.  One of the things that makes all this rather tricky is that it's 
quite hard to build test rigs, test setups, and get the hardware details 
right.  Hopefully, with Yahoo's and Drew's help, my test setup will be good 
for looking at this for a couple of years.
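
To make the CPU-bound point concrete, here is a rough back-of-envelope sketch 
of the per-packet cycle budget at 1 Gbps versus 10 Gbps; the 3 GHz CPU clock 
and 1500-byte frames are illustrative assumptions, not figures from this 
thread:

    #include <stdio.h>

    int
    main(void)
    {
            const double cpu_hz = 3e9;            /* assumed CPU clock: 3 GHz */
            const double frame_bits = 1500.0 * 8; /* assumed full-size Ethernet frames */
            const double rates[] = { 1e9, 10e9 }; /* 1 Gbps and 10 Gbps line rates */
            int i;

            for (i = 0; i < 2; i++) {
                    double pps = rates[i] / frame_bits; /* packets per second at line rate */
                    double cycles = cpu_hz / pps;       /* CPU cycles available per packet */
                    printf("%2.0f Gbps: %8.0f pkt/s, ~%6.0f cycles/packet\n",
                        rates[i] / 1e9, pps, cycles);
            }
            return (0);
    }

Under those assumptions the stack's budget drops from roughly 36,000 cycles 
per full-size packet at 1 Gbps to roughly 3,600 at 10 Gbps, and smaller 
packets tighten it further, which is the CPU-bound situation described above.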

Robert N M Watson


