Date:      Thu, 8 May 2014 09:49:02 +0200
From:      Borja Marcos <borjam@sarenet.es>
To:        Kenneth D. Merry <ken@freebsd.org>
Cc:        freebsd-scsi@freebsd.org
Subject:   Re: Testing new mpr driver
Message-ID:  <4DB83981-0D4D-4484-BC89-4ED8C02DCD0F@sarenet.es>
In-Reply-To: <20140507184557.GA80243@nargothrond.kdm.org>
References:  <8A41AB90-AC2F-4200-91D6-3D3CF9E8A835@sarenet.es> <20140507184557.GA80243@nargothrond.kdm.org>


On May 7, 2014, at 8:45 PM, Kenneth D. Merry wrote:

> That's hard to say.  If you're using a 6Gb expander, you would have half of
> the available SAS bandwidth if you only connected four lanes from the
> controller to the expander instead of 8.  If you somehow have a 12Gb
> expander (it isn't obvious from the model number above what the expander
> speed is), then you would have the same amount of bandwidth.

Anyway, as far as I understand (SAS expanders perform link switching,
right?), the actual throughput will be limited by the end-to-end link
speed. As the disks I am using are SATA, not SAS, each lane would in any
case be running at 6 Gbps instead of 12.
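To put rough numbers on it (raw line rates only, ignoring 8b/10b encoding
and protocol overhead; the x4/x8 wide-port widths are just my assumption
about the cabling):

    8 lanes x  6 Gbps = 48 Gbps aggregate (6Gb HBA, x8 to the expander)
    4 lanes x 12 Gbps = 48 Gbps aggregate (12Gb HBA, x4, if the expander ran at 12Gb)
    4 lanes x  6 Gbps = 24 Gbps aggregate (x4 links negotiated down to 6Gb)

Each individual SATA disk still tops out at 6 Gbps on its own phy, so the
wide-port width only matters once several disks are busy at the same time.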

> One thing that could be happening is you may have lower latency through the
> new 12Gb controller.

Since I saw that it supports something called "fastpath" (I have some
reading to do, I am a bit outdated in these matters), I imagined that it
might be a more efficient transfer method which, despite the links working
at 6 Gbps, could explain a gain in performance.

> By the way, if you run with INVARIANTS enabled, you may run into some
> issues (i.e. a panic) on reboot until we merge r265485 to stable/10.

I am running stable/10, and I'm not running INVARIANTS. Anyway, I will be
tracking stable/10 closely, thank you!
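(For reference, INVARIANTS is a compile-time kernel option; GENERIC on
stable/10 does not enable it. A custom kernel config would turn it on with
lines like the following -- shown only as an illustration:)

    options         INVARIANTS
    options         INVARIANT_SUPPORT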

Great to have LSI so seriously involved!



Borja.





