Date:      Thu, 30 Aug 2007 20:25:14 -0500
From:      Eric Anderson <anderson@freebsd.org>
To:        Scott Long <scottl@samsco.org>
Cc:        FREEBSD - SCSI - LIST <freebsd-scsi@freebsd.org>
Subject:   Re: performance with LSI SAS 1064
Message-ID:  <46D76DFA.5010106@freebsd.org>
In-Reply-To: <46D6D952.40305@samsco.org>
References:  <71d0ebb0708291245g79d2141fx73cc8a6e76875944@mail.gmail.com>	 <46D5E17F.3070403@samsco.org>	 <71d0ebb0708291416v17351c65u7ccc1b7bbe0271d2@mail.gmail.com>	 <46D5E5B1.207@samsco.org>	 <71d0ebb0708291506i49649a60l8006deafb20891ac@mail.gmail.com>	 <46D63710.1020103@freebsd.org> <71d0ebb0708300502x632fe83bo617f84ca2008dc7d@mail.gmail.com> <46D6BEC0.1050104@samsco.org> <46D6CB71.4030707@freebsd.org> <46D6D952.40305@samsco.org>

Scott Long wrote:
> Eric Anderson wrote:
>> Scott Long wrote:
>>> Lutieri G. wrote:
>>>> 2007/8/30, Eric Anderson <anderson@freebsd.org>:
>>>>> I'm confused - you said in your first post you were getting 3MB/s,
>>>>> whereas above you show something like 55MB/s.
>>>> Sorry! Using blogbench I got 3MB/s and 100% busy. Since it was 100%
>>>> busy, I thought that 3MB/s was the maximum speed. But I was wrong...
>>>
>>> %busy is a completely useless number for anything but untagged,
>>> uncached disk subsystems.  It's only an indirect measure of latency, and
>>> there are better tools for measuring latency (gstat).
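
(For anyone following along: running plain

    gstat

gives a refreshing per-provider view whose ms/r and ms/w columns show
read and write latency directly, next to L(q) and %busy, which is a much
better way to judge how loaded the disks really are.)
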
>>>
>>>>> You didn't say what kind of disks, or how many, the configuration,
>>>>> etc - so it's hard to answer much.  55MB/s seems pretty decent for
>>>>> many hard drives under sequential use (which is what dd really tests).
>>>>>
>>>> SAS disks.  Seagate; I don't know the exact model of the disks.
>>>>
>>>> OK. If 55MB/s is a decent speed, I'm happy.  I'm having problems with
>>>> the squid cache, and it may be a problem related to the disks.  But...
>>>> I'm still investigating and ruling things out.
>>>>
>>>>
>>>>> Your errors before were probably caused by your queue depth being set
>>>>> to 255 (or 256?) when the adapter can't handle that many.  You should
>>>>> use camcontrol to reduce it, to maybe 32.  See the camcontrol man page
>>>>> for the right usage.  It needs setting on every boot, so a startup
>>>>> file is probably a good place for it.
>>>>>
>>>> Is there any way to get the right number to reduce it to?!
>>>>
>>>
>>> If you're seeing erratic performance in production _AND_ you're seeing
>>> lots of accompanying messages on the console about tag depth jumping
>>> around, you can use camcontrol to force the depth to a lower number of
>>> your choosing.  This kind of problem is pretty rare, though.
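
(For concreteness, forcing the depth with camcontrol looks roughly like
this - the device name da0 and the value 32 are only examples, not
values taken from this thread:

    # force the number of tagged openings for da0 down to 32
    camcontrol tags da0 -N 32

The setting does not persist across reboots, so if it turns out to be
needed it can go in a local startup script such as /etc/rc.local.)
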
>>
>> Scott, you are far more of a SCSI guru than I, so please correct me if 
>> this is incorrect.  Can't you get a good estimate, by knowing the 
>> queue depth of the target(s), and dividing it by the number of 
>> initiators?  So in his case, he has one initiator, and (let's say) one 
>> target.  If the queue depth of the target (being the Seagate SAS 
>> drive) is 128 (see Seagate's paper here: 
>> http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/savvio/Savvio%2015K.1/SAS/100407739b.pdf 
>> ), then he should have to reduce it down from 25[56] to 128, correct?
>>
>> With QLogic cards connected to a fabric, I saw queue depth issues 
>> under heavy load.
>>
> 
> I understand what you're saying, but you're a bit confused on 
> terminology =-)

Figured as much :)

> There are two factors in the calculation.  One is how many transactions
> the controller (the initiator) can have in progress at once.  This is
> really independent of what the disks are capable of or how many disks 
> are on the bus.  This is normally known to the driver in some 
> chip-specific way.  Second is how many tagged transactions a disk can
> handle.  This actually isn't something that can be discovered in a
> generic way, so the SCSI layer in FreeBSD guesses, and then revises that
> guess over time based on feedback from the drive.
> 
> Manually setting the queue depth is not something that he "should have 
> to [do]".  It perfectly normal to get console messages on occasion about
> the OS re-adjusting the depth.  Where it becomes a problem is in high
> latency topologies (like FC fabrics) and buggy drive firmware where the 
> algorithm winds up thrashing a bit.  For direct attached SAS disks, I
> highly doubt that it is needed.  Playing a guessing game with this will
> almost certainly result in lower performance.
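
(Side note: the value the SCSI layer has currently settled on for a disk
can be checked read-only with camcontrol as well - da0 again being just
an example device:

    # report the current tagged openings and related queue counters
    camcontrol tags da0 -v

which only reports, and changes nothing.)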

Ok, that makes sense - my experience was in a heavily loaded fabric 
environment.

Thanks for the great info!

Eric