Date:      Thu, 30 Aug 2007 20:22:30 -0500
From:      Eric Anderson <anderson@freebsd.org>
To:        Scott Long <scottl@samsco.org>
Cc:        FREEBSD - SCSI - LIST <freebsd-scsi@freebsd.org>
Subject:   Re: performance with LSI SAS 1064
Message-ID:  <46D76D56.5070007@freebsd.org>
In-Reply-To: <46D6D9C3.6050202@samsco.org>
References:  <71d0ebb0708291245g79d2141fx73cc8a6e76875944@mail.gmail.com>	 <46D5E17F.3070403@samsco.org>	 <71d0ebb0708291416v17351c65u7ccc1b7bbe0271d2@mail.gmail.com>	 <46D5E5B1.207@samsco.org>	 <71d0ebb0708291506i49649a60l8006deafb20891ac@mail.gmail.com>	 <46D63710.1020103@freebsd.org>	 <71d0ebb0708300502x632fe83bo617f84ca2008dc7d@mail.gmail.com>	 <46D6BEC0.1050104@samsco.org> <46D6CB71.4030707@freebsd.org> <71d0ebb0708300737o4fc7966dj61cf0e68482da398@mail.gmail.com> <46D6D9C3.6050202@samsco.org>

Scott Long wrote:
> 54MB/s is reasonable for 10k 2.5" disks.  You might be able to squeeze
> some more performance out of it by upgrading to FreeBSD 7.0.  I _do not_
> recommend playing with the queue depth controls unless your console logs
> are quickly filling up with messages about it.

Yeah, 55-65MB/s is about right for that drive.  Also, when I played with 
the tagged queue depth previously, I never had any issues, and it solved 
several SCSI (fabric/fibre channel, though) issues I was having.  The 
performance didn't change measurably when lowering it to 64, but below 
that I did see a performance hit.
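Just for reference, roughly the camcontrol invocations being discussed - 
da0 and the numbers are only the examples from this thread, so adjust for 
your own setup (and double-check camcontrol(8), I'm going from memory):

   # show the current tagged openings for the device
   camcontrol tags da0 -v

   # force the tag depth down to the drive's documented 64-tag limit
   camcontrol tags da0 -N 64

   # the setting doesn't survive a reboot, so a startup script such as
   # /etc/rc.local is one place to put the line above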

Eric




> Lutieri G. wrote:
>> These are my disks:
>>
>> Seagate Savvio (ST913401ss) 10K.1 SAS 3Gb/s 73-GB hard drive.  In the
>> manual I found this information:
>>
>> Queue tagging (up to 64 queue tags supported)
>>
>> Is this the maximum to set using camcontrol, with syntax like this:
>> camcontrol tags da0 -N 64 ?
>>
>> 2007/8/30, Eric Anderson <anderson@freebsd.org>:
>>> Scott Long wrote:
>>>> Lutieri G. wrote:
>>>>> 2007/8/30, Eric Anderson <anderson@freebsd.org>:
>>>>>> I'm confused - you said in your first post you were getting 3MB/s,
>>>>>> whereas above you show something like 55MB/s.
>>>>> Sorry!  Using blogbench I got 3MB/s and 100% busy.  Since it was 100%
>>>>> busy I thought that 3MB/s was the maximum speed.  But I was wrong...
>>>> %busy is a completely useless number for anything but untagged,
>>>> uncached disk subsystems.  It's only an indirect measure of latency,
>>>> and there are better tools for measuring latency (gstat).
>>>>
>>>>>> You didn't say what kind of disks, or how many, or the configuration,
>>>>>> etc. - so it's hard to say much.  The 55MB/s seems pretty decent for
>>>>>> many hard drives under sequential use (which is what dd really tests).
>>>>>>
>>>>> SAS disks.  Seagate; I don't know the exact model of the disks.
>>>>>
>>>>> OK.  If 55MB/s is a decent speed, I'm happy.  I'm having problems with
>>>>> squid cache, and it may be a problem related to the disks.  But...
>>>>> I'm still investigating and ruling things out.
>>>>>
>>>>>
>>>>>> Your errors before were probably caused by your queue depth being set
>>>>>> to 255 (or 256?) when the adapter can't handle that many.  You should
>>>>>> use camcontrol to reduce it, to maybe 32.  See the camcontrol man page
>>>>>> for the right usage.  It's something that needs setting on every boot,
>>>>>> so a startup file is a good place for it, maybe.
>>>>>>
>>>>> Is there any way to work out the right number to reduce it to?
>>>>>
>>>> If you're seeing erratic performance in production _AND_ you're seeing
>>>> lots of accompanying messages on the console about the tag depth jumping
>>>> around, you can use camcontrol to force the depth to a lower number of
>>>> your choosing.  This kind of problem is pretty rare, though.
>>> Scott, you are far more of a SCSI guru than I am, so please correct me
>>> if this is incorrect.  Can't you get a good estimate by taking the queue
>>> depth of the target(s) and dividing it by the number of initiators?  So
>>> in his case, he has one initiator and (let's say) one target.  If the
>>> queue depth of the target (being the Seagate SAS drive) is 128 (see
>>> Seagate's paper here:
>>> http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/savvio/Savvio%2015K.1/SAS/100407739b.pdf
>>> ), then he would only have to reduce it down from 25[56] to 128, correct?
>>>
>>> With QLogic cards connected to a fabric, I saw queue depth issues under
>>> heavy load.
>>>
>>> Eric
>>>
>>>
>>>
>>>
>>>
>>
>>
> 
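
(Working out the back-of-the-envelope estimate from the quoted part above,
just to make the arithmetic explicit - the 4-initiator line is a made-up
example, not this setup:

   128 tags on the target / 1 initiator  = 128 tags per initiator
   128 tags on the target / 4 initiators =  32 tags per initiator

so the idea was simply to drop the depth from 25[56] down to 128 here.
Rough math only, of course.)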



