Date:      Wed, 21 Jun 2017 09:01:01 +0100
From:      Steven Hartland <killing@multiplay.co.uk>
To:        "Caza, Aaron" <Aaron.Caza@ca.weatherford.com>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: FreeBSD 11.1 Beta 2 ZFS performance degradation on SSDs
Message-ID:  <86bf6fad-977a-b096-46b9-e9099a57a1f4@multiplay.co.uk>
In-Reply-To: <c4c6d9a56a6648189011a28ab1a90f59@DM2PR58MB013.032d.mgd.msft.net>
References:  <c4c6d9a56a6648189011a28ab1a90f59@DM2PR58MB013.032d.mgd.msft.net>

On 20/06/2017 21:26, Caza, Aaron wrote:
>> On 20/06/2017 17:58, Caza, Aaron wrote:
>> dT: 1.001s  w: 1.000s
>>    L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>>       0   4318   4318  34865    0.0      0      0    0.0      0      0    0.0   14.2| ada0
>>       0   4402   4402  35213    0.0      0      0    0.0      0      0    0.0   14.4| ada1
>>
>> dT: 1.002s  w: 1.000s
>>    L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>>       1   4249   4249  34136    0.0      0      0    0.0      0      0    0.0   14.1| ada0
>>       0   4393   4393  35287    0.0      0      0    0.0      0      0    0.0   14.5| ada1
>> Your %busy is very low, so it sounds like the bottleneck isn't in raw disk performance but somewhere else.
>>
>> Might be interesting to see if anything stands out in top -Sz and then press h for threads.
>>
> I rebooted the system to disable Trim so currently not degraded.
>
> dT: 1.001s  w: 1.000s
>   L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>      3   3887   3887 426514    0.7      0      0    0.0      0      0    0.0   90.7| ada0
>      3   3987   3987 434702    0.7      0      0    0.0      0      0    0.0   92.0| ada1
>
> dT: 1.002s  w: 1.000s
>   L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>      3   3958   3958 433563    0.7      0      0    0.0      0      0    0.0   91.6| ada0
>      3   3989   3989 438417    0.7      0      0    0.0      0      0    0.0   93.0| ada1
>
> test@f111beta2:~ # dd if=/testdb/test of=/dev/null bs=1m
> 16000+0 records in
> 16000+0 records out
> 16777216000 bytes transferred in 19.385855 secs (865435959 bytes/sec)
Now that is interesting, as you're getting a smaller number of ops/s but
much higher throughput.

In the normal case you're seeing ~109kB per read, whereas in the degraded
case you're seeing only ~8kB per read.
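
To spell out the arithmetic, the average read size is just throughput divided
by the ops rate; using the gstat samples above, a rough check with bc(1):

    # average kB per read = kBps / (ops/s), values taken from the gstat output above
    echo "scale=1; 434702/3987" | bc    # ~109 kB per read in the non-degraded case
    echo "scale=1; 35213/4402"  | bc    # ~8 kB per read in the degraded case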

Given this, and knowing the application level isn't affecting it, we need
to identify where in the IO stack the reads are getting limited to 8kB.
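
One way to see what size the requests actually are by the time they hit the
providers would be a DTrace one-liner along these lines (untested, and it
assumes the dtrace kernel modules / io provider are available on your box):

    # histogram of I/O sizes issued to the disks while the dd is running
    dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'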

With your additional information about ARC, it could be that the limited 
memory is the cause.
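
If you want to sanity check that, watching the ARC while the dd runs should
show it; from memory the relevant sysctls on 11.x are:

    # current ARC size vs its configured maximum
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
    # hit / miss counters give a rough idea of how much is being served from RAM
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses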

     Regards
     Steve


