Date:      Sun, 29 Sep 2019 11:35:41 -0400
From:      John Fleming <john@spikefishsolutions.com>
To:        Warner Losh <imp@bsdimp.com>
Cc:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: Question about bottle neck in storage
Message-ID:  <CABy3cGyUM6CfxL-pvd5_TCoPXd7y1kXiPRyj1-ecFpx5b3+Pww@mail.gmail.com>
In-Reply-To: <CANCZdfoKGJ1F5sKN_1u-mbb9NHnD8c==DAjefV_RK+yKzip_oQ@mail.gmail.com>
References:  <CABy3cGxjhTcg+pg2FiCc4OqG4Z1Qy1vFdvo7zU_t0ahC3mb+Yw@mail.gmail.com> <CANCZdfoKGJ1F5sKN_1u-mbb9NHnD8c==DAjefV_RK+yKzip_oQ@mail.gmail.com>

On Tue, Sep 24, 2019 at 1:09 PM Warner Losh <imp@bsdimp.com> wrote:
>
>
>
> On Tue, Sep 24, 2019 at 5:46 PM John Fleming <john@spikefishsolutions.com> wrote:
>>
>> Is there anyway to see how busy a SAS/Sata controller is vs disks? I
>> have a R720 with 14 Samsung 860 EVOs in it (its a lab server) in raid
>> 10 ZFS.
>>
>> When firing off a dd (bs=1G count=10) it seems like the disks never go
>> above 50% busy. I'm trying to figure out if I'm maxing out SATA 3 BW
>> or if it's something else (like terrible dd options).
>
>
> Two points to consider here. First, NVMe has lots of queues and needs
> lots of concurrent transactions to saturate, so the 50% busy means you
> are nowhere close to saturating the drives. Schedule more I/O to fix
> that. It's better to do lots and lots of concurrent dd runs to
> different parts of the drive, or to use fio with the aio kernel option
> and the posixaio I/O engine.
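
A quick sketch of that "lots of concurrent dd" idea, just for
illustration; the device name /dev/da1 and the 8 GB regions below are
placeholders, not anything taken from this thread:

  #!/bin/sh
  # Fire off eight dd readers, each over a different 8 GB region of the
  # same disk, so the requests stay spread out and the queue stays full.
  for i in 0 1 2 3 4 5 6 7; do
      dd if=/dev/da1 of=/dev/null bs=1m count=8192 skip=$((i * 8192)) &
  done
  wait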
>
> I use the following script, but often need to increase the number of
> threads / jobs to saturate.
>
> ; SSD testing: 128k I/O 64 jobs 32 deep queue
>
> [global]
> direct=1
> rw=randread
> refill_buffers
> norandommap
> randrepeat=0
> bs=128k
> ioengine=posixaio
> iodepth=32
> numjobs=64
> runtime=60
> group_reporting
> thread
>
> [ssd128k]
>
I didn't catch what utility was using that. I started poking around
at iozone and bonnie++.
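
For what it's worth, the block above looks like an fio job file;
assuming it is saved as ssd128k.fio, a run might look something like
this (the package name and the kldload step are assumptions on my
part, and AIO is already built into FreeBSD 11 and later):

  # Install fio and load kernel AIO support if it isn't already there.
  pkg install fio
  kldload -n aio
  # Run the job file; results are combined because of group_reporting.
  fio ssd128k.fio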

BTW these are SATA, not NVMe.

> Second, the system's % busy statistics are misleading. They are the %
> of the time that a command is outstanding on the drive. 100% busy can
> be a tiny percentage of the total bandwidth you can get from the drive.
>
>>
>> my setup is Dell R720 with 2 x LSI 9361 cards. Each card is going to a
>> dedicated 8 drive board inside the front of the R720. Basically i'm
>> just saying its not a single SAS cable to 14 drives.
>>
>> Don't have cpu info hand.. zeon something. DDR3-1600 (128GB)
>>
>> Both controllers are in 8x slots running PCIe gen 3.
>>
>> BTW i'm sure this has been asked a million times but what would be
>> some decent benchmark tests while i'm at it?
>
>
> See above... :)
>
> Warner

So my UPS got angry and shut everything down. I figured this was a
good chance to look at iostat again.

This is while the array is being scrubbed.

I'm very happy with these numbers!
BTW da0 and da8 are OS drives, not raid 10 members.

extended device statistics
device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
da0            0       0      0.0      0.0     0     0     0     0    0   0
da1         4003       7 505202.5    207.6     0     0     1     0    2 100
da2         3980      10 508980.2    265.5     0     0     0     0    2 100
da3         3904       8 499675.8    183.1     0     0     0     0    2  99
da4         3850       8 488870.5    263.9     0     0     0     0    2 100
da5         4013      11 513640.6    178.8     0     0     1     0    2 100
da6         3851      13 489035.8    286.4     0     0     1     0    2 100
da7         3931      12 503197.6    271.6     0     0     0     0    2 100
da8            0       0      0.0      0.0     0     0     0     0    0   0
da9         4002       8 505164.1    207.6     0     0     1     0    2 100
da10        3981      10 509133.8    265.5     0     0     0     0    2 100
da11        3905       8 499791.0    183.1     0     0     0     0    2 100
da12        3851       9 488985.6    263.9     0     0     0     0    2 100
da13        4012      11 513576.6    178.8     0     0     1     0    2 100
da14        3850      14 488971.8    286.4     0     0     0     0    2 100
da15        3930      12 503108.0    271.6     0     0     0     0    2 100
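
Those columns look like FreeBSD's extended iostat output; for anyone
following along, a couple of stock ways to watch the same thing live
(both commands are in the base system):

  # Extended per-device statistics, refreshed every second.
  iostat -x -w 1
  # GEOM-level view: throughput, queue length and %busy per physical provider.
  gstat -p -I 1s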


