Date:      Tue, 24 Sep 2019 19:09:14 +0200
From:      Warner Losh <imp@bsdimp.com>
To:        John Fleming <john@spikefishsolutions.com>
Cc:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: Question about bottle neck in storage
Message-ID:  <CANCZdfoKGJ1F5sKN_1u-mbb9NHnD8c==DAjefV_RK+yKzip_oQ@mail.gmail.com>
In-Reply-To: <CABy3cGxjhTcg+pg2FiCc4OqG4Z1Qy1vFdvo7zU_t0ahC3mb+Yw@mail.gmail.com>
References:  <CABy3cGxjhTcg+pg2FiCc4OqG4Z1Qy1vFdvo7zU_t0ahC3mb+Yw@mail.gmail.com>

On Tue, Sep 24, 2019 at 5:46 PM John Fleming <john@spikefishsolutions.com>
wrote:

> Is there any way to see how busy a SAS/SATA controller is vs the disks? I
> have an R720 with 14 Samsung 860 EVOs in it (it's a lab server) in RAID
> 10 ZFS.
>
> When firing off a dd (bs=1G count=10), it seems like the disks never go
> above 50% busy. I'm trying to figure out if I'm maxing out SATA 3
> bandwidth or if it's something else (like terrible dd options).
>

Two points to consider here. First, modern SSDs need lots of concurrent
transactions to saturate (NVMe in particular has lots of queues), so 50% busy
means you are nowhere close to saturating the drives. Schedule more I/O to fix
that. It's better to run lots and lots of concurrent dd processes against
different parts of the drive, or to use fio with the aio kernel option and the
posixaio I/O engine.

I use the following script, but often need to increase the number of
threads / jobs to saturate.

; SSD testing: 128k I/O 64 jobs 32 deep queue

[global]
direct=1
rw=randread
refill_buffers
norandommap
randrepeat=0
bs=128k
ioengine=posixaio
iodepth=32
numjobs=64
runtime=60
group_reporting
thread

[ssd128k]
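For a sense of scale, the job file above keeps numjobs * iodepth requests outstanding at once; a quick back-of-envelope check (my arithmetic, not from the thread):

```shell
# Total outstanding I/Os implied by the fio job file above.
NUMJOBS=64     # numjobs= in the job file
IODEPTH=32     # iodepth= in the job file
echo "outstanding I/Os: $((NUMJOBS * IODEPTH))"
# -> outstanding I/Os: 2048
```

That is vastly more parallelism than a single sequential dd ever generates, which is why fio can saturate the drives when dd cannot.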

Second, the system's % busy statistic is misleading. It is the percentage of
time that at least one command is outstanding on the drive. 100% busy can
correspond to a tiny fraction of the total bandwidth you can get from the
drive.
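A little Little's-law arithmetic shows why (the latency figure here is an assumed, illustrative number, not a measurement from this system):

```shell
# At queue depth 1, throughput is bounded by 1/latency: one 4k read at
# ~100 us of service time keeps the drive "100% busy" while delivering
# only a sliver of its bandwidth.
LAT_US=100                          # assumed per-I/O service time
IOPS=$((1000000 / LAT_US))          # qd=1 -> at most 1/latency IOPS
BS=4096                             # 4k request size
MBPS=$((IOPS * BS / 1000000))       # resulting throughput
echo "qd=1: $IOPS IOPS, ~$MBPS MB/s"
# -> qd=1: 10000 IOPS, ~40 MB/s
```

So a drive can sit at 100% busy while moving ~40 MB/s on a link capable of ~550 MB/s; only deeper queues close that gap.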


> My setup is a Dell R720 with 2 x LSI 9361 cards. Each card goes to a
> dedicated 8-drive board inside the front of the R720. Basically I'm
> just saying it's not a single SAS cable to 14 drives.
>
> Don't have CPU info on hand... Xeon something. DDR3-1600 (128GB)
>
> Both controllers are in x8 slots running PCIe gen 3.
>
> BTW I'm sure this has been asked a million times, but what would be
> some decent benchmark tests while I'm at it?
>

See above... :)

Warner


