Date:      Tue, 24 Sep 2019 12:27:19 -0400
From:      John Fleming <john@spikefishsolutions.com>
To:        Pete Wright <pete@nomadlogic.org>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: Question about bottle neck in storage
Message-ID:  <CABy3cGx5_HMNjaL=+9ZvCqHVK9ZkdGs+HdznymTX8ODZ7Gk=zA@mail.gmail.com>
In-Reply-To: <9415ff89-a36a-86ee-3f3f-47e9b807059e@nomadlogic.org>
References:  <CABy3cGxjhTcg+pg2FiCc4OqG4Z1Qy1vFdvo7zU_t0ahC3mb+Yw@mail.gmail.com> <9415ff89-a36a-86ee-3f3f-47e9b807059e@nomadlogic.org>

On Tue, Sep 24, 2019 at 12:05 PM Pete Wright <pete@nomadlogic.org> wrote:
>
>
>
> On 9/24/19 8:45 AM, John Fleming wrote:
> > Is there any way to see how busy a SAS/SATA controller is vs. the
> > disks? I have an R720 with 14 Samsung 860 EVOs in it (it's a lab
> > server) in a ZFS RAID 10.
> >
> > When firing off a dd (bs=1G count=10), it seems like the disks never
> > go above 50% busy. I'm trying to figure out if I'm maxing out SATA 3
> > bandwidth or if it's something else (like terrible dd options).
> >
> > My setup is a Dell R720 with 2 x LSI 9361 cards. Each card goes to a
> > dedicated 8-drive backplane inside the front of the R720. Basically
> > I'm just saying it's not a single SAS cable to 14 drives.
> >
> > Don't have the CPU info on hand... Xeon something. DDR3-1600 (128 GB).
> >
> > Both controllers are in x8 slots running PCIe gen 3.
>
> You might want to take a look at sysutils/intel-pcm
> (https://github.com/opcm/pcm).  I *think* this should give you metrics
> on PCIe bus utilization, among other useful statistics.
>

OK, I can check that, but to be clear, what I meant was whether I've
maxed out the SATA 3 bandwidth on the card, not so much the PCIe
bandwidth.
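
What I have in mind for checking that is to watch per-disk throughput
while the test runs and add it up: each 860 EVO should top out around
500-550 MB/s sequential (SATA 3 is ~600 MB/s per link), so seven drives
per card would be roughly 3.5-3.8 GB/s, still under a PCIe 3.0 x8 slot
(~7.9 GB/s). Roughly something like this (the pool name is just a
placeholder, and if compression is on, /dev/zero is useless test data):

  # per-disk busy% and MB/s at the GEOM level, physical providers only
  gstat -p -I 1s

  # several parallel streams with a smaller block size are more likely
  # to expose a controller/link limit than one dd with a 1 GiB buffer
  for i in 1 2 3 4; do
    dd if=/dev/zero of=/tank/ddtest.$i bs=1m count=8192 &
  done; wait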

> Also, look up the bandwidth for the PCIe bus and see if your aggregate
> disk throughput on one of the PCIe lanes is saturating the bus (pcm
> should also help here).  You can also run "zpool iostat -v 2" to see
> per-disk I/O metrics to help determine if this is an issue.
>
I think I've looked at that before, but I'll check again.
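
For my own reference, I'm assuming those two checks boil down to
something like this (the exact pcm binary name may differ depending on
the version of the port, so that part is a guess):

  # per-vdev / per-disk bandwidth and IOPS, refreshed every 2 seconds
  zpool iostat -v 2

  # PCIe utilization counters from intel-pcm; older versions of the
  # port install this as pcm-pcie.x
  pcm-pcie 1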

>
> > BTW, I'm sure this has been asked a million times, but what would be
> > some decent benchmark tests while I'm at it?
>
> I generally run several tests and then compare results, for example
> bonnie++, iozone, and iperf (writing over the wire and to disk), as
> well as some more realistic scripts based on the use case I'm building
> a solution for.  Hope that helps.
>
I've done iperf just across network I/O. I'm getting 4x25 Gb/sec,
so I'm good there, I think (Mellanox ConnectX-4 in Ethernet mode).
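
If I redo that with disk in the path, I'm assuming iperf3's file option
would do it (the host name and pool path below are placeholders):

  # pure network baseline
  iperf3 -s                           # receiver
  iperf3 -c r720.example -P 4 -t 30   # sender

  # same transfer, but the receiver writes what it receives to the pool
  iperf3 -s -F /tank/iperf-sink.dat   # receiver
  iperf3 -c r720.example -t 30        # sender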

I'll poke around with bonnie++ and iozone. I used bonnie++ once but
didn't really get what it was telling me, though I really didn't put
much effort into it.
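
From a quick look at the man pages, I'm assuming invocations roughly
like these, with file sizes well past the 128 GB of RAM so ARC caching
doesn't hide the disks (the directory and sizes are placeholders):

  # bonnie++: -d test dir, -s file size, -n 0 skips the small-file
  # test, -u is required when starting it as root
  bonnie++ -d /tank/bench -s 256g -n 0 -u root

  # iozone: sequential write (-i 0) and read (-i 1) with 1 MB records
  iozone -s 256g -r 1m -i 0 -i 1 -f /tank/bench/iozone.tmp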

Thanks!
> -pete
>
> --
> Pete Wright
> pete@nomadlogic.org
> @nomadlogicLA
>


