Date:      Fri, 17 Oct 2014 09:22:56 -0400
From:      "Chad J. Milios" <chad@ccsys.com>
To:        Sourish Mazumder <sourish@cloudbyte.com>
Cc:        "freebsd-geom@freebsd.org" <freebsd-geom@freebsd.org>
Subject:   Re: geom gate network
Message-ID:  <55184ECF-DCC3-4ACE-A798-D1E7F5BDDB58@ccsys.com>
In-Reply-To: <CABv3qbGL99NZvQ-2Ze=rnQTjEEf_KLy1sJQHLV27e47sX2dLGw@mail.gmail.com>
References:  <CABv3qbGL99NZvQ-2Ze=rnQTjEEf_KLy1sJQHLV27e47sX2dLGw@mail.gmail.com>


> On Oct 17, 2014, at 8:04 AM, Sourish Mazumder <sourish@cloudbyte.com> wrote:
>
> Hi,
>
> I am planning to use geom gate network for accessing remote disks. I set up
> geom gate as per the FreeBSD handbook. I am using FreeBSD 9.2.
> I am noticing heavy performance impact for disk IO when using geom gate. I
> am using the dd command to directly write to the SSD for testing
> performance. The IOPS gets cut down to 1/3 when accessing the SSD remotely
> over a geom gate network, compared to the IOPS achieved when writing to the
> SSD directly on the system where the SSD is attached.
> I thought that there might be some problems with the network, so decided to
> create a geom gate disk on the same system where the SSD is attached. This
> way the IO is not going over the network. However, in this use case I
> noticed the IOPS get cut down to 2/3 compared to IOPS achieved when writing
> to the SSD directly.
>
> So, I have an SSD and its geom gate network disk created on the same node,
> and the same IOPS test using the dd command gives 2/3 IOPS performance for
> the geom gate disk compared to running the IOPS test directly on the SSD.
>
> This points to some performance issues with the geom gate itself.
>
> Is anyone aware of any such performance issues when using geom gate network
> disks? If so, what is the reason for such IO performance drop, and are there
> any solutions or tuning parameters to rectify the performance drop?
>
> Any information regarding the same will be highly appreciated.
>
> --
> Sourish Mazumder
> Software Architect
> CloudByte Inc.
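
Just so I'm reading your numbers the right way, I'm picturing the two ends of
your comparison as roughly the following; the device name, server address,
block size, and count are placeholders I made up, not anything you posted:

  # baseline, on the box that owns the SSD
  dd if=/dev/zero of=/dev/da0 bs=4k count=262144

  # remote, on the client, against the exported provider
  ggatec create -o rw -u 0 192.168.1.10 /dev/da0
  dd if=/dev/zero of=/dev/ggate0 bs=4k count=262144

If that's not the shape of the test, please correct me, because the numbers
mean different things depending on block size and on whether you went through
a file system or straight at the raw device.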

What hardware are we talking about, specifically? Systems, NICs, SSDs. To me,
the ratios you are describing don't seem that unreasonable. You surely realize
you're asking for a lot out of a software solution and comparing it to
directly attached hardware. SSDs generally handle a LOT of IOPS. SANs in
general are not going to get you anywhere close to direct-attached performance
without everything in the chain being REALLY expensive. I see IOPS are your
main concern, but could you also post throughput numbers, to compare and
contrast? We need real numbers and real hardware makes/models to get an idea.
What block sizes have you tried with dd, and what is your baseline
direct-attached performance? Have you tried iSCSI, either the new in-kernel
stack or the old userland tools? Have you compared this to any Linux setups on
the same system? When you said "create a geom gate disk on the same system",
do you mean using ggatel or still using ggated/ggatec? It'd be useful to have
both of those situations benchmarked for more insight into the factors at
play.
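
Concretely, when I say both situations, I mean something like the following
two local setups; the unit numbers and device path are invented for the
example:

  # ggatel: GEOM gate backed by a local provider, no daemons, no TCP
  ggatel create -u 1 /dev/da0
  dd if=/dev/zero of=/dev/ggate1 bs=4k count=262144

  # ggated/ggatec over loopback: the network code path, minus the wire
  echo "127.0.0.1 RW /dev/da0" >> /etc/gg.exports
  ggated
  ggatec create -o rw -u 2 127.0.0.1 /dev/da0
  dd if=/dev/zero of=/dev/ggate2 bs=4k count=262144

If plain ggatel already eats a big chunk of your IOPS, that would point at the
cost of bouncing every request between the kernel and the userland gate
process rather than at TCP itself.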

Is there room for optimization and tweaking within the system as you described
it? Probably. My first instinct, though, is that the problem is more likely in
your expectations of ggate and TCP. I think iSCSI will get you closer to what
you expect; how much closer, I'm not sure without trying it out.
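
If you can get a 10.x box to test on, the new in-kernel target is quick to
stand up for a comparison; the IQN, addresses, and device path below are made
up for the sketch. On the target side, /etc/ctl.conf along these lines:

  portal-group pg0 {
          discovery-auth-group no-authentication
          listen 0.0.0.0
  }

  target iqn.2014-10.com.example:ssd0 {
          auth-group no-authentication
          portal-group pg0
          lun 0 {
                  path /dev/da0
          }
  }

then start the daemons and log in from the initiator:

  service ctld onestart
  # on the initiator
  service iscsid onestart
  iscsictl -A -p 192.168.1.10 -t iqn.2014-10.com.example:ssd0

and run the same dd against the da device that shows up on the initiator.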


And 9.2? That's deprecated, man. Can you use 9.3 or 10.x? :)

I realize you no doubt have real work to perform and don't have all day to
benchmark umpteen variations and permutations of what at first glance seems
like it should be a simple system. Sorry I couldn't be of more help. Maybe
someone else's intuition will bring you a better answer with less headache. I
only hope to have shed some light on the many factors at play here.


