Date:      Mon, 20 Oct 2014 19:09:14 +0530
From:      Sourish Mazumder <sourish@cloudbyte.com>
To:        Sourish Mazumder <sourish@cloudbyte.com>, freebsd-geom@freebsd.org
Subject:   Re: geom gate network
Message-ID:  <CABv3qbEVwKMn3dKbjtx=ASVo5Jaqfk3_jxvFXUUb+=N5gAMTqA@mail.gmail.com>
In-Reply-To: <20141017165849.GX1852@funkthat.com>
References:  <CABv3qbGL99NZvQ-2Ze=rnQTjEEf_KLy1sJQHLV27e47sX2dLGw@mail.gmail.com> <20141017165849.GX1852@funkthat.com>

I am willing to test out the patches on my setup. Please send me the
patches.

On Fri, Oct 17, 2014 at 10:28 PM, John-Mark Gurney <jmg@funkthat.com> wrote:

> Sourish Mazumder wrote this message on Fri, Oct 17, 2014 at 17:34 +0530:
> > I am planning to use geom gate network for accessing remote disks. I set
> > up geom gate as per the freebsd handbook. I am using freebsd 9.2.
> > I am noticing heavy performance impact for disk IO when using geom gate.
> > I am using the dd command to directly write to the SSD for testing
> > performance. The IOPS gets cut down to 1/3 when accessing the SSD
> > remotely over a geom gate network, compared to the IOPS achieved when
> > writing to the SSD directly on the system where the SSD is attached.
> > I thought that there might be some problems with the network, so decided
> > to create a geom gate disk on the same system where the SSD is attached.
> > This way the IO is not going over the network. However, in this use case
> > I noticed the IOPS get cut down to 2/3 compared to IOPS achieved when
> > writing to the SSD directly.
> >
> > So, I have a SSD and its geom gate network disk created on the same node
> > and the same IOPS test using the dd command gives 2/3 IOPS performance
> > for the geom gate disk compared to running the IOPS test directly on
> > the SSD.
> >
> > This points to some performance issues with the geom gate itself.
>
> Not necessarily...  Yes, it's slower, but at the same time, you now have
> to run lots of network and TCP code in addition to the IO for each and
> every IO...
>
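The effect described here — extra network and TCP work on every IO — can be sketched with a toy latency model. All latency figures below are assumptions chosen for illustration, not measurements from this setup; the point is only that at queue depth 1, any per-IO overhead subtracts directly from IOPS, and an overhead of half the device's service time is enough to reproduce the reported 2/3 ratio.

```python
# Toy model: synchronous (queue depth 1) IOPS under added per-IO overhead.
# All latency figures are illustrative assumptions, not measurements.

def iops(service_time_s: float, overhead_s: float = 0.0) -> float:
    """IOPS with one outstanding IO: each IO pays service time plus overhead."""
    return 1.0 / (service_time_s + overhead_s)

ssd_latency = 100e-6     # assumed 100 us per small write directly on the SSD
ggate_overhead = 50e-6   # assumed extra ggate + loopback TCP cost per IO

direct = iops(ssd_latency)                     # ~10,000 IOPS
via_ggate = iops(ssd_latency, ggate_overhead)  # ~6,667 IOPS

print(f"direct: {direct:.0f} IOPS, via ggate: {via_ggate:.0f} IOPS "
      f"({via_ggate / direct:.0%} of direct)")
```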
> > Is anyone aware of any such performance issues when using geom gate
> > network disks? If so, what is the reason for such IO performance drop
> > and are there any solutions or tuning parameters to rectify the
> > performance drop?
> >
> > Any information regarding the same will be highly appreciated.
>
> I did some work on this a while back... and if you're interested in
> improving performance and willing to do some testing... I can send you
> some patches..
>
> There are a couple of issues that I know about..
>
> First, ggate specifically sets the buffer sizes, which disables the
> autosizing of TCP's window.. This means that if you have a high-latency,
> high-bandwidth link, you'll be limited to 128k / rtt of bandwidth.
>
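The 128k / rtt ceiling mentioned above can be made concrete: with a fixed window, at most one window's worth of data is in flight per round trip, so the window size divided by the RTT bounds throughput regardless of link speed. The RTT values below are arbitrary examples, not measurements:

```python
# Throughput ceiling when the TCP window is pinned at a fixed size:
# at most one window of data can be in flight per round trip.

WINDOW = 128 * 1024  # 128 KiB, the fixed buffer size mentioned above

def max_throughput_mbps(rtt_s: float, window_bytes: int = WINDOW) -> float:
    """Upper bound on throughput in MB/s: window / round-trip time."""
    return window_bytes / rtt_s / 1e6

for rtt_ms in (0.1, 1, 10, 100):  # example RTTs, from LAN to long-haul WAN
    print(f"rtt {rtt_ms:>5} ms -> at most "
          f"{max_throughput_mbps(rtt_ms / 1e3):8.2f} MB/s")
```

So on a 10 ms link the gate cannot exceed about 13 MB/s no matter how fast the disk or network is, which is why the autosizing matters.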
> Second is that ggate isn't issuing multiple IOs at a time.  This means
> that any NCQ or tagging isn't able to be used, whereas when running
> natively they can be used...
>
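The second point can be sketched the same way: with only one IO in flight, the device's command queueing (NCQ/tagging) sits idle, while a native workload can keep several commands outstanding. A rough Little's-law model — all parameters here are assumptions for illustration — shows how IOPS scales with outstanding IOs until the device's ceiling is hit:

```python
# Rough Little's-law view of queue depth: IOPS ~= outstanding_ios / latency,
# capped by what the device can sustain. All numbers are illustrative.

def model_iops(latency_s: float, queue_depth: int,
               device_max_iops: float) -> float:
    """Throughput with queue_depth IOs in flight, capped at the device limit."""
    return min(queue_depth / latency_s, device_max_iops)

LATENCY = 100e-6      # assumed 100 us per-IO latency
DEVICE_MAX = 40000.0  # assumed SSD ceiling with its queue fully used

for qd in (1, 2, 4, 8):
    print(f"queue depth {qd}: ~{model_iops(LATENCY, qd, DEVICE_MAX):.0f} IOPS")
```

Under these assumed numbers a single-outstanding-IO client reaches only a quarter of the device's ceiling, which is the class of loss the patches aim at.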
> --
>   John-Mark Gurney                              Voice: +1 415 225 5579
>
>      "All that I will do, has been done, All that I have, has not."
>



-- 
Sourish Mazumder
9986309755


