Date: Fri, 17 Oct 2014 09:58:49 -0700
From: John-Mark Gurney <jmg@funkthat.com>
To: Sourish Mazumder <sourish@cloudbyte.com>
Cc: freebsd-geom@freebsd.org
Subject: Re: geom gate network
Message-ID: <20141017165849.GX1852@funkthat.com>
In-Reply-To: <CABv3qbGL99NZvQ-2Ze=rnQTjEEf_KLy1sJQHLV27e47sX2dLGw@mail.gmail.com>
References: <CABv3qbGL99NZvQ-2Ze=rnQTjEEf_KLy1sJQHLV27e47sX2dLGw@mail.gmail.com>
Sourish Mazumder wrote this message on Fri, Oct 17, 2014 at 17:34 +0530:
> I am planning to use geom gate network for accessing remote disks. I set up
> geom gate as per the FreeBSD handbook. I am using FreeBSD 9.2.
> I am noticing a heavy performance impact for disk IO when using geom gate. I
> am using the dd command to write directly to the SSD for testing
> performance. The IOPS get cut down to 1/3 when accessing the SSD remotely
> over a geom gate network, compared to the IOPS achieved when writing to the
> SSD directly on the system where the SSD is attached.
> I thought that there might be some problem with the network, so I decided to
> create a geom gate disk on the same system where the SSD is attached. This
> way the IO does not go over the network. However, in this case I noticed the
> IOPS get cut down to 2/3 compared to the IOPS achieved when writing to the
> SSD directly.
>
> So, I have an SSD and its geom gate network disk created on the same node,
> and the same IOPS test using the dd command gives 2/3 of the IOPS for the
> geom gate disk compared to running the test directly on the SSD.
>
> This points to some performance issue with geom gate itself.

Not necessarily... Yes, it's slower, but at the same time you now have to run
lots of network and TCP code in addition to the IO for each and every IO...

> Is anyone aware of any such performance issues when using geom gate network
> disks? If so, what is the reason for such an IO performance drop, and are
> there any solutions or tuning parameters to rectify it?
>
> Any information regarding the same will be highly appreciated.

I did some work on this a while back... and if you're interested in improving
performance and willing to do some testing, I can send you some patches.

There are a couple of issues that I know about. First, ggate specifically sets
the socket buffer sizes, which disables the autosizing of TCP's window. This
means that if you have a high-latency, high-bandwidth link, you'll be limited
to 128k / rtt of bandwidth.

Second, ggate isn't issuing multiple IOs at a time. This means that NCQ or
tagged queueing can't be used, whereas when running natively they can be...

-- 
John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
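To put those two issues in concrete terms (rough, assumed numbers, not
measurements from the setup described above): a window pinned at 128 KiB over
a link with a 10 ms round-trip time caps TCP at about 131072 / 0.010 s, or
roughly 13 MB/s, no matter how fast the SSD or the wire is; and with only one
IO outstanding, a round trip plus service time of about 100 us per request
caps you at roughly 10,000 IOPS even if the SSD could complete many requests
in parallel.

As a minimal sketch of the first issue at the socket level (the 128k figure is
an assumption about ggate's default, and the program is illustrative rather
than a copy of ggate's code): on FreeBSD, explicitly setting SO_SNDBUF and
SO_RCVBUF opts that socket out of the kernel's buffer autosizing, so the TCP
window can never grow past the requested size:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int s, bufsize;
	socklen_t len;

	s = socket(AF_INET, SOCK_STREAM, 0);
	if (s == -1)
		err(1, "socket");

	/*
	 * Pinning the buffers (128k assumed here) disables the kernel's
	 * send/receive buffer autosizing for this socket, so the TCP
	 * window is limited to this size for the life of the connection.
	 */
	bufsize = 128 * 1024;
	if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize,
	    sizeof(bufsize)) == -1)
		err(1, "setsockopt SO_SNDBUF");
	if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsize,
	    sizeof(bufsize)) == -1)
		err(1, "setsockopt SO_RCVBUF");

	/* Report what the kernel actually granted. */
	len = sizeof(bufsize);
	if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize, &len) == -1)
		err(1, "getsockopt SO_SNDBUF");
	printf("SO_SNDBUF pinned at %d bytes\n", bufsize);
	return (0);
}

Not setting the options at all, or sizing them to the path's bandwidth-delay
product, would let autosizing (or a larger window) do its job on
high-latency links.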