Date: Fri, 17 Oct 2014 17:34:49 +0530
From: Sourish Mazumder <sourish@cloudbyte.com>
To: freebsd-geom@freebsd.org
Subject: geom gate network
Message-ID: <CABv3qbGL99NZvQ-2Ze=rnQTjEEf_KLy1sJQHLV27e47sX2dLGw@mail.gmail.com>
Hi,

I am planning to use GEOM gate (ggate) networking for accessing remote disks. I set up geom gate as described in the FreeBSD Handbook, on FreeBSD 9.2, and I am seeing a heavy disk I/O performance impact when going through geom gate.

For the performance test I use dd to write directly to the SSD. When the SSD is accessed remotely over the geom gate network device, IOPS drop to about 1/3 of what I get when writing to the SSD directly on the system where it is attached.

I suspected the network might be the problem, so I also created a geom gate disk on the same system where the SSD is attached, so the I/O does not cross the physical network. Even in that case IOPS drop to about 2/3 of the direct numbers: with the SSD and its geom gate network disk on the same node, the same dd test against the ggate device achieves only 2/3 of the IOPS measured on the SSD itself. This points to a performance issue in geom gate itself. (A rough summary of the commands I used is at the end of this mail.)

Is anyone aware of such performance issues when using geom gate network disks? If so, what causes the I/O performance drop, and are there any fixes or tuning parameters that would rectify it?

Any information regarding this will be highly appreciated.

--
Sourish Mazumder
Software Architect
CloudByte Inc.
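For reference, the setup and test sequence looked roughly like the following. The device name (/dev/da0), the client network, and the dd block size/count are placeholders for illustration rather than my exact values, and the same-node case is shown by pointing ggatec at localhost:

  # On the node with the SSD: export the device and start the geom gate daemon.
  # /etc/gg.exports contains a line such as:
  #   192.168.1.0/24 RW /dev/da0
  ggated

  # On the client: attach the exported device; it appears as /dev/ggate0.
  ggatec create -o rw <server-ip> /dev/da0

  # dd-based IOPS comparison (bs/count are illustrative only):
  dd if=/dev/zero of=/dev/da0    bs=4k count=262144   # directly on the SSD node
  dd if=/dev/zero of=/dev/ggate0 bs=4k count=262144   # through the ggate device

  # Same-node case: ggatec connected to the local machine, so the I/O
  # never leaves the host.
  ggatec create -o rw 127.0.0.1 /dev/da0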