From owner-freebsd-scsi@FreeBSD.ORG Tue Sep 25 03:09:53 2012
Date: Tue, 25 Sep 2012 03:09:53 +0000
From: John
To: FreeBSD iSCSI <freebsd-scsi@FreeBSD.ORG>
Subject: Performance question - istgt with dual 10g data links to linux client
Message-ID: <20120925030953.GA84605@FreeBSD.org>
List-Id: SCSI subsystem

Hi Folks,

I have a BSD 9.1 ZFS server running the latest istgt, connected to a
RHEL 6.1 client. Regardless of how I configure the systems, I cannot
seem to exceed 1 GB/s of throughput.

If I create a 25G /dev/md0, export it via istgt (no MPIO here), format
it with default xfs values, and place a 20G file on it, I get the
following:

    dd if=/usr2/20g of=/dev/null bs=512K
    40960+0 records in
    40960+0 records out
    21474836480 bytes (21 GB) copied, 21.4256 s, 1.0 GB/s

Running the same /dev/md0 with MPIO, dual paths on 10G cards, with
rr_min_io set anywhere from 1 to 100 on the Linux side:

    [PortalGroup2]
      Comment "Two networks - one port"
      Portal DA1 10.59.6.14:5020   # 10G mtu 9000
      Portal DA2 10.60.6.14:5020   # 10G mtu 9000
      Comment "END: PortalGroup2"

    mpatha (33000000051ed39a4) dm-0 FreeBSD,USE136EXHF_iSCSI
    size=25G features='0' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=1 status=active
      |- 11:0:0:0 sdd 8:48 active ready running
      `- 12:0:0:0 sde 8:64 active ready running

    dd if=/usr2/20g of=/dev/null bs=1M
    20480+0 records in
    20480+0 records out
    21474836480 bytes (21 GB) copied, 20.0076 s, 1.1 GB/s

I can see the traffic spread evenly across both interfaces; I simply
can't seem to get the parallelization factor up.
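For reference, the relevant multipath section on the RHEL side is shaped roughly like this (a sketch of /etc/multipath.conf, not a verbatim copy of the poster's file; the vendor/product strings are taken from the multipath -ll output above, the rest are illustrative values):

```
# /etc/multipath.conf (sketch -- illustrative, not the poster's exact config)
devices {
    device {
        vendor                "FreeBSD"            # as reported by multipath -ll
        product               "USE136EXHF_iSCSI"
        path_grouping_policy  multibus             # both paths in one active group
        path_selector         "round-robin 0"
        rr_min_io             1                    # I/Os sent down a path before switching
    }
}
```

On RHEL 6 the kernel round-robin selector only rotates paths every rr_min_io requests, so a single sequential stream still issues its I/Os one after another; spreading them across paths does not by itself add queue depth.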
Higher rr_min_io values have no effect. I realize I haven't included
the entire configuration; I'm hoping someone can offer some high-level
thoughts. I do need to maximize single-process large-file I/O.

Thanks,
John

ps: My next thought is to set up a non-Unix box and see if I get the
same results, to point at either client-side or server-side issues.
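Another quick probe before bringing in another box: read through both paths at once and see whether two streams together beat one stream's ~1 GB/s. The sketch below uses a local stand-in file (/tmp/probe.img, a hypothetical name) so it can be run anywhere; on the real client the two readers would point at the raw paths sdd and sde from the multipath -ll output above, each with iflag=direct so the page cache doesn't mask the wire speed:

```shell
#!/bin/sh
# Parallel-read pattern (sketch). /tmp/probe.img stands in for the LUN;
# on the real system each dd would read its own raw path (sdd, sde).
f=/tmp/probe.img
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # 64 MiB stand-in "LUN"

# Two concurrent readers, each covering half of the device/file.
dd if="$f" of=/dev/null bs=1M count=32 2>/dev/null &
dd if="$f" of=/dev/null bs=1M skip=32 2>/dev/null &
wait
```

If the two readers together scale well past the single-stream number, the paths and the target are fine and the bottleneck is per-stream on the initiator; if they don't, the limit is server-side or on the wire.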