Date: Mon, 04 Apr 2011 17:10:38 +0200
From: Denny Schierz <linuxmail@4lin.net>
To: freebsd-stable@freebsd.org
Subject: 8.2: ISCSI: ISTGT a bit slow, I think
Message-ID: <1301929838.26698.213.camel@pcdenny>
hi,
I'm testing the maximum throughput over iSCSI, but I reach only
~50MB/s (dd if=/dev/zero of=/dev/da13 bs=1M count=2048) over a
crossover 1Gb/s cable to a raw disk. Both machines run FreeBSD
8.2-STABLE with istgt and the built-in iSCSI initiator.
With ZFS as the target we lose roughly another 8-10MB/s.
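For what it's worth, the write test can be swept over a few block sizes, since bs=1M isn't always the sweet spot. This is just a sketch: TARGET defaults to a scratch file so a dry run doesn't touch a disk; point it at the raw iSCSI disk (e.g. /dev/da13) for the real measurement.

```shell
#!/bin/sh
# Write-throughput sweep over several block sizes. TARGET defaults to a
# scratch file; set TARGET=/dev/da13 (or your iSCSI disk) for a real run.
TARGET=${TARGET:-/tmp/iscsi-bench.img}
for bs in 64K 256K 1M; do
    printf 'bs=%s: ' "$bs"
    # dd prints its throughput summary on stderr; keep only the last line.
    dd if=/dev/zero of="$TARGET" bs="$bs" count=16 2>&1 | tail -n 1
done
rm -f /tmp/iscsi-bench.img
```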
istgt.conf
======================
[global]
Timeout 30
NopInInterval 20
DiscoveryAuthMethod Auto
MaxSessions 32
MaxConnections 8
#FirstBurstLength 65536
MaxBurstLength 1048576
MaxRecvDataSegmentLength 262144
# maximum number of sending R2T in each connection
# actual number is limited to QueueDepth and MaxCmdSN and ExpCmdSN
# 0=disabled, 1-256=improves large writing
MaxR2T 32
# iSCSI initial parameters negotiate with initiators
# NOTE: incorrect values might crash
MaxOutstandingR2T 16
DefaultTime2Wait 2
DefaultTime2Retain 60
[....]
[LogicalUnit4]
Comment "40GB Disk (iqn.san.foo:40gb)"
TargetName 40gb
TargetAlias "Data 40GB"
Mapping PortalGroup1 InitiatorGroup1
#AuthMethod Auto
#AuthGroup AuthGroup2
UnitType Disk
UnitInquiry "FreeBSD" "iSCSI Disk" "01234" "10000004"
QueueDepth 32
LUN0 Storage /failover/bigPool/disk40gb 40960MB
[LogicalUnit5]
Comment "2TB Disk (iqn.san.foo:2tb)"
TargetName 2tb
TargetAlias "Data 2TB"
Mapping PortalGroup1 InitiatorGroup1
#AuthMethod Auto
#AuthGroup AuthGroup2
UnitType Disk
UnitInquiry "FreeBSD" "iSCSI Disk" "01235" "10000005"
QueueDepth 32
LUN0 Storage /dev/da12 200480MB
=====================
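One thing I've been wondering about in the config above (just an assumption on my part, not something I've verified): FirstBurstLength is still commented out, so the negotiated default of 65536 applies. Letting it match MaxRecvDataSegmentLength might help large unsolicited writes; the values below are illustrative only:

```
# Assumption: match FirstBurstLength to MaxRecvDataSegmentLength so a
# first unsolicited burst fills whole segments; illustrative values only.
FirstBurstLength 262144
MaxRecvDataSegmentLength 262144
MaxBurstLength 1048576
```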
The raw disks themselves reach 150-200MB/s locally, with or without ZFS
(raidz2). The target machine has 4GB RAM and four 3GHz Xeon CPUs.
I thought we should reach 80-100MB/s, so istgt or the initiator is a
bit slow, I think.
I've just now tested with an Ubuntu 10.10 initiator and got roughly
70MB/s - or, without ZFS, a constant 80MB/s - over a regular switched
network.
Is this the limit of what we can reach, because of TCP and iSCSI
overhead? One thing we can't do is enable jumbo frames: our Cisco
Catalyst switches (WS-X4515) don't support them.
I did test jumbo frames (9k) over the crossover link, but the
performance was worse - roughly 20MB/s.
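A sanity check before trusting any jumbo-frame numbers (interface name and address below are just examples): confirm the path really passes 9000-byte frames with a don't-fragment ping. If it doesn't, fragmentation or a path-MTU problem would explain a collapse like the 20MB/s above.

```shell
#!/bin/sh
# Example MTU verification (commands commented out; they need root and
# real interfaces - em0 and 192.168.0.2 are placeholders):
#   ifconfig em0 mtu 9000          # set on BOTH hosts first
#   ping -D -s 8972 192.168.0.2    # -D = don't fragment (FreeBSD ping)
# 8972 bytes of ICMP payload plus 20 (IP) + 8 (ICMP) header bytes just
# fits a 9000-byte MTU; if this ping fails, jumbo frames aren't really
# passing end to end.
echo $((9000 - 20 - 8))
```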
So, does anyone have some hints for me? :-)
cu denny
