Date:      Thu, 19 Mar 2015 23:21:16 +0300
From:      "Emil Muratov" <gpm@hotplug.ru>
To:        freebsd-fs@freebsd.org, "Alexander Motin" <mav@freebsd.org>
Subject:   Re: CAM Target over FC and UNMAP problem
Message-ID:  <op.xvrhhqo4aevz08@ghost-pc.home.lan>
In-Reply-To: <550AD6ED.50201@FreeBSD.org>
References:  <54F88DEA.2070301@hotplug.ru> <54F8AB96.6080600@FreeBSD.org> <54F9782A.90501@hotplug.ru> <54F98135.5000908@FreeBSD.org> <550979C8.2070603@hotplug.ru> <550AD6ED.50201@FreeBSD.org>

Alexander Motin <mav@freebsd.org> wrote in his message of Thu, 19 Mar 2015
17:02:21 +0300:

>> I looked through the ctl code and changed the hardcoded values for
>> 'unmap LBA count' and 'unmap block descr count' to 8 MB and 128.
>> With these values UNMAP works like a charm! No more IO blocking, IO
>> timeouts, log errors, or high disk loads, only a moderate performance
>> drop during even very large unmaps. But this performance drop is
>> nothing compared with those all-blocking issues. No problems over
>> Fibre Channel transport either.

> In my present understanding of the SBC-4 specification, also implemented
> in the FreeBSD initiator, MAXIMUM UNMAP LBA COUNT is measured not per
> segment, but per command.

Hmm.. my understanding of the SBC specs is close to 0 :) Just checked it,
and it looks like you are right - it must be the total block count per
command. My first assumption was based on the sg_unmap(8) notes from
sg3_utils: NUM is defined as a value constrained by MAXIMUM UNMAP LBA
COUNT, yet more than one LBA,NUM pair is allowed. Not sure how it is
implemented in the sg_unmap code itself. Anyway, based on the wrong
assumption I was lucky to hit the jackpot :)
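
For anyone who wants to reproduce the multi-descriptor case: sg_unmap
takes comma-separated lists for --lba and --num, one block descriptor per
pair. A minimal sketch (the device path and values are placeholders, and
I assume 512-byte blocks):

   # one UNMAP command carrying two block descriptors,
   # each covering 16384 blocks (8 MB)
   sg_unmap --lba=0,1048576 --num=16384,16384 /dev/sdX

If MAXIMUM UNMAP LBA COUNT is indeed per command, a target may reject such
a command as soon as the NUMs together exceed the limit, even when each
descriptor alone fits.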

> From that perspective, limiting it to 8 MB per UNMAP
> command is IMHO overkill. Could you try to increase it to 2097152,
> which is 1 GB, while decreasing MAXIMUM UNMAP BLOCK DESCRIPTOR COUNT
> from 128 to 64? Will it give acceptable results?

Just did it; it was as bad as with the default values - same IO blocking,
errors and timeouts. I'll try to test some more values between 1 GB and
8 MB :)
I have no idea what would be a sound basis for choosing these values
without understanding ZFS internals.
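
For reference, assuming 512-byte blocks: 2097152 blocks * 512 bytes =
1073741824 bytes = 1 GiB, while my earlier 8 MB limit corresponds to just
16384 blocks.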

We have a T10-compliant Hitachi HUS-VM FC storage array with a set of
options for different initiators. A standard T10-compliant setup gives
these values in the Block Limits VPD page:

Block limits VPD page (SBC):
   Write same no zero (WSNZ): 0
   Maximum compare and write length: 1 blocks
   Optimal transfer length granularity: 128 blocks
   Maximum transfer length: 0 blocks
   Optimal transfer length: 86016 blocks
   Maximum prefetch length: 0 blocks
   Maximum unmap LBA count: 4294967295
   Maximum unmap block descriptor count: 1
   Optimal unmap granularity: 86016
   Unmap granularity alignment valid: 0
   Unmap granularity alignment: 0
   Maximum write same length: 0x80000 blocks
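
For reference, this listing is in the format printed by sg_vpd from
sg3_utils; something like the following should reproduce it (placeholder
device path):

   sg_vpd --page=bl /dev/sdX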

Very odd values (86016 blocks); no idea how this works inside the HUS-VM,
but large unmaps are not a problem there.

BTW, MSDN mentions that Windows Server 2012 implements only SBC-3 UNMAP,
not unmap through WRITE SAME. I will test whether unmap via sg_write_same
behaves as badly on a ZFS vol with the default (large) write_same length.
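
A sketch of that test (placeholder device path and sizes; --unmap makes
sg_write_same issue WRITE SAME with the UNMAP bit set and a zero-filled
data block, and I again assume 512-byte blocks):

   # WRITE SAME with the UNMAP bit over 65536 blocks (32 MB)
   sg_write_same --unmap --lba=0 --num=65536 /dev/sdX

If this stalls the same way UNMAP did, that would suggest the blocking is
in how the ZFS volume handles large deallocations rather than in the
UNMAP path itself.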


