Date:      Wed, 18 Mar 2015 16:12:40 +0300
From:      Emil Muratov <gpm@hotplug.ru>
To:        Alexander Motin <mav@FreeBSD.org>, freebsd-fs@freebsd.org
Subject:   Re: CAM Target over FC and UNMAP problem
Message-ID:  <550979C8.2070603@hotplug.ru>
In-Reply-To: <54F98135.5000908@FreeBSD.org>
References:  <54F88DEA.2070301@hotplug.ru> <54F8AB96.6080600@FreeBSD.org> <54F9782A.90501@hotplug.ru> <54F98135.5000908@FreeBSD.org>

On 06.03.2015 13:28, Alexander Motin wrote:
>>> There were a number of complaints about UNMAP performance on the
>>> Illumos lists too. Six months ago some fixes were committed and merged
>>> to stable/10 that substantially improved the situation. Since that time
>>> I haven't observed problems with it in my tests.
>> Have you tried UNMAP on zvols with non-SSD backends too? I'm actively
>> testing this scenario now, but this issue makes it impossible to use
>> UNMAP in production: the blocking timeouts turn into I/O failures on the
>> initiator OS.
> My primary test system is indeed all-SSD. But I do some testing on an
> HDD-based system and will do more of that for UNMAP.
>
>

Hi!
I've made some progress with this issue by using the iSCSI transport and
sniffing the initiator/target command/response traffic. I found that the
initiator requests the Block Limits VPD page (0xb0), then sends an UNMAP
command with a very long LBA range, and then times out waiting for the
response. Interestingly, the ctladm option 'ublocksize' doesn't make any
difference here, so I tried tackling the other values. I'm not sure how
this is supposed to work in the first place, but I found, if not a
solution for ZFS, then at least a workaround for CTL: apparently the
stock limits advertised in that VPD page let the initiator cover an
enormous range with a single UNMAP, which the zvol backend then takes far
too long to process.

I went through the ctl code and changed the hard-coded values for 'unmap
LBA count' and 'unmap block descriptor count' to 8 MB and 128
respectively. With these values UNMAP works like a charm! No more blocked
I/O, I/O timeouts, log errors, high disk load or anything else, only a
moderate performance drop during even very large unmaps. But that
performance drop is nothing compared to those all-blocking issues. No
problems over the Fibre Channel transport either.
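
For reference, with the 512-byte logical blocks used here that works out
to 0x4000 (16384) LBAs * 512 bytes = 8 MB per unmap descriptor, and 128
descriptors * 8 MB = 1 GB at most per single UNMAP command.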

I think it would be nice to have ctl options to tune at least these VPD
values (and maybe others), if not to change the hard-coded defaults.
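
Just to illustrate the idea, here is a rough, untested sketch of how the
two limits could become tunables inside ctl.c instead of constants. This
uses a global sysctl rather than the per-LUN ctladm option I'm really
asking for, the sysctl and variable names are made up, and it assumes the
existing kern.cam.ctl sysctl node:

    /*
     * Sketch only (not compiled): make the Block Limits UNMAP values
     * tunable.  Defaults match the patch below; names are hypothetical.
     * ctl.c already includes <sys/sysctl.h>.
     */
    static u_int ctl_max_unmap_lba_cnt = 0x4000;    /* 8 MB with 512-byte blocks */
    static u_int ctl_max_unmap_blk_cnt = 0x80;      /* 128 descriptors */

    SYSCTL_UINT(_kern_cam_ctl, OID_AUTO, max_unmap_lba_cnt, CTLFLAG_RWTUN,
        &ctl_max_unmap_lba_cnt, 0,
        "Maximum unmap LBA count reported in the Block Limits VPD page");
    SYSCTL_UINT(_kern_cam_ctl, OID_AUTO, max_unmap_blk_cnt, CTLFLAG_RWTUN,
        &ctl_max_unmap_blk_cnt, 0,
        "Maximum unmap block descriptor count reported in the Block Limits VPD page");

    /* ... and in the code path patched below: */
            if (lun->be_lun->flags & CTL_LUN_FLAG_UNMAP) {
                    scsi_ulto4b(ctl_max_unmap_lba_cnt, bl_ptr->max_unmap_lba_cnt);
                    scsi_ulto4b(ctl_max_unmap_blk_cnt, bl_ptr->max_unmap_blk_cnt);
                    ...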

Here are the options I ended up with:
ctladm create -o file=/dev/zvol/wd/zvol/zvl02 -o unmap=on -o pblocksize=8k -o ublocksize=1m

From the initiator's side, the disk's Block Limits VPD page looks like this:

$sg_vpd -p bl /dev/sdb
Block limits VPD page (SBC):
  Write same no zero (WSNZ): 0
  Maximum compare and write length: 255 blocks
  Optimal transfer length granularity: 0 blocks
  Maximum transfer length: 4294967295 blocks
  Optimal transfer length: 2048 blocks
  Maximum prefetch length: 0 blocks
  Maximum unmap LBA count: 16384
  Maximum unmap block descriptor count: 128
  Optimal unmap granularity: 2048
  Unmap granularity alignment valid: 1
  Unmap granularity alignment: 0
  Maximum write same length: 0xffffffffffffffff blocks
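
These numbers line up with the settings above: with 512-byte logical
blocks, the optimal unmap granularity of 2048 blocks is exactly the
ublocksize=1m, and the maximum unmap LBA count of 16384 is the 0x4000
from the patch below.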

A patch for ctl.c

--- ./sys/cam/ctl/ctl.c.orig    2015-03-01 19:35:53.000000000 +0300
+++ ./sys/cam/ctl/ctl.c 2015-03-17 11:05:53.000000000 +0300
@@ -10327,9 +10327,11 @@
        if (lun != NULL) {
                bs = lun->be_lun->blocksize;
                scsi_ulto4b(lun->be_lun->opttxferlen, bl_ptr->opt_txfer_len);
+               // set Block limits VPD Maximum unmap LBA count to 0x4000 (8 MB)
+               // set Block limits VPD Maximum unmap block descriptor count to 128 (1 GB combined with max LBA cnt)
                if (lun->be_lun->flags & CTL_LUN_FLAG_UNMAP) {
-                       scsi_ulto4b(0xffffffff, bl_ptr->max_unmap_lba_cnt);
-                       scsi_ulto4b(0xffffffff, bl_ptr->max_unmap_blk_cnt);
+                       scsi_ulto4b(0x4000, bl_ptr->max_unmap_lba_cnt);
+                       scsi_ulto4b(0x80, bl_ptr->max_unmap_blk_cnt);
                        if (lun->be_lun->ublockexp != 0) {
                                scsi_ulto4b((1 << lun->be_lun->ublockexp),
                                    bl_ptr->opt_unmap_grain);
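
After rebuilding with this change and reconnecting from the initiator,
the new limits show up in the sg_vpd output above, and the initiator
keeps each UNMAP within those bounds; presumably that is why the
long-running blocked requests are gone.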




