Date:      Mon, 31 Jul 2017 13:10:02 +0100
From:      Kaya Saman <kayasaman@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS and ISCSI question
Message-ID:  <e0354086-6325-be12-03fb-2eca62f56462@gmail.com>
In-Reply-To: <27221f18-07d7-64c6-289a-81e839b10d67@norma.perm.ru>
References:  <fd4b620a-e1dc-157e-a914-a8d0192c1199@gmail.com> <CADyrUxMoqjyMj=uxweJJTm8gNuARxKm2qLtZWhP1cAGMCZ=cfA@mail.gmail.com> <0f4905d2-6567-a83c-45f8-435c7c987d5b@gmail.com> <27221f18-07d7-64c6-289a-81e839b10d67@norma.perm.ru>



On 07/31/2017 11:05 AM, Eugene M. Zheganin wrote:
> Hi.
>
> On 30.07.2017 17:19, Kaya Saman wrote:
>>
>>
>> I understand that iscsi works at the "block device" level but how 
>> would one go about using ZFS on the initiator?
>>
>> The standard ZFS commands can be run:
>>
>> zpool followed by zfs FS-set on the Initiator machine
>>
>> however, it doesn't seem right to first create a ZFS pool on the 
>> Target system then create another one on the same pool on the Initiator.
>>
>>
>>
>> Would zpool import/export work or does something else need to be done 
>> to get the Initiator to create a ZFS data set? 
> A zvol is indeed a block device, but even though it comes from a parent 
> zfs pool, it doesn't contain any filesystem, zfs included. Thus the 
> kernel won't see anything on it, so you have to create a zpool on it 
> first with 'zpool create'.
>
> Eugene.

Hmm.... basically what I am trying to achieve is to be able to create a 
zpool and zfs file system on the Initiator system.

Of course on the Target one could run:

zpool create pool_1 <device_list>
zfs create -V <size> pool_1/zvol

and on the Initiator:

zpool create pool <zvol_device>
zfs create pool/fs-set


but would that be recommended, given that one would be stacking 2x 
zpools on top of the same "device list"?



As an alternative I have tried something like this:

/etc/ctl.conf

portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
    listen [::]
}

target iqn.2012-06.com.example:target0 {
    auth-group no-authentication
    portal-group pg0

    lun 0 {
#       path /dev/zvol/iscsi-tst/tank
#       size 900M
        path /data/disk1
        size 200M
    }

    lun 1 {
        path /data/disk2
        size 200M
    }

#   lun 2 {
#       path /data/disk3
#       size 500M
#   }

#   lun 3 {
#       path /data/disk4
#       size 500M
#   }
}

target iqn.2012-06.com.example:target1 {
    auth-group no-authentication
    portal-group pg0

    lun 2 {
        path /data/disk3
        size 500M
    }

    lun 3 {
        path /data/disk4
        size 500M
    }
}
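
With ctld enabled, the config then gets picked up with something along 
the lines of:

sysrc ctld_enable=YES
service ctld start        # first time
service ctld reload       # after later edits to /etc/ctl.conf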


Then on Initiator:

# iscsictl -L
Target name                          Target portal    State
iqn.2012-06.com.example:target0      <IP>    Connected: da24 da25
iqn.2012-06.com.example:target1      <IP>    Connected: da26 da27
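
(The sessions were brought up roughly like this -- the portal address is 
the <IP> shown above, and the pool is then striped across two mirrors, 
one per target:)

sysrc iscsid_enable=YES
service iscsid start
iscsictl -A -p <IP> -t iqn.2012-06.com.example:target0
iscsictl -A -p <IP> -t iqn.2012-06.com.example:target1

zpool create iscsi-tst mirror da24 da25 mirror da26 da27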


so then the zpool becomes:

# zpool list iscsi-tst
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
iscsi-tst     672M   936K   671M         -     0%     0%  1.00x ONLINE  -

# zpool status iscsi-tst
   pool: iscsi-tst
  state: ONLINE
status: One or more devices are configured to use a non-native block size.
     Expect reduced performance.
action: Replace affected devices with devices that support the
     configured block size, or migrate data to a properly configured
     pool.
   scan: none requested
config:

     NAME        STATE     READ WRITE CKSUM
     iscsi-tst   ONLINE       0     0     0
       mirror-0  ONLINE       0     0     0
         da24    ONLINE       0     0     0  block size: 8192B configured, 16384B native
         da25    ONLINE       0     0     0  block size: 8192B configured, 16384B native
       mirror-1  ONLINE       0     0     0
         da26    ONLINE       0     0     0  block size: 8192B configured, 16384B native
         da27    ONLINE       0     0     0  block size: 8192B configured, 16384B native

errors: No known data errors
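
Incidentally, one way to get rid of the block size warning (assuming the 
LUNs really do prefer 16K blocks) would be to raise the minimum ashift 
and rebuild the pool:

# make new vdevs use at least 16K (2^14) sectors
sysctl vfs.zfs.min_auto_ashift=14
zpool destroy iscsi-tst
zpool create iscsi-tst mirror da24 da25 mirror da26 da27

or alternatively to pin the block size the Target reports via a 
"blocksize" option in the ctl.conf lun sections.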


Then zfs dataset:

# zfs list iscsi-tst
NAME        USED  AVAIL  REFER  MOUNTPOINT
iscsi-tst   816K   639M   192K  /iscsi-tst

# zfs list iscsi-tst/tank
NAME             USED  AVAIL  REFER  MOUNTPOINT
iscsi-tst/tank   192K   639M   192K  /iscsi-tst/tank


So, for best redundancy ("hot swap" etc.), what would be the best 
solution? Or is there an iSCSI "Best Practice" for not getting totally 
burned if something goes wrong with an iSCSI-attached drive? <-- taking 
backups of data excluded of course :-)


Regards,

Kaya


