Date:      Tue, 30 Oct 2012 20:34:49 -0500
From:      dweimer <dweimer@dweimer.net>
To:        <freebsd-questions@freebsd.org>
Subject:   Re: Freebsd iSCSI client ?
Message-ID:  <9872161b9b8eda6eb5ea925c66326f72@dweimer.net>
In-Reply-To: <64340a4a169d59fac776572bf88dc076@dweimer.net>
References:  <20121029132939.9540.qmail@joyce.lan> <da5f486482b2223ae969989003f087a3@dweimer.net> <64340a4a169d59fac776572bf88dc076@dweimer.net>

On 2012-10-29 17:08, dweimer wrote:
> On 2012-10-29 13:51, dweimer wrote:
>> On 2012-10-29 08:29, John Levine wrote:
>>> I'm trying to set up a FreeBSD image under VMware, but I need more
>>> disk space than the VMware hosts offer.  So the guy who runs the
>>> hosting place suggests getting a 1U disk server and using iSCSI over
>>> gigabit Ethernet so I can build ZFS volumes from the iSCSI disks.
>>>
>>> Poking around, the reports say that FreeBSD is a pretty good iSCSI
>>> server in such forms as FreeNAS, but a lousy iSCSI client, with the
>>> first problem being that kludges are required to get iSCSI volumes
>>> mounted early enough in the boot process for ZFS to find them.
>>> Is this still the case in FreeBSD 9?
>>>
>>> I'd rather not use NFS, since the remote disks have MySQL
>>> databases, and MySQL and NFS are not friends.
>>>
>>> An alternative is to mount the iSCSI under VMware, so ZFS sees
>>> them as normal disks.  Anyone tried that?
>>>
>>> TIA,
>>> John
>>
>> I don't have an answer for you at the moment, but I can tell you
>> that I just started a new server build this morning with the intent
>> of using it as an iSCSI client and running ZFS on the drive.  In my
>> case, however, it's going to be a file server that doesn't have very
>> much heavy I/O, with the intention of using compression on the ZFS
>> file set.  In my case a script run after start up to mount the drive
>> would work if the automatic mount fails.  I will let you know what I
>> find out; the server is in the middle of a buildworld to get it
>> updated to the p4 release.
>>
>> Yes, you can mount it as a drive through VMware and use ZFS just
>> fine; I have done a lot of recent tests using ZFS as the boot volume
>> under VMware.  This new server will be my first production server to
>> use what I have learned from those tests, with its system drive
>> mounted through VMware (ESX 4.1) and booting from ZFS.  Once the
>> buildworld install is complete I will add a 150G ZFS data set on our
>> HP Lefthand Networks SAN, run some tests, and let you know the
>> outcome.
>
> Looks like I have some learning to do.  The system is up and running
> and talks to the iSCSI volume just fine; however, as you mentioned,
> the big problem is mounting the volume at start up.  I can't find any
> options at all to launch iscontrol at boot.  I found an example
> /usr/local/etc/rc.d/ script from a mail forum a while back, but it
> was set up to use UFS volumes and a secondary fstab file for the
> iSCSI volumes.  I don't see any reason one can't be made to use ZFS,
> with the volumes set with option canmount=noauto and an rc.conf
> variable telling the script which volumes to mount at boot and
> unmount at shutdown.
> However, I have some reading to do before I get started, as I haven't
> tried to create an rc.d script, and need to get an understanding of
> how to properly create one that follows all the proper guidelines and
> allows itself to be a requirement for other scripts.  I don't see any
> reason it wouldn't work successfully to host a MySQL database, as the
> OP was looking for, or a Samba share, as I intend to use it, as long
> as their start up can be set to require the iSCSI start up to run
> first.  If anyone has already done something similar and has some
> information to pass on, that would be great.  I probably won't have
> time to even start researching this till Thursday this week.
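
For the start-up ordering mentioned above, the rcorder(8) headers of the
dependent services are the place to express it.  A minimal sketch,
assuming the initiator script's PROVIDE line is "iscsi" ("mysql" below
is just an example service name):

```shell
#!/bin/sh
# Sketch: rcorder(8) headers for a service that must start only after
# the iSCSI volumes are mounted.  "iscsi" matches the PROVIDE line of
# the initiator script; "mysql" here is only an example name.
#
# PROVIDE: mysql
# REQUIRE: iscsi
# KEYWORD: shutdown
```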

Well, I got stuck waiting at work today for a replacement array
controller, and got some time to work on this.  It still needs some
work, and I am not sure it's the best way to handle it, as it does an
export of the zpool at shutdown and an import at start up.  I also
don't know at this point about other services waiting on it.  But I
have verified that a server reboot cleanly dismounts the volumes and
remounts them.

Things to note: the # BEFORE: line below was copied from the old
mailing list thread I found; BEFORE: is a standard rcorder(8) keyword,
so it should be honored.  The ZFS data set I was using was set with
option canmount=noauto.  The zpool import/export and zfs mount/umount
commands are just typed in there; they need to be broken up and pulled
from an rc.conf variable option instead.
#!/bin/sh

# PROVIDE: iscsi
# REQUIRE: NETWORKING
# BEFORE: mountcritremote
# KEYWORD: shutdown

. /etc/rc.subr

name="iscsi"
start_cmd="iscsi_start"
stop_cmd="iscsi_stop"
rcvar="iscsi_enable"
required_modules="iscsi_initiator:iscsi"

iscsi_start() {
   # Start the initiator, then give the new device a moment to
   # attach before importing the pool and mounting the data set.
   ${iscsi_command} -c ${iscsi_config} -n ${iscsi_nickname}
   sleep 1
   zpool import ziscsi
   zfs mount ziscsi/storage
}

iscsi_stop() {
   # Unmount and export before dropping the iSCSI session so the
   # pool is closed cleanly.
   zfs umount ziscsi/storage
   zpool export ziscsi
   killall -HUP ${iscsi_command}
}

load_rc_config $name

: ${iscsi_enable="NO"}
: ${iscsi_command="iscontrol"}
: ${iscsi_config="/etc/iscsi.conf"}
: ${iscsi_nickname=""}

run_rc_command "$1"
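
The hard-coded zpool commands above could eventually be driven from
rc.conf as mentioned.  A minimal sketch, assuming a hypothetical
space-separated iscsi_pools variable (e.g. iscsi_pools="ziscsi" in
rc.conf):

```shell
#!/bin/sh
# Sketch: import/export a list of pools named in a hypothetical
# rc.conf variable instead of hard-coding them in the rc.d script.
#   iscsi_pools="ziscsi"    # space-separated list of pool names

iscsi_import_pools() {
    for _pool in ${iscsi_pools}; do
        # canmount=noauto data sets still need an explicit zfs mount
        # afterwards; importing the pool is the first step either way.
        zpool import "${_pool}" || echo "iscsi: import of ${_pool} failed" >&2
    done
}

iscsi_export_pools() {
    for _pool in ${iscsi_pools}; do
        zpool export "${_pool}"
    done
}
```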



Other files information used:
rc.conf:
...
# Enable iscsi
iscsi_enable="YES"
iscsi_command="iscontrol"
iscsi_nickname="LHMG002"
iscsi_config="/etc/iscsi.conf"
...

iscsi.conf:
# Globals
port = 3260
InitiatorName = iqn.2005-01.il.ac.huji.cs:testvm.local

LHMG002 {
         TargetAddress   = 10.31.120.102:3260,1
         TargetName      = iqn.2003-10.com.lefthandnetworks:lhmg002:1203:testvm-storage
}
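
To sanity-check the connection outside the rc.d framework, the same
config and nickname can be exercised by hand; camcontrol(8) should then
show the LUN as a new da(4) device (device numbering will vary):

```shell
# Bring up the iSCSI session manually with the same config/nickname
# the rc.d script uses, then list the attached SCSI devices.
iscontrol -c /etc/iscsi.conf -n LHMG002
camcontrol devlist
```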


-- 
Thanks,
    Dean E. Weimer
    http://www.dweimer.net/


