From: Julien Cigar <julien@perdition.city>
To: Ben RUBSON
Cc: freebsd-fs@freebsd.org
Date: Sun, 3 Jul 2016 23:47:23 +0200
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160703214723.GF41276@mordor.lan>
In-Reply-To: <20160703192945.GE41276@mordor.lan>
References: <20160630144546.GB99997@mordor.lan> <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com> <20160630153747.GB5695@mordor.lan> <63C07474-BDD5-42AA-BF4A-85A0E04D3CC2@gmail.com> <678321AB-A9F7-4890-A8C7-E20DFDC69137@gmail.com> <20160630185701.GD5695@mordor.lan> <6035AB85-8E62-4F0A-9FA8-125B31A7A387@gmail.com> <20160703192945.GE41276@mordor.lan>
User-Agent: Mutt/1.6.1 (2016-04-27)

On Sun, Jul 03, 2016 at 09:29:46PM +0200, Julien Cigar wrote:
> On Sat, Jul 02, 2016 at 05:04:22PM +0200, Ben RUBSON wrote:
> > 
> > > On 30 Jun 2016, at 20:57, Julien Cigar wrote:
> > > 
> > > On Thu, Jun 30, 2016 at 11:32:17AM -0500, Chris Watson wrote:
> > >> 
> > >> 
> > >> Sent from my iPhone 5
> > >> 
> > >>> 
> > >>>> 
> > >>>> Yes that's another option, so a zpool with two mirrors (local +
> > >>>> exported iSCSI) ?
> > >>> 
> > >>> Yes, you would then have a real-time replication solution (as HAST), compared to ZFS send/receive which is not.
> > >>> Depends on what you need :)
> > >>> 
> > >>>> 
> > >>>>> ZFS would then know as soon as a disk is failing.
> > >> 
> > >> So as an aside, but related, for those watching this from the peanut gallery and for the benefit of the OP: perhaps those that run with this setup might give some best practices and tips here in this thread on making this a good reliable setup. I can see someone reading this thread, tossing two crappy Ethernet cards in a box, and then complaining it doesn't work well.
> > > 
> > > It would be more than welcome indeed..! I have the feeling that HAST
> > > isn't that much used (but maybe I am wrong) and it's difficult to find
> > > information on its reliability and concrete long-term use cases...
> > > 
> > > Also the pros vs cons of HAST vs iSCSI.
> > 
> > I made further testing today.
> > 
> > # serverA, serverB :
> > kern.iscsi.ping_timeout=5
> > kern.iscsi.iscsid_timeout=5
> > kern.iscsi.login_timeout=5
> > kern.iscsi.fail_on_disconnection=1
> > 
> > # Preparation :
> > - serverB : let's make 2 iSCSI targets : rem3, rem4.
> > - serverB : let's start ctld.
> > - serverA : let's create a mirror pool made of 4 disks : loc1, loc2, rem3, rem4.
> > - serverA : pool is healthy.
> > 
> > # Test 1 :
> > - serverA : put a lot of data into the pool ;
> > - serverB : stop ctld ;
> > - serverA : put a lot of data into the pool ;
> > - serverB : start ctld ;
> > - serverA : make all pool disks online : it works, pool is healthy.
> > 
> > # Test 2 :
> > - serverA : put a lot of data into the pool ;
> > - serverA : export the pool ;
> > - serverB : import the pool : it does not work, as ctld locks the disks! Good news, nice protection (both servers won't be able to access the same disks at the same time).
> > - serverB : stop ctld ;
> > - serverB : import the pool : it works, 2 disks missing ;
> > - serverA : let's make 2 iSCSI targets : rem1, rem2 ;
> > - serverB : make all pool disks online : it works, pool is healthy.
> > 
> > # Test 3 :
> > - serverA : put a lot of data into the pool ;
> > - serverB : stop ctld ;
> > - serverA : put a lot of data into the pool ;
> > - serverB : import the pool : it works, 2 disks missing ;
> > - serverA : let's make 2 iSCSI targets : rem1, rem2 ;
> > - serverB : make all pool disks online : it works, pool is healthy, but of course data written at step 3 is lost.
> > 
> > # Test 4 :
> > - serverA : put a lot of data into the pool ;
> > - serverB : stop ctld ;
> > - serverA : put a lot of data into the pool ;
> > - serverA : export the pool ;
> > - serverA : let's make 2 iSCSI targets : rem1, rem2 ;
> > - serverB : import the pool : it works, pool is healthy, data written at step 3 is here.
> > 
> > # Test 5 :
> > - serverA : rsync a huge remote repo into the pool in the background ;
> > - serverB : stop ctld ;
> > - serverA : 2 disks missing, but rsync still runs flawlessly ;
> > - serverB : start ctld ;
> > - serverA : make all pool disks online : it works, pool is healthy.
> > - serverB : ifconfig down ;
> > - serverA : 2 disks missing, but rsync still runs flawlessly ;
> > - serverB : ifconfig up ;
> > - serverA : make all pool disks online : it works, pool is healthy.
> > - serverB : power reset!
> > - serverA : 2 disks missing, but rsync still runs flawlessly ;
> > - serverB : let's wait for the server to be up ;
> > - serverA : make all pool disks online : it works, pool is healthy.
> > 
> > Quite happy with these tests actually :)
> 
> Thank you very much for those quick tests! I'll start my own ones
> tomorrow, but based on your preliminary results it *seems* that the
> ZFS + iSCSI combination could be a potential candidate for what I'd
> like to do..!
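As a side note, for my own tests I'll probably want to spot the "2 disks
missing" state from Test 5 automatically. A minimal sketch of parsing
`zpool status` output for unavailable mirror members (the sample output
below is made up for illustration, not captured from a real pool, and
the helper name is mine):

```shell
#!/bin/sh
# count_missing: count vdevs reported UNAVAIL or REMOVED in the text of
# a `zpool status` run. In production you would feed it the output of
# `zpool status storage`; here a hypothetical sample is used instead.
count_missing() {
    # $1: zpool status text; prints the number of unavailable vdevs
    printf '%s\n' "$1" | awk '$2 == "UNAVAIL" || $2 == "REMOVED" { n++ } END { print n+0 }'
}

# Illustrative sample: both iSCSI members of the mirror have dropped out.
sample='  pool: storage
 state: DEGRADED
config:
        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            loc1    ONLINE       0     0     0
            loc2    ONLINE       0     0     0
            rem3    UNAVAIL      0     0     0
            rem4    UNAVAIL      0     0     0'

count_missing "$sample"
# prints: 2
```

Something like this could run from cron (or a monitoring agent) and
alert as soon as the count is non-zero, without waiting for a manual
`zpool status` check.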
another question from a performance point of view: imagine that you
create a single mirror zpool, something like:

$> zpool create storage mirror loc1 loc2 rem1 rem2

(where rem1 and rem2 are iSCSI disks)

I guess that ZFS will split the read requests across all devices in
order to maximize performance... which could lead to the contrary of
what is expected when iSCSI disks are involved, no? Are there some
sysctl params which could prevent this unexpected behavior?

> 
> > 
> > Ben
> > 
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
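P.S.: regarding the read distribution question, FreeBSD's vdev_mirror
code seems to have load-balancing sysctls under vfs.zfs.vdev.mirror.*
that bias member selection by rotation status and seek distance. I
haven't tested whether they can actually keep reads off the iSCSI
members, so treat the names below as pointers to verify on your release,
not as a recipe:

```
# Inspect the mirror member-selection knobs:
sysctl vfs.zfs.vdev.mirror
# Knobs I'd expect to see (to be verified):
#   vfs.zfs.vdev.mirror.rotating_inc
#   vfs.zfs.vdev.mirror.rotating_seek_inc
#   vfs.zfs.vdev.mirror.rotating_seek_offset
#   vfs.zfs.vdev.mirror.non_rotating_inc
#   vfs.zfs.vdev.mirror.non_rotating_seek_inc
```

If they behave as the names suggest, raising the "inc" (load increment)
of the iSCSI-backed members should make ZFS prefer the local disks for
reads while still mirroring all writes; that is speculation on my part
until tested.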