From owner-freebsd-fs@freebsd.org Thu Jun 30 15:30:32 2016
Date: Thu, 30 Jun 2016 17:30:26 +0200
From: Julien Cigar <julien@perdition.city>
To: InterNetX - Juergen Gotteswinter
Cc: freebsd-fs@freebsd.org
Subject: Re: HAST + ZFS + NFS + CARP
Message-ID: <20160630153026.GA5695@mordor.lan>
In-Reply-To: <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com>
References: <20160630144546.GB99997@mordor.lan> <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com>

On Thu, Jun 30, 2016 at 05:14:08PM +0200, InterNetX - Juergen Gotteswinter wrote:
> 
> On 30.06.2016 at 16:45, Julien Cigar wrote:
> > Hello,
> > 
> > I'm still in the process of setting up redundant low-cost storage for
> > our (small, ~30 people) team here.
> > 
> > I have read quite a lot of articles/documentation/etc. and I plan to
> > use HAST with ZFS for the storage, CARP for the failover and the
> > "good old NFS" to mount the shares on the clients.
> > 
> > The hardware is 2x HP ProLiant DL20 boxes with 2 dedicated disks for
> > the shared storage.
> > 
> > Assuming the following configuration:
> > - MASTER is the active node and BACKUP is the standby node.
> > - two disks in each machine: ada0 and ada1.
> > - two interfaces in each machine: em0 and em1.
> > - em0 is the primary interface (with the CARP setup).
> > - em1 is dedicated to the HAST traffic (crossover cable).
> > - FreeBSD is properly installed on each machine.
> > - a HAST resource "disk0" for ada0p2.
> > - a HAST resource "disk1" for ada1p2.
> > - a "zpool create zhast mirror /dev/hast/disk0 /dev/hast/disk1" is
> >   created on MASTER.
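> > 
> > For completeness, the hast.conf I have in mind looks roughly like
> > this (a sketch only: the node names "master"/"backup" and the em1
> > point-to-point addresses 172.16.0.x are placeholders):
> > 
> >   resource disk0 {
> >           on master {
> >                   local /dev/ada0p2
> >                   remote 172.16.0.2
> >           }
> >           on backup {
> >                   local /dev/ada0p2
> >                   remote 172.16.0.1
> >           }
> >   }
> >   # resource disk1 is identical, with /dev/ada1p2
> > 
> > followed by something like:
> > 
> >   (both)   $> hastctl create disk0; hastctl create disk1
> >   (both)   $> service hastd onestart
> >   (MASTER) $> hastctl role primary disk0; hastctl role primary disk1
> >   (BACKUP) $> hastctl role secondary disk0; hastctl role secondary disk1
> >   (MASTER) $> zpool create zhast mirror /dev/hast/disk0 /dev/hast/disk1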
> > 
> > A couple of questions I am still wondering about:
> > - If a disk dies on the MASTER I guess that zpool will not see it and
> >   will transparently use the one on BACKUP through the HAST resource..
> 
> that's right, as long as writes on $anything have been successful hast
> is happy and won't start whining
> 
> > is it a problem?
> 
> imho yes, at least from a management point of view
> 
> > could this lead to some corruption?
> 
> probably, i never heard of anyone who has used that for a long time in
> production
> 
> > At this stage common sense would be to replace the disk quickly, but
> > imagine the worst case scenario where ada1 on MASTER dies: zpool will
> > not see it and will transparently use the one from the BACKUP node
> > (through the "disk1" HAST resource); later ada0 on MASTER dies, zpool
> > will not see it and will transparently use the one from the BACKUP
> > node (through the "disk0" HAST resource). At this point the two disks
> > on MASTER are broken but the pool is still considered healthy... What
> > if after that we unplug the em0 network cable on BACKUP? Storage is
> > down..
> > - Under heavy I/O the MASTER box suddenly dies (for some reason);
> >   thanks to CARP the BACKUP node will switch from standby -> active
> >   and execute the failover script, which does a "hastctl role
> >   primary" for the resources and a zpool import (I sketch such a
> >   script at the end of this mail). I wondered if there are any
> >   situations where the pool couldn't be imported (= data corruption)?
> >   For example, what if the pool hasn't been exported on the MASTER
> >   before it dies?
> > - Is it a problem if the NFS daemons are started at boot on the
> >   standby node, or should they only be started in the failover
> >   script? What about stale files and active connections on the
> >   clients?
> 
> sometimes stale mounts recover, sometimes not, sometimes clients even
> need reboots
> 
> > - A catastrophic power failure occurs and MASTER and BACKUP are
> >   suddenly powered down. Later the power returns; is it possible that
> >   some problem occurs (split-brain scenario?) regarding the order in
> >   which the two machines boot up?
> 
> sure, you need an exact procedure to recover
> 
> best practice should be to keep everything down after boot
> 
> > - Other things I have not thought of?
> > 
> > Thanks!
> > Julien
> 
> imho:
> 
> leave hast where it is, go for zfs replication. will save your butt
> sooner or later if you avoid this fragile combination

Do you mean a $> zfs snapshot followed by a
$> zfs send ... | ssh zfs receive ... ?
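i.e. something along these lines, I guess (a rough sketch; the
zhast/shares dataset, the "backup" host and the snapshot names are
placeholders):

  $> zfs snapshot zhast/shares@rep1
  $> zfs send zhast/shares@rep1 | ssh backup zfs receive -F zhast/shares
  (...later, incrementally between the last two snapshots...)
  $> zfs snapshot zhast/shares@rep2
  $> zfs send -i rep1 zhast/shares@rep2 | ssh backup zfs receive zhast/shares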
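For reference, the failover script mentioned above would be roughly
the following (an untested sketch, meant to be triggered from devd(8)
on CARP state changes; resource and pool names as in my first mail):

  #!/bin/sh
  # carp_failover.sh MASTER|BACKUP
  case "$1" in
  MASTER)
          hastctl role primary disk0
          hastctl role primary disk1
          sleep 2   # give hastd a moment to create /dev/hast/*
          zpool import -f zhast
          service mountd onestart
          service nfsd onestart
          ;;
  BACKUP)
          service nfsd onestop
          service mountd onestop
          zpool export -f zhast
          hastctl role secondary disk0
          hastctl role secondary disk1
          ;;
  esac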
-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.