Date:        Wed, 01 Oct 2014 15:29:02 +0200
From:        InterNetX - Juergen Gotteswinter <jg@internetx.com>
To:          George Kontostanos <gkontos.mail@gmail.com>
Cc:          freebsd-fs@freebsd.org, JF-Bogaerts <JF-Bogaerts@skynet.be>
Subject:     Re: HAST with broken HDD
Message-ID:  <542C019E.2080702@internetx.com>
In-Reply-To: <CA+dUSyp4vMB_qUeqHgXNz2FiQbWzh8MjOEFYw+URcN4gUq69nw@mail.gmail.com>
References:  <542BC135.1070906@Skynet.be>
             <542BDDB3.8080805@internetx.com>
             <CA+dUSypO8xTR3sh_KSL9c9FLxbGH+bTR9-gPdcCVd+t0UgUF-g@mail.gmail.com>
             <542BF853.3040604@internetx.com>
             <CA+dUSyp4vMB_qUeqHgXNz2FiQbWzh8MjOEFYw+URcN4gUq69nw@mail.gmail.com>
On 01.10.2014 at 15:06, George Kontostanos wrote:
>
> On Wed, Oct 1, 2014 at 3:49 PM, InterNetX - Juergen Gotteswinter
> <jg@internetx.com> wrote:
>
> > On 01.10.2014 at 14:28, George Kontostanos wrote:
> > >
> > > On Wed, Oct 1, 2014 at 1:55 PM, InterNetX - Juergen Gotteswinter
> > > <jg@internetx.com> wrote:
> > >
> > > > On 01.10.2014 at 10:54, JF-Bogaerts wrote:
> > > > > Hello,
> > > > > I'm preparing a HA NAS solution using HAST.
> > > > > I'm wondering what will happen if one of the disks of the
> > > > > primary node fails or becomes erratic.
> > > > >
> > > > > Thx,
> > > > > Jean-François Bogaerts
> > > >
> > > > Nothing. If you are using ZFS on top of HAST, ZFS won't even take
> > > > notice of the disk failure.
> > > >
> > > > As long as the write operation was successful on one of the two
> > > > nodes, HAST doesn't notify the layers on top about I/O errors.
> > > >
> > > > Interesting concept, took me some time to deal with this.
> > >
> > > Are you saying that the pool will appear to be optimal even with a
> > > bad drive?
> >
> > https://forums.freebsd.org/viewtopic.php?&t=24786
>
> It appears that this is actually the case. And it is very disturbing,
> meaning that a drive failure goes unnoticed. In my case I completely
> removed the second disk on the primary node and a zpool status showed
> absolutely no problem. Scrubbing the pool began resilvering, which
> indicates that there is actually something wrong!

Right. Let's take this further and think about how ZFS works with regard
to direct hardware / disk access: there is a layer in between which
always says "hey, everything is fine". No more need for pool scrubbing,
since hastd won't tell you if anything is wrong :D

>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
>         entire pool from backup.
>    see: http://illumos.org/msg/ZFS-8000-8A
>   scan: scrub repaired 16K in 0h2m with 7 errors on Wed Oct  1 16:00:47 2014
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         tank            ONLINE       0     0     7
>           mirror-0      ONLINE       0     0    40
>             hast/disk1  ONLINE       0     0    40
>             hast/disk2  ONLINE       0     0    40
>
> Unfortunately, in this case there was data loss, and hastctl status does
> not report the missing disk!
>
> Name    Status     Role      Components
> disk1   complete   primary   /dev/ada1   hast2
> disk2   complete   primary   /dev/ada2   hast2
>
> --
> George Kontostanos
> ---
> http://www.aisecure.net
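For reference, a minimal sketch of the layout the thread assumes: two nodes,
each exporting two local disks as HAST resources, with a ZFS mirror built on
the /dev/hast/* providers on the primary. The host names (nas1, nas2), the
addresses, and the device names below are illustrative, not taken from the
thread.

# /etc/hast.conf (identical on both nodes)
resource disk1 {
        on nas1 {
                local /dev/ada1
                remote 10.0.0.2
        }
        on nas2 {
                local /dev/ada1
                remote 10.0.0.1
        }
}

resource disk2 {
        on nas1 {
                local /dev/ada2
                remote 10.0.0.2
        }
        on nas2 {
                local /dev/ada2
                remote 10.0.0.1
        }
}

# on the primary, after "hastctl create"/"hastctl role primary" for both:
# zpool create tank mirror hast/disk1 hast/disk2

Because hastd (per Juergen's description) does not pass I/O errors up as long
as the write succeeded on one of the two nodes, ZFS on top of /dev/hast/*
never sees the failure of a local component disk, and, as George's output
shows, hastctl keeps reporting the provider as "complete". A workaround is to
watch the component disks directly on each node. The script below is only an
illustration, assuming the components are ada1 and ada2 as in George's output
and that sysutils/smartmontools is installed.

#!/bin/sh
# Check the local HAST component disks directly; neither ZFS on the
# hast providers nor "hastctl status" will flag a dead local member.
for disk in ada1 ada2; do
        # camcontrol exits non-zero when the device has dropped off the bus
        if ! camcontrol identify "$disk" > /dev/null 2>&1; then
                logger -p daemon.crit "HAST component $disk is gone or not responding"
                continue
        fi
        # overall SMART health verdict (sysutils/smartmontools)
        if ! smartctl -H "/dev/$disk" | grep -q PASSED; then
                logger -p daemon.crit "HAST component $disk failed the SMART health check"
        fi
done

Run from cron (or a monitoring agent) on both nodes, this catches the kind of
failure that the ZFS layer above HAST cannot see.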