Date: Sat, 2 Jul 2011 14:43:15 -0700
From: Timothy Smith <tts@personalmis.com>
To: Mikolaj Golub <trociny@freebsd.org>
Cc: Pawel Jakub Dawidek <pjd@freebsd.org>, freebsd-stable@freebsd.org
Subject: Re: HAST + ZFS: no action on drive failure
Message-ID: <CAAemB=6xnkTAitfuXThrtXdKjXDSw6fiiZg=7AonTBOVtxWsMA@mail.gmail.com>
In-Reply-To: <8639ioadji.fsf@kopusha.home.net>
References: <BANLkTi==ctVw1HpGkw-8QG68abCg-1Vp9g@mail.gmail.com> <8639ioadji.fsf@kopusha.home.net>
Hello Mikolaj,

So, just to be clear: if a local drive fails in my pool, but the corresponding remote drive remains available, then hastd will both write to and read from the remote drive? That's really very cool!

I looked more closely at the hastd(8) man page. There is some indication of what you describe, but it is not so clear: "Read operations (BIO_READ) are handled locally unless I/O error occurs or local version of the data is not up-to-date yet (synchronization is in progress)." Perhaps this could be modified a bit, adding: "or the local disk is unavailable. In such a case, the I/O operation will be handled by the remote resource."

It does make sense, though, since HAST is based on the idea of RAID. This feature greatly increases the redundancy of the system. My boss will be very impressed, as am I!

I did notice, however, that when the pulled drive is reinserted, I need to change the associated HAST resource to init, then back to primary, to allow hastd to use it again (perhaps the same applies if the secondary drive fails?). Or will it do this on its own after some time? I did not wait more than a few minutes. In any case, this is easy enough to script, or to monitor the log and notify the admin at such a time (a rough sketch follows after the quoted message below).

Thank you so much for the help!

On Sat, Jul 2, 2011 at 8:49 AM, Mikolaj Golub <trociny@freebsd.org> wrote:
>
> On Thu, 30 Jun 2011 20:02:19 -0700 Timothy Smith wrote:
>
> TS> First posting here, hopefully I'm doing it right =)
>
> TS> I also posted this to the FreeBSD forum, but I know some HAST folks
> TS> monitor this list regularly and not so much there, so...
>
> TS> Basically, I'm testing failure scenarios with HAST/ZFS. I got two nodes,
> TS> scripted up a bunch of checks and failover actions between the nodes.
> TS> Looking good so far, though more complex than I expected. It would be
> TS> cool to post it somewhere to get some pointers/critiques, but that's
> TS> another thing.
>
> TS> Anyway, now I'm just seeing what happens when a drive fails on the
> TS> primary node. Oddly/sadly, NOTHING!
>
> TS> HAST just keeps on ticking and doesn't change the state of the failed
> TS> drive, so the zpool has no clue the drive is offline. The
> TS> /dev/hast/<resource> device remains. hastd does log some errors to the
> TS> system log like this, but nothing more:
>
> TS> messages.0:Jun 30 18:39:59 nas1 hastd[11066]: [ada6] (primary) Unable to
> TS> flush activemap to disk: Device not configured.
> TS> messages.0:Jun 30 18:39:59 nas1 hastd[11066]: [ada6] (primary) Local
> TS> request failed (Device not configured): WRITE(4736512, 512).
>
> Although the request to the local drive failed, it succeeded on the remote
> node, so no data was lost: the request was considered successful and no
> error was returned to ZFS.
>
> TS> So, I guess the question is, "Do I have to script a cron job to check
> TS> for these kinds of errors and then change the HAST resource to 'init' or
> TS> something to handle this?" Or is there some kind of hastd config setting
> TS> that I need to set? What's the SOP for this?
>
> Currently the only way to know is to monitor the logs. It is not difficult
> to hook an event for these errors in the HAST code (as is done for
> connect/disconnect, syncstart/syncdone, etc.) so one could script what to do
> when an error occurs, but I am not sure it is a good idea -- the errors may
> be generated at a high rate.
>
> TS> As something related too, when the zpool in FreeBSD does finally notice
> TS> that the drive is missing because I have manually changed the HAST
> TS> resource to INIT (so the /dev/hast/<res> is gone), my zpool (raidz2) hot
> TS> spare doesn't engage, even with "autoreplace=on". The zpool status of
> TS> the degraded pool seems to indicate that I should manually replace the
> TS> failed drive. If that's the case, it's not really a "hot spare". Does
> TS> this mean the "FMA Agent" referred to in the ZFS manual is not
> TS> implemented in FreeBSD?
>
> TS> thanks!
> TS> _______________________________________________
> TS> freebsd-stable@freebsd.org mailing list
> TS> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> TS> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
>
> --
> Mikolaj Golub
>
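P.S. For what it's worth, below is the sort of thing I have in mind. These are only rough sketches for my own test boxes; the resource name, pool name, log path, and mail recipient are placeholders from my setup, not anything defined by HAST itself.

A crude log watcher that mails the admin when hastd reports a local I/O failure (matching the messages I pasted above):

    #!/bin/sh
    # Sketch only: watch the system log and notify the admin when hastd
    # reports that a local request failed.  Log path and recipient are
    # assumptions for my setup.
    tail -F /var/log/messages | while read -r line; do
        case "${line}" in
            *hastd*"Local request failed"*|*hastd*"Unable to flush activemap"*)
                echo "${line}" | mail -s "HAST local I/O error on $(hostname)" root
                ;;
        esac
    done

And the reattach step, run by hand (or from such a script) once the drive has been physically reinserted, cycling the resource through init and back to primary as described above:

    #!/bin/sh
    # Sketch only: put a HAST resource back into service after its local
    # disk has been reinserted.  "ada6" is the resource from my test box;
    # the pool name "tank" is made up.
    res="ada6"
    pool="tank"
    hastctl role init "${res}" || exit 1
    hastctl role primary "${res}" || exit 1
    # Probably unnecessary with autoreplace=on, but harmless: tell ZFS the
    # provider is available again.
    zpool online "${pool}" "/dev/hast/${res}"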