Date: Fri, 1 Apr 2011 07:18:01 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: Pete French <petefrench@ingresso.co.uk>
Cc: trociny@freebsd.org, freebsd-fs@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org, pjd@freebsd.org
Subject: Re: Any success stories for HAST + ZFS?
Message-ID: <BANLkTi=W9uyhYACMfR=Fa4sTr31WfXV=GA@mail.gmail.com>
In-Reply-To: <E1Q5cRC-0000iz-JX@dilbert.ticketswitch.com>
References: <AANLkTi=zXX93Tzd1fYq3bJ4BEuvUf43y=94fT3rXd6j9@mail.gmail.com> <E1Q5cRC-0000iz-JX@dilbert.ticketswitch.com>
On Fri, Apr 1, 2011 at 4:22 AM, Pete French <petefrench@ingresso.co.uk> wrote:
>> The other 5% of the time, the hastd crashes occurred either when
>> importing the ZFS pool, or when running multiple parallel rsyncs to
>> the pool.  hastd was always shown as the last running process in the
>> backtrace onscreen.
>
> This is what I am seeing - did you manage to reproduce this with the patch,
> or does it fix the issue for you? I am doing more tests now, with only a
> single hast device to see if it is stable. I am OK to run without mirroring
> across hast devices for now, but wouldn't like to do so long term!

I have not been able to crash or hang the box since applying Mikolaj's patch.

I've tried the following:
- destroy pool
- create pool
- destroy hast providers
- create hast providers
- switch from master to slave via hastctl using "role secondary all"
- switch from slave to master via hastctl using "role primary all"
- switch roles via hast-carp-switch, which does one provider per second
- import/export pool

I've been running 6 parallel rsyncs for the past 48 hours, getting a
consistent 200 Mbps of transfers, with just under 2 TB of deduped data
in the pool, without any lockups.

So far, so good.

-- 
Freddie Cash
fjwcash@gmail.com
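[The one-provider-per-second role switching mentioned above could be done with a loop over `hastctl role`. The hast-carp-switch script itself is not shown in this thread, so the provider names and loop below are assumptions for illustration; this sketch only prints the commands it would run, so it is safe to execute on any box.]

```shell
#!/bin/sh
# Hypothetical dry-run sketch of a hast-carp-switch style loop:
# promote each HAST provider, one per second, as described in the mail.
# Provider names are invented; the real script is not shown in the thread.
PROVIDERS="disk0 disk1 disk2"   # assumed provider names
ROLE="primary"                  # or "secondary" when demoting

switched=""
for prov in $PROVIDERS; do
    # In production this would run:
    #   hastctl role "$ROLE" "$prov"
    #   sleep 1
    # Here we only print the command so the sketch has no side effects.
    echo "hastctl role $ROLE $prov"
    switched="$switched $prov"
done
```

[A whole-node switch, as in the tests listed above, would instead demote everything at once with `hastctl role secondary all` on the old master and `hastctl role primary all` on the new one, followed by a `zpool import` of the pool.]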