Date:      Wed, 22 Jan 2014 23:02:08 +0100
From:      Joar Jegleim <joar.jegleim@gmail.com>
To:        Ronald Klop <ronald-lists@klop.ws>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: hast and zfs
Message-ID:  <CAFfb-hrdvk7m0b6E0A=h+Y2CsvuTaFwzUBuX7OVvXWZEAA9XAw@mail.gmail.com>
In-Reply-To: <op.w93dorv8kndu52@212-182-167-131.ip.telfort.nl>
References:  <CAFfb-hq4R-f7yNCbAGS9X8wW4FcYY8+_jyxsqRsPxcnkYtEA7g@mail.gmail.com> <op.w93dorv8kndu52@212-182-167-131.ip.telfort.nl>

Thanks for your reply, Ronald. Checking the commit logs seems like a
good idea.
The main reason for 9.1 is that I've taken over more than a hundred
installations running everything from 7.0 up to 9.1, and pretty much
every minor release in between. My initial goal was to upgrade
everything to 9.1, which may turn out to be too ambitious (old in-house
apps, maintainers long gone).
I didn't mention in my first post that I used an SSD L2ARC on the
primary node that wasn't part of HAST. I switched over to the HAST
setup last night and it ran for about 3 hours without problems (without
the L2ARC). I'm not sure whether the SSD L2ARC is the issue, or whether
it was simply because of lower IOPS (I stopped all publishing of files
while testing). It also ran for about 2 hours during office hours
today, and hopefully I can let it run for longer than that tomorrow.
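
If it does turn out to be the L2ARC, cache devices can be added to and
removed from a live pool, which makes for a cheap A/B test (pool and
device names below are hypothetical):

    zpool add tank cache gpt/l2arc0    # bring the SSD back in as L2ARC
    zpool iostat -v tank 5             # watch per-vdev I/O, cache included
    zpool remove tank gpt/l2arc0       # take it out again if the hangs return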

Regards
Joar
On 22 Jan 2014 15:11, "Ronald Klop" <ronald-lists@klop.ws> wrote:

> On Tue, 21 Jan 2014 15:46:09 +0100, Joar Jegleim <joar.jegleim@gmail.com>
> wrote:
>
>> Hi list!
>>
>> I've set up HAST with ZFS on 9.1-RELEASE-p10.
>> I've got an LSI HBA connected to an external HP MSA SAS enclosure
>> that has 20 disks in it, same setup on both nodes.
>> I've created HAST disk devices such as /dev/hast/disk1a,
>> /dev/hast/disk1b, etc. (see attached hast.conf),
>> and built a zpool mirror across the 20 disks.
>>
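
For readers without the attachment: a rough sketch of one such
resource, with hypothetical hostnames, addresses and backing disks
(the real hast.conf presumably carries one stanza like this per disk):

    resource disk1a {
            on node1 {
                    local /dev/da1
                    remote 192.168.100.2
            }
            on node2 {
                    local /dev/da1
                    remote 192.168.100.1
            }
    }

    # then, with node1 as primary for every resource:
    zpool create tank mirror /dev/hast/disk1a /dev/hast/disk1b
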
>> While setting this up and testing it, everything looked pretty good;
>> I tried numerous failovers and so on.
>> But when I switched this setup over to production, it worked for
>> about half an hour until the primary node got a ZFS hang, where both
>> zpool list and hastctl status hung completely.
>>
>> There was no output in dmesg or /var/log/messages that said anything
>> related to this (?).
>>
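
When zpool and hastctl wedge like that, one standard way to see where
they are stuck is to pull the kernel stack traces of the hung
processes (nothing here is setup-specific except the PIDs):

    ps axl | grep -e zpool -e hastctl   # find the PIDs and their wchan
    procstat -kk <pid>                  # kernel stacks of the stuck threads
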
>> The server serves some 3,000,000 JPEGs for a site; I see anything
>> from 50 to just above 1000 IOPS, though the average is below 100.
>> It's completely random which pictures are being fetched at any time.
>>
>> Could there be some sysctls to tune for this setup, maybe?
>> Anybody using HAST and ZFS in production got any tips?
>>
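
As a starting point, the ZFS knobs live under the vfs.zfs sysctl tree,
and the biggest one, the ARC cap, is a loader tunable; the value below
is purely illustrative and would need sizing to the host:

    sysctl vfs.zfs                      # inspect the current ZFS tunables

    # /boot/loader.conf
    vfs.zfs.arc_max="8G"                # cap the ARC; hypothetical value
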
> You haven't gotten any reply yet. I can't be of much help either, but
> is there a reason you don't use a newer FreeBSD version? (9.1 is from
> December 2012.) It can be useful to read through the commits to hast
> that happened in the meantime.
>
> Ronald.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>


