Date:      Fri, 16 Dec 2005 15:51:07 +0100
From:      Michael Schuh <michael.schuh@gmail.com>
To:        Eric Anderson <anderson@centtech.com>
Cc:        Ivan Voras <ivoras@fer.hr>, freebsd-geom@freebsd.org
Subject:   Re: Questions about geom-gate and RAID1/10 and CARP
Message-ID:  <1dbad3150512160651nc440efer@mail.gmail.com>
In-Reply-To: <1dbad3150512160537y4a944f81u@mail.gmail.com>
References:  <1dbad3150512090509l2ab08e03k@mail.gmail.com> <43998C95.3050505@fer.hr> <1dbad3150512130421i5278d693g@mail.gmail.com> <439ED00E.7050701@centtech.com> <1dbad3150512150355r576b1b0j@mail.gmail.com> <43A18CAB.6020705@centtech.com> <1dbad3150512160537y4a944f81u@mail.gmail.com>

Hello Eric,
Hello List,

2005/12/16, Michael Schuh <michael.schuh@gmail.com>:
> > There are others, like lustre, gfs, polyserve, etc, however none of them
> > work in FreeBSD at this point.  A few people (including myself) have
> > started a project to port gfs to FreeBSD (gfs4fbsd project on
> > sourceforge).
> Oh, I think this is a very good idea...... I will watch this project.
> Another good option would be GPFS from IBM.......
> >
> >
> >
> > I'm wondering if you couldn't do this with NFS and a few pieces
> > hacked together.  I haven't thought it through completely, but it
> > seems like you could make the active writer an NFS server that also
> > mounts its own NFS share read-write, with the NFS service bound to a
> > virtual interface, or at least to the one that 'moves' with your
> > failover.  The other machines would mount that NFS server's export
> > read-only.  When it fails over, the node taking over would run a
> > script to begin serving that export read-write to all, and its own
> > client would keep its connection, now over the new virtual
> > interface.  You'd also have to set up the ggate pieces so that the
> > original 'master' disk was being mirrored.  When the failover
> > occurred, you would quickly mount your local mirrored disk
> > read-write, ignoring the 'unclean' message, begin a background fsck,
> > and then start the NFS server on that mount point.  You would
> > probably also have to fail the original drive in the mirror to
> > effectively 'fence' that node, so it can't make disk changes at the
> > same time as the new master does.
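Just to check that I understand the sequence, the script on the node
taking over might look roughly like this.  This is only a sketch: the
names ggate0, gm0, /export and the network are invented, and it assumes
a gmirror built from a local disk plus a ggate consumer, with a CARP
address that has just moved to this node:

    #!/bin/sh
    # Rough failover sketch for the node taking over (hypothetical
    # names: ggate0 = ggate consumer of the old master's disk,
    # gm0 = the mirror, /export = the NFS-served mount point).

    # 1. 'Fence' the old master: tear down the ggate connection to it
    #    and drop the now-missing component from the mirror.
    ggatec destroy -f -u 0
    gmirror forget gm0

    # 2. Mount the local half of the mirror read-write despite the
    #    'unclean' flag, then fsck it in the background (UFS with
    #    soft updates assumed).
    mount -f -o rw /dev/mirror/gm0 /export
    fsck -B -t ufs /dev/mirror/gm0

    # 3. Start serving the export read-write; the clients keep talking
    #    to the CARP address, which now lives on this node.
    echo '/export -maproot=root -network 192.168.0.0 -mask 255.255.255.0' >> /etc/exports
    /etc/rc.d/mountd reload
    /etc/rc.d/nfsd start

Whether the rc.d reload/start calls are enough to pick up the new
export line on the fly is something I would still have to verify.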
I have had a quick look through the Coda docs, and it seems to me
that Coda is exactly the solution for this problem, but I'm not
really sure yet. I will keep reading the Coda documentation.......

>
> Oh yes, that would also be possible, but I would rather steer around
> the script/cron hell.  It isn't really hell, but it is a serious
> source of errors.....
> My earlier idea was to union-mount two separately mounted NFS
> servers (see the sketch below), much like your suggestion, but then
> comes the next problem......unionfs.... and later the sync.......
>
> I hope a good solution for real HA is around the corner....
>
> michael
>
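For completeness, the union idea I mentioned above would have looked
roughly like this (server names and paths are invented; mount_unionfs
stacks the first directory on top of the second, so writes land only
in the upper layer, which is exactly where the sync trouble would
begin):

    # Two separately mounted NFS exports, joined with unionfs
    # (hypothetical servers and paths).
    mount_nfs serverA:/export /mnt/upper
    mount_nfs serverB:/export /mnt/lower
    # /mnt/upper is stacked above /mnt/lower; the union is visible at
    # /mnt/lower and all writes go to the upper layer only.
    mount_unionfs /mnt/upper /mnt/lower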

Thanks to all

regards

michael


