Date:      Tue, 30 Sep 2014 21:10:07 -0400
From:      Richard Yao <ryao@gentoo.org>
To:        Wojciech Puchar <wojtek@puchar.net>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>, "mexas@bristol.ac.uk" <mexas@bristol.ac.uk>
Subject:   Re: cluster FS?
Message-ID:  <A42D6469-5A59-4AC7-9C43-690AF7AC4736@gentoo.org>
In-Reply-To: <alpine.BSF.2.00.1409301300350.864@laptop>
References:  <201409300845.s8U8jUTa079241@mech-as221.men.bris.ac.uk> <alpine.BSF.2.00.1409301300350.864@laptop>

On Sep 30, 2014, at 7:04 AM, Wojciech Puchar <wojtek@puchar.net> wrote:

>>
>> It seems to me (just from reading the handbook)
>> that none of NFS, HAST or iSCSI provide this.
>
> None of the following are filesystems at all. NFS is remote access to a filesystem; the rest present raw block devices.
>
>> My specific needs are as follows.
>> I have multiple nodes and a disk array.
>> Each node is connected by fibre to the disk array.
>> I want to have each node read/write access
>> to all disks on disk array.
>> So that if any node fails, the
>> data is still accessible
>> via the remaining nodes.
>
> As a disk array presents block devices, not files, it is not possible to have filesystem read/write access from more than one computer to the same block device.
> AFAIK there is no filesystem that can communicate between nodes to synchronize state after writes and prevent conflicts.

Linux tends to have most of the work in this area: specifically, Lustre, Ceph and Gluster. Gluster is FUSE-based, and the server will run on FreeBSD:

https://wiki.freebsd.org/GlusterFS

The client can likely run on FreeBSD too, but it might be that no one has tested it, because the FreeBSD support was done before FreeBSD supported FUSE.
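For anyone who wants to experiment, here is a minimal sketch of setting up a replicated Gluster volume. The hostnames (node1, node2), brick path and mount point are placeholders, and the exact commands may vary between GlusterFS versions and platforms, so treat this as an outline rather than a recipe:

```shell
# On one of the servers: create a 2-way replicated volume from
# bricks on two nodes, then start it. With "replica 2", every
# file is stored on both bricks, so either node can fail.
gluster volume create vol0 replica 2 node1:/data/brick1 node2:/data/brick1
gluster volume start vol0

# On a client with the FUSE-based Gluster client installed,
# mount the volume (on Linux; the FreeBSD invocation may differ):
mount -t glusterfs node1:/vol0 /mnt/gluster
```

The client only uses node1 to fetch the volume layout; after that it talks to all bricks directly, which is what gives the failover behavior asked about above.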


