From: Stephen Hocking (Senior Programmer, PGS Tensor Perth) <shocking@ariadne.prth.tensor.pgs.com>
To: hackers@FreeBSD.ORG
Subject: Clusters, Distributed File Systems and the like.
Date: Thu, 18 Jun 1998 11:07:21 +0800
Message-Id: <199806180307.LAA27243@ariadne.tensor.pgs.com>

I'm wondering if anyone has done some work on a distributed file system in
which each node is responsible for part of the filesystem and can see the
other nodes' parts, with individual files able to span more than one node. I
think Greg Lehey's work (vinum) could be used as part of a solution.

The reason I ask is that I'm working for a geophysics data processing
company, and we have all these 32- and 64-node IBM SP2 boxes floating about.
Each node runs its own copy of AIX and has a couple of 9GB disks: one holds
the OS, and the other contributes to the CFS (Common File System), which is
shared between all the nodes in a system. It struck me as an interesting
problem that has probably been solved a number of times already.

I'm in the process of porting our geophysical software to FreeBSD; it
currently uses PVM and will soon be using MPI to distribute the processes
among the nodes. I'm not expecting the performance to be within cooee of the
SP2s, but it's an interesting exercise all the same.


	Stephen
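
P.S. To make the idea of a file spanning nodes concrete, here's a rough
sketch of the sort of stripe-to-node mapping I have in mind. The stripe
size, node count, and all the names are made up for illustration - they
aren't taken from vinum or from the SP2's CFS:

    /*
     * Sketch: a logical file is cut into fixed-size stripes and each
     * stripe is assigned to a node round-robin, vinum-style.  Stripe
     * size, node count and struct names are illustrative assumptions.
     */
    #include <stdio.h>

    #define STRIPE_SIZE (64 * 1024)   /* bytes per stripe (assumed)       */
    #define NNODES      32            /* nodes holding parts of the file  */

    struct stripe_loc {
            int  node;        /* which node stores this part of the file   */
            long offset;      /* byte offset within that node's local disk */
    };

    /* Map a byte offset in the logical file to a node + local offset. */
    static struct stripe_loc
    locate(long file_offset)
    {
            long stripe = file_offset / STRIPE_SIZE;
            struct stripe_loc loc;

            loc.node   = (int)(stripe % NNODES);
            loc.offset = (stripe / NNODES) * STRIPE_SIZE +
                file_offset % STRIPE_SIZE;
            return loc;
    }

    int
    main(void)
    {
            long off = 10 * 1024 * 1024;          /* 10MB into the file */
            struct stripe_loc loc = locate(off);

            printf("offset %ld -> node %d, local offset %ld\n",
                off, loc.node, loc.offset);
            return 0;
    }

Reads and writes on the shared filesystem would then turn into requests sent
to whichever node locate() points at for the range in question.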
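
As for the MPI side, the shape of the thing is roughly as follows - a
bare-bones sketch assuming a stock MPI implementation such as MPICH, with
the actual processing replaced by a print so it stands alone:

    /*
     * Bare-bones MPI sketch of farming work out to the nodes, just to
     * show the shape of what the PVM code would turn into.  The work
     * itself (128 "traces") is a stand-in.
     */
    #include <stdio.h>
    #include <mpi.h>

    int
    main(int argc, char **argv)
    {
            int rank, size, i;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Each node takes every size-th piece of work, round-robin. */
            for (i = rank; i < 128; i += size)
                    printf("node %d handling trace %d\n", rank, i);

            MPI_Finalize();
            return 0;
    }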