From: Andrew Gallatin <gallatin@cs.duke.edu>
Date: Wed, 9 Jan 2008 10:36:34 -0500 (EST)
To: "Alexandre \"Sunny\" Kovalenko"
Cc: askbill@conducive.net, freebsd-current@freebsd.org
Subject: Re: ZFS honesty

"Alexandre \"Sunny\" Kovalenko" writes:
 >
 > On Wed, 2008-01-09 at 08:23 -0500, Andrew Gallatin wrote:
 > > 韓家標 Bill Hacker writes:
 > > > > OTOH that's all GPFS is.
 > > >
 > > > Far more features than that - 'robust', 'fault tolerant',
 > > > 'Disaster Recovery' ... all the usual buzzwords.
 > > >
 > > > And nothing prevents using 'cluster' tools on a single box.
 > > > Not storage-wise anyway.
 > >
 > > Having had the misfortune of being involved in a cluster which used
 > > GPFS, I can attest that GPFS is anything but "robust" and "fault
 > > tolerant" in my experience. Granted, this was a few years ago, and
 > > things may have improved, but that one horrible experience was
 > > sufficient to make me avoid GPFS for life.
 >
 > Would you mind sharing your experience, maybe in a private e-mail? I
 > am especially interested in the platform you used (as in AIX or
 > Linux) and the underlying storage configuration (as in directly
 > attached vs. separate file system servers).
 >
 > I am running a few small AIX clusters in the lab using GPFS 3.1 over
 > iSCSI and so far have been fairly pleased with it.

Linux, with GPFS 1.x over ethernet. If there was even the slightest
load on the ethernet network and a GPFS heartbeat message got lost,
the entire FS would die (see the sketch in the P.S. below). That did
not meet my definition of robust :(. Note that this was nearly 4 years
ago, so it has likely gotten better.

 > However, the OP's point was that ZFS has inherent cluster abilities,
 > of which I have found no information whatsoever.
Indeed, but I do remember hearing the Lustre/ZFS rumors.

Drew
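
P.S. To make the heartbeat fragility concrete: the failure mode above
is what you get from a monitor that declares a peer dead after a
single missed heartbeat, rather than tolerating a few consecutive
misses. A minimal sketch in C follows; this is purely illustrative,
not GPFS's actual logic (which is proprietary), and the names and the
MISS_LIMIT threshold are invented for the example.

    #include <stdbool.h>

    #define MISS_LIMIT 3    /* consecutive misses before giving up */

    struct hb_state {
            int misses;     /* intervals in a row with no heartbeat */
    };

    /*
     * Called once per heartbeat interval; "received" is true if a
     * heartbeat arrived from the peer during that interval.
     */
    static bool
    peer_is_dead(struct hb_state *st, bool received)
    {
            if (received) {
                    st->misses = 0;
                    return (false);
            }
            st->misses++;
            /*
             * With MISS_LIMIT set to 1, a single packet dropped on a
             * loaded network declares the peer dead -- the behavior
             * described above.  A small tolerance rides out transient
             * loss at the cost of slower detection of a real failure.
             */
            return (st->misses >= MISS_LIMIT);
    }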