From: Andrew Gallatin <gallatin@cs.duke.edu>
Date: Wed, 9 Jan 2008 08:23:55 -0500 (EST)
To: 韓家標 Bill Hacker
Cc: freebsd-current@freebsd.org
Subject: Re: ZFS honesty
Message-ID: <18308.51970.859622.363321@grasshopper.cs.duke.edu>
In-Reply-To: <47818E97.8030601@conducive.net>
References: <20080106141157.I105@fledge.watson.org>
 <47810DE3.3050106@FreeBSD.org> <478119AB.8050906@FreeBSD.org>
 <47814160.4050401@samsco.org> <4781541D.6070500@conducive.net>
 <47815D29.2000509@conducive.net> <1199664196.899.10.camel@RabbitsDen>
 <47818E97.8030601@conducive.net>

韓家標 Bill Hacker writes:
 > > OTOH that's all GPFS is.
 >
 > Far more features than that - 'robust', 'fault tolerant',
 > 'Disaster Recovery' ... all the usual buzzwords.
 >
 > And nothing prevents using 'cluster' tools on a single box. Not
 > storage-wise anyway.

Having had the misfortune of being involved in a cluster which used
GPFS, I can attest that GPFS is anything but "robust" and "fault
tolerant" in my experience.

Granted this was a few years ago, and things may have improved, but
that one horrible experience was sufficient to make me avoid GPFS
for life.

Drew