Date:      Wed, 9 Jan 2008 08:23:55 -0500 (EST)
From:      Andrew Gallatin <gallatin@cs.duke.edu>
To:        韓家標 Bill Hacker <askbill@conducive.net>
Cc:        freebsd-current@freebsd.org
Subject:   Re: ZFS honesty
Message-ID:  <18308.51970.859622.363321@grasshopper.cs.duke.edu>
In-Reply-To: <47818E97.8030601@conducive.net>
References:  <fll63b$j1c$1@ger.gmane.org> <20080106141157.I105@fledge.watson.org> <flr0np$euj$2@ger.gmane.org> <47810DE3.3050106@FreeBSD.org> <flr3iq$of7$1@ger.gmane.org> <478119AB.8050906@FreeBSD.org> <47814160.4050401@samsco.org> <4781541D.6070500@conducive.net> <flrlib$j29$1@ger.gmane.org> <47815D29.2000509@conducive.net> <1199664196.899.10.camel@RabbitsDen> <47818E97.8030601@conducive.net>


韓家標 Bill Hacker writes:
 > >  OTOH that's all GPFS is.
 > 
 > Far more features than that - 'robust', 'fault tolerant', 'Disaster Recovery' 
 > ... all the usual buzzwords.
 > 
 > And nothing prevents using 'cluster' tools on a single box. Not storage-wise anyway.

Having had the misfortune of being involved in a cluster that used
GPFS, I can attest that GPFS was anything but "robust" and "fault
tolerant" in my experience.  Granted, this was a few years ago, and
things may have improved, but that one horrible experience was
sufficient to make me avoid GPFS for life.

Drew


