From: Graham Allan <allan@physics.umn.edu>
Date: Thu, 04 Apr 2013 08:28:49 -0500
To: Sami Halabi, freebsd-fs@freebsd.org
Subject: Re: ZFS in production environments
Message-ID: <515D8011.9050806@physics.umn.edu>

Like you, we watched for a long time before jumping in. We ran ZFS for
quite a few years doing it the "wrong" way - having a filesystem on a
single volume (mapped from our SAN). This was on a fairly low-capacity
data-backup server (4TB or so), and although this misses a lot of the
features of ZFS, it did give us some basic experience using it as a
filesystem.

More recently we run a couple of "bulk data" servers for compute
clusters, with between 40 and 120 drives each. We used:

  Dell R710 or R720 as head node, 48-64GB RAM, starting from FreeBSD 9.1-RC1
  multiple LSI SAS 9205-8e HBAs (should probably have looked at the -16e)
  Intel 10GbE Ethernet (old-style CX4 adapters, we are cheapskates :-)
  Supermicro SC847 E16-RJBOD1 45-bay SAS chassis (this is just the
    single-channel model)
  WD 3TB Red drives
  Intel 313 SSD log mirror
  randomly-selected L2ARC SSD (currently some kind of Samsung)

Each zfs pool is made of four 10-drive raidz2 vdevs plus the associated
SSDs, so it fits self-contained into one JBOD chassis (rough command
sketch below). This has performed really well even though we have
barely done any NFS tuning yet.

For home directories, which I've been asking advice about on this list,
to be built in the next few weeks, we will probably use dual-path
chassis and gmultipath (second sketch below), WD RE-series SAS drives,
and some variety of mirroring rather than raidz. None of these are
meant to be high-availability; we'd just swap connections to a
different head unit in case of failure.
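In case it's useful to see it spelled out, the bulk-data pool layout
above works out to roughly the following - the pool and device names
are placeholders for illustration, not our actual configuration:

  # one pool per JBOD: four 10-drive raidz2 vdevs, a mirrored log,
  # and a single L2ARC cache device
  # (da0-da39 = the 40 data drives, ada1/ada2 = log SSDs, ada3 = cache
  #  SSD - all made-up names)
  zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
    raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
    raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
    raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 \
    log mirror ada1 ada2 \
    cache ada3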
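And for the planned dual-path home directory boxes, the gmultipath side
would look something along these lines - again just a sketch with
made-up device and pool names, not a tested config:

  # load the multipath class at boot
  echo 'geom_multipath_load="YES"' >> /boot/loader.conf

  # label the two paths to one physical disk as a single multipath
  # device; da10 and da52 stand in for the two paths to the same disk
  gmultipath label -v disk01 /dev/da10 /dev/da52
  gmultipath status

  # then build the mirrored pool on top of the multipath providers
  zpool create home \
    mirror multipath/disk01 multipath/disk02 \
    mirror multipath/disk03 multipath/disk04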
Graham

On 4/4/2013 1:01 AM, Sami Halabi wrote:
> Hi,
> I registered to the list last year in order to get more involved with
> the ZFS filesystem. I must admit I haven't installed it on any
> production machine yet, only in a VM for testing that I set up
> recently.
>
> I see a lot of bugs/patches/stability issues regarding ZFS, which
> makes me think:
> 1. Is it really ready for production environments?
> 2. Has anyone installed it in production who can give some feedback
> about stability and configuration?
> 3. From all the mails with recommendations I've seen, is someone on
> the FreeBSD team collecting those recommendations and putting them
> into a single document that describes all the suggestions, rather
> than leaving them scattered across mailing lists?