From: "Tonix (Antonio Nati)" <tonix@interazioni.it>
Date: Wed, 08 Jul 2009 10:22:17 +0200
To: freebsd-isp@freebsd.org
Subject: Re: ZFS in productions 64 bit
Message-ID: <4A545739.20405@interazioni.it>
References: <4A5209CA.4030304@interazioni.it> <4A5343B6.2090608@ibctech.ca> <225769512.20090707203135@homelink.ru>

Freddie Cash wrote:
> On Tue, Jul 7, 2009 at 9:31 AM, Dennis Yusupoff wrote:
>
>>> If there's anything missing from there that you would like to know,
>>> just ask. :)
>>>
>> First of all, I would like to say thanks for your detailed
>> "success story" report. It was great!
>> So, now some questions. ;)
>> Have you had any HDD failures, and if so, how did you repair the
>> filesystem, and so on?
>>
>
> We've had one drive fail so far, which is how we discovered that our
> initial pool setup was horribly, horribly, horribly misconfigured. We
> originally used a single raidz2 vdev spanning all 24 hard drives. NOT
> RECOMMENDED!!! Our throughput was horrible (it took almost 8 hours to
> complete a backup run of fewer than 80 servers). We spent over a week
> trying to get the new drive to resilver, but it just thrashed the
> drives.
>
> Then I found a bunch of articles online that describe how the raidz
> implementation works (it is limited to the IOps of a single drive),
> and that one should not use more than 8 or 9 drives in a raidz vdev.
> We built the secondary server using the 3-raidz-vdev layout and
> copied over as much data as we could (lost 3 months of daily backups,
> saved 2 months). Then we rebuilt the primary servers using the
> 3-raidz-vdev layout and copied the data back.
>
> Since then, we haven't had any other hard drive issues.
>
> And we now run a "zpool scrub" every weekend to check for filesystem
> inconsistencies, bad checksums, bad data, and so on. So far, no
> issues found.
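For concreteness, the kind of commands involved would look roughly like
this. The pool name, the FreeBSD da0-da23 device names, and the choice
of three 8-disk raidz2 vdevs are assumptions for illustration; the post
above only says "3-raidz vdev layout" for 24 drives.

    # one pool built from three 8-disk raidz2 vdevs instead of a
    # single 24-disk vdev (names are made up for illustration)
    zpool create storage \
        raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
        raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 da22 da23

    # swap out a failed disk in place and watch it resilver
    zpool replace storage da5
    zpool status storage

    # later, grow the pool by striping in another raidz2 vdev
    zpool add storage raidz2 da24 da25 da26 da27 da28 da29 da30 da31

    # weekly consistency check
    zpool scrub storage
    zpool status -x

With each vdev limited to roughly the IOps of one member disk, three
vdevs give the pool about three times the random-I/O capacity of the
original single-vdev layout, and a resilver only has to touch one
8-disk group.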
>
>> Why do you use software RAID, not hardware?
>>
>
> For the flexibility, and for all the integrity features of ZFS. The
> pooled storage concept is just so much nicer/easier to work with than
> hardware RAID arrays, separate LUNs, separate volume managers,
> separate partitions, etc.
>
> Need more storage? Just add another raidz vdev to the pool. You
> instantly have more storage space, and performance increases as well
> (the pool stripes across all the vdevs by default). Don't have any
> more drive bays? Then just replace the drives in the raidz vdev with
> larger ones. All the space becomes available to the pool. And *all*
> the filesystems use that pool, so they all get access to the extra
> space (no reformatting, no repartitioning, no offline expansion
> required).
>
> Add in the snapshots feature, which actually works without slowing
> down the system (as UFS snapshots do) or requiring "wasted"/reserved
> space (as LVM snapshots do), and it's hard to use hardware RAID
> anymore. :)
>
> Of course, we do still use hardware RAID controllers, for the disk
> management and alerting features, the onboard cache, the fast buses
> (PCI-X/PCIe), multi-lane cabling, hot-plug support, etc.; we just
> don't use the actual RAID features.
>
> All of our Linux servers still use hardware RAID (5 and 10), with LVM
> on top, and XFS on top of that. But it's just not as nice a storage
> stack to work with. :)
>

Is there any plan to make ZFS clustered (I mean, using iSCSI disks)?
Is there anything special to do to make it work with heartbeat?

Tonino

-- 
------------------------------------------------------------
    Inter@zioni             Interazioni di Antonio Nati
 http://www.interazioni.it        tonix@interazioni.it
------------------------------------------------------------
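For what it's worth, ZFS itself is not cluster-aware: a pool can be
imported on only one host at a time. The usual pattern with
iSCSI-backed disks is an active/passive pair, where whatever heartbeat
mechanism is in use moves the pool between the two heads. A rough
sketch, assuming both heads already see the iSCSI targets as local
disks and the pool is named "tank" (both are assumptions for
illustration, not something described in this thread):

    # ZFS pools are single-host: only one node may import the pool.

    # active node, during a clean switchover:
    zpool export tank

    # standby node, once heartbeat declares the peer dead (-f forces
    # the import because the failed peer never exported the pool):
    zpool import -f tank
    zpool status tank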