Date:      Wed, 08 Jul 2009 10:22:17 +0200
From:      "Tonix (Antonio Nati)" <tonix@interazioni.it>
To:        freebsd-isp@freebsd.org
Subject:   Re: ZFS in productions 64 bit
Message-ID:  <4A545739.20405@interazioni.it>
In-Reply-To: <b269bc570907071111o3b850b84q203bead00ad72597@mail.gmail.com>
References:  <4A5209CA.4030304@interazioni.it>	<b269bc570907061352u1389d231k8ba35cc5de2d83cb@mail.gmail.com>	<4A5343B6.2090608@ibctech.ca>	<b269bc570907070851m720e0be6ief726027d4d20994@mail.gmail.com>	<225769512.20090707203135@homelink.ru> <b269bc570907071111o3b850b84q203bead00ad72597@mail.gmail.com>

Freddie Cash wrote:
> On Tue, Jul 7, 2009 at 9:31 AM, Dennis Yusupoff <dyr@homelink.ru> wrote:
>
>   
>>> If there's anything missing from there that you would like to know, just
>>> ask.  :)
>>>       
>> First of all, I would like to say thanks for your detailed "success-story"
>> report. It was great!
>> So, now a question. ;)
>> Have you had any HDD failures, and if so, how did you repair the filesystem,
>> and so on?
>>
>>     
>
>
> We've had one drive fail so far, which is how we discovered that our initial
> pool setup was horribly, horribly, horribly misconfigured.  We originally
> used a single raidz2 vdev spanning all 24 hard drives.  NOT RECOMMENDED!!!  Our
> throughput was horrible (it took almost 8 hours to complete a backup run of
> fewer than 80 servers).  We spent over a week trying to get the new drive to
> resilver, but it just thrashed the drives.
>
> Then I found a bunch of articles online that describe how the raidz
> implementation works (each raidz vdev is limited to the IOPS of a single
> drive), and that one should not use more than 8 or 9 drives in a raidz
> vdev.  We built the secondary server using the 3-raidz-vdev layout and
> copied over as much data as we could (we lost 3 months of daily backups and
> saved 2 months).  Then we rebuilt the primary servers using the same
> 3-raidz-vdev layout and copied the data back.
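
For reference, a pool laid out the way Freddie describes can be created with a
single command.  This is only an illustrative sketch: the pool name "storage"
and the da0..da23 device names are placeholders, not taken from his setup.

    # Three 8-drive raidz vdevs in one pool, instead of a single
    # 24-drive raidz2; ZFS stripes across the three vdevs.
    zpool create storage \
        raidz da0  da1  da2  da3  da4  da5  da6  da7  \
        raidz da8  da9  da10 da11 da12 da13 da14 da15 \
        raidz da16 da17 da18 da19 da20 da21 da22 da23
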
>
> Since then, we haven't had any other hard drive issues.
>
> We also now run a "zpool scrub" every weekend to check for filesystem
> inconsistencies, bad checksums, bad data, and so on.  So far, no issues have
> been found.
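
A weekly scrub is easy to automate from root's crontab; the schedule and pool
name below are placeholders (a minimal sketch, not Freddie's actual setup):

    # Scrub the pool every Sunday at 03:00, then review the result later
    # with "zpool status" (or "zpool status -x" to show only unhealthy pools).
    0 3 * * 0   /sbin/zpool scrub storage
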
>
>
>
>   
>> Why do you use software RAID, not hardware?
>>
>>     
>
> For the flexibility, and all the integrity features of ZFS.  The pooled
> storage concept is just so much nicer/easier to work with than hardware RAID
> arrays, separate LUNs, separate volume managers, separate partitions, etc.
>
> Need more storage?  Just add another raidz vdev to the pool.  You instantly
> have more storage space, and performance increases as well (the pool stripes
> across all the vdevs by default).  Don't have any more drive bays?  Then
> just replace the drives in a raidz vdev with larger ones, one at a time;
> once every drive in the vdev has been replaced, all the extra space becomes
> available to the pool.  And *all* the filesystems use that pool, so they all
> get access to the extra space (no reformatting, no repartitioning, no
> offline expansion required).
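
Both operations are single commands; in this illustrative sketch the pool and
device names are again placeholders, and each replacement has to finish
resilvering before the next one is started:

    # Grow the pool by adding a fourth raidz vdev
    zpool add storage raidz da24 da25 da26 da27 da28 da29 da30 da31

    # Or grow an existing vdev by swapping its members for larger drives,
    # one at a time; watch the resilver with "zpool status storage"
    zpool replace storage da0 da32
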
>
> Add in the snapshot feature, which actually works without slowing down the
> system (unlike UFS snapshots) or requiring pre-reserved space (unlike LVM
> snapshots), and it's hard to go back to hardware RAID.  :)
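
For example (the filesystem and snapshot names here are illustrative), a ZFS
snapshot is instantaneous and only consumes space as blocks later change:

    zfs snapshot storage/backups@2009-07-08   # instant, no data copied
    zfs list -t snapshot                      # list existing snapshots
    zfs rollback storage/backups@2009-07-08   # revert the filesystem if needed
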
>
> Of course, we do still use hardware RAID controllers, for the disk
> management and alerting features, the onboard cache, the fast buses
> (PCI-X/PCIe), multi-lane cabling, hot-plug support, etc.; we just don't use
> the actual RAID features.
>
> All of our Linux servers still use hardware RAID (5 and 10), with LVM on
> top, and XFS on top of that.  But it's just not as nice of a storage stack
> to work with.  :)
>
>   
Are there any plans to make ZFS clustered (I mean using iSCSI disks)?
Is there anything special that needs to be done to make it work with heartbeat?

Tonino

-- 
------------------------------------------------------------
        Inter@zioni            Interazioni di Antonio Nati 
   http://www.interazioni.it      tonix@interazioni.it           
------------------------------------------------------------



