Date: Tue, 8 Mar 2016 11:56:20 +0800
From: Fred Liu <fred.fliu@gmail.com>
To: illumos-zfs <zfs@lists.illumos.org>
Cc: "smartos-discuss@lists.smartos.org" <smartos-discuss@lists.smartos.org>,
    developer@lists.open-zfs.org, developer <developer@open-zfs.org>,
    illumos-developer <developer@lists.illumos.org>,
    omnios-discuss <omnios-discuss@lists.omniti.com>,
    Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>,
    "zfs-discuss@list.zfsonlinux.org" <zfs-discuss@list.zfsonlinux.org>,
    "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>,
    "zfs-devel@freebsd.org" <zfs-devel@freebsd.org>
Subject: Re: [zfs] [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built
Message-ID: <CALi05XzuODjdbmufSfaCEYRmRZiS4T3dwwcD2oW6NLBNZx=Y0Q@mail.gmail.com>
In-Reply-To: <CAESZ+_-+1jKQC880bew-maDyZ_xnMmB7QxPHyKAc_3P44+m+uQ@mail.gmail.com>
References: <95563acb-d27b-4d4b-b8f3-afeb87a3d599@me.com>
    <CACTb9pxJqk__DPN_pDy4xPvd6ETZtbF9y=B8U7RaeGnn0tKAVQ@mail.gmail.com>
    <CAJjvXiH9Wh+YKngTvv0XG1HtikWggBDwjr_MCb8=Rf276DZO-Q@mail.gmail.com>
    <56D87784.4090103@broken.net>
    <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD392@SH-MAIL.ISSI.COM>
    <5158F354-9636-4031-9536-E99450F312B3@RichardElling.com>
    <CALi05Xxm9Sdx9dXCU4C8YhUTZOwPY+NQqzmMEn5d0iFeOES6gw@mail.gmail.com>
    <6E2B77D1-E0CA-4901-A6BD-6A22C07536B3@gmail.com>
    <CALi05Xw1NGqZhXcS4HweX7AK0DU_mm01tj=rjB+qOU9N0-N=ng@mail.gmail.com>
    <CAESZ+_-+1jKQC880bew-maDyZ_xnMmB7QxPHyKAc_3P44+m+uQ@mail.gmail.com>
2016-03-08 4:55 GMT+08:00 Liam Slusser <lslusser@gmail.com>:
> I don't have a 2000-drive array (that's amazing!), but I do have two
> 280-drive arrays in production. Here are the general stats:
>
> server setup:
> OpenIndiana oi_151
> 1 server rack
> Dell R720xd, 64 GB RAM, with mirrored 250 GB boot disks
> 5 x LSI 9207-8e dual-port SAS PCIe host bus adapters
> Intel 10 GbE (dual port)
> 2 x SSD for log (SLOG)
> 2 x SSD for cache (L2ARC)
> 23 x Dell MD1200 with 3 TB, 4 TB, or 6 TB NL-SAS disks (a mix of Toshiba,
> Western Digital, and Seagate drives - basically whatever Dell sends)
>
> zpool setup:
> 23 x 12-disk raidz2 glued together, 276 disks in total. Basically, each
> new 12-disk MD1200 is a new raidz2 vdev added to the pool.
>
> Total size: ~797T
>
> We have an identical server to which we replicate changes via zfs
> snapshots every few minutes. The whole setup has been up and running for
> a few years now with no issues. As we run low on space, we purchase two
> additional MD1200 shelves (one for each system) and add the new raidz2
> to the pool on-the-fly.
>
> The only real issue we've had is that sometimes a disk fails in such a
> way (think Monty Python and the Holy Grail: "I'm not dead yet") that the
> disk hasn't failed outright but is timing out and slows the whole array
> to a standstill until we can manually find and remove it. The other
> problem is that, once a disk has been replaced, the resilver process can
> sometimes take an eternity. We have also found that the snapshot
> replication process can interfere with the resilver - the resilver gets
> stuck at 99% and never finishes - so we end up stopping replication, or
> doing only one replication a day, until the resilver is done.
>
> The last helpful hint I have is lowering all the drive timeouts; see
> http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
> for details.

[Fred]: A zpool with 280 drives in production is pretty big! I think the
2000 drives were just a test. It is true that huge pools bring lots of
operational challenges. I have run into a similar sluggishness issue caused
by a dying disk. Just curious: what cluster software is used in the setup
described at
http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/ ?

Thanks.

Fred
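For reference, growing a pool by one shelf as described above is a single
"zpool add" of a new raidz2 top-level vdev. A minimal sketch follows; the
pool name ("tank") and the cXtYd0 device names are made up for illustration,
not taken from Liam's setup:

    # Hypothetical pool and device names; a new MD1200 shelf presents
    # 12 NL-SAS disks. "zpool add" appends a new raidz2 top-level vdev
    # to the pool online, with no downtime.
    zpool add tank raidz2 \
        c10t0d0 c10t1d0 c10t2d0 c10t3d0 c10t4d0 c10t5d0 \
        c10t6d0 c10t7d0 c10t8d0 c10t9d0 c10t10d0 c10t11d0

    # Confirm the new vdev and the added capacity.
    zpool status tank
    zpool list tank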
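The every-few-minutes replication to the second box is typically a pair of
incremental zfs send/receive jobs. A minimal sketch, assuming a pool named
"tank", timestamped snapshot names, and a replica host reachable over ssh as
"replica" (all hypothetical):

    # Take a new recursive snapshot on the primary.
    zfs snapshot -r tank@2016-03-08_1200

    # Ship only the delta since the previous snapshot to the replica;
    # -R sends the whole dataset tree, -F rolls the replica back to the
    # last common snapshot before applying the stream.
    zfs send -R -i tank@2016-03-08_1150 tank@2016-03-08_1200 | \
        ssh replica zfs receive -F tank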
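The drive-timeout tuning in the linked post amounts to lowering the sd
driver's I/O timeout on Solaris/OpenIndiana so a half-dead disk gets retired
quickly instead of stalling the pool. A rough sketch of the usual approach;
the 10-second value is illustrative (the stock default is 60 seconds), and
the exact tunables should be checked against the post itself:

    # Runtime change with mdb (takes effect immediately, lost on reboot).
    echo "sd_io_time/W 0t10" | mdb -kw

    # Persistent change via /etc/system (applies at next boot).
    echo "set sd:sd_io_time = 10" >> /etc/system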