From: Jason Usher <jusher71@yahoo.com>
Date: Mon, 19 Sep 2011 12:00:11 -0700 (PDT)
To: freebsd-fs@freebsd.org
Subject: Re: ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...
In-Reply-To: <72A6ABD6-F6FD-4563-AB3F-6061E3DD9FBF@digsys.bg>

--- On Sat, 9/17/11, Daniel Kalchev wrote:

> There is not a single magnetic drive on the market that can
> saturate SATA2 (300 MB/s) yet. Most can't even match SATA1
> (150 MB/s). You don't need that much dedicated bandwidth for
> drives.
>
> If you intend to have 48/96 SSDs, then that is another
> story, but then I am doubtful a "PC" architecture can handle
> that much data either.

Hmmm... I understand this, but isn't there some data that might transfer
from multiple magnetic disks simultaneously at 6 Gb/s and periodically
max out the card's bandwidth?

As in, all drives in a 12-drive array performing an operation against
their built-in cache simultaneously? I know the spinning disks themselves
can't do it, but there is 64 MB of cache on each drive, and that cache
can run at 6 Gb/s ... does that never happen?

Further, the cards I use will be the same regardless - the number of PCIe
lanes is just a different motherboard choice at the front end, and only
adds a marginal extra cost (assuming there _IS_ a 112+ lane mobo
around) ... so why not?
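For a rough sense of those numbers, here is a back-of-the-envelope sketch
in Python. The figures are assumptions, not measurements: roughly
150 MB/s sustained per spinning disk, roughly 600 MB/s of usable burst per
SATA3 link, and roughly 500 MB/s of usable bandwidth per PCIe 2.0 lane on
an x8 HBA.

# Back-of-the-envelope bandwidth check (assumed, approximate figures).
SATA3_LINK_MB_S = 600      # ~6 Gb/s minus 8b/10b encoding overhead
HDD_SUSTAINED_MB_S = 150   # optimistic sustained rate for a 2011-era disk
PCIE2_LANE_MB_S = 500      # rough usable bandwidth per PCIe 2.0 lane
HBA_LANES = 8              # a typical x8 HBA slot

drives = 12

hba_bw = HBA_LANES * PCIE2_LANE_MB_S      # ~4000 MB/s for an x8 slot
sustained = drives * HDD_SUSTAINED_MB_S   # platters streaming flat out
burst = drives * SATA3_LINK_MB_S          # every drive answering from its
                                          # 64 MB on-board cache at once

print(f"HBA slot bandwidth : ~{hba_bw} MB/s")
print(f"12 drives sustained: ~{sustained} MB/s")
print(f"12 drives bursting : ~{burst} MB/s")
# Sustained traffic (~1800 MB/s) fits easily inside the slot; a
# simultaneous cache burst (~7200 MB/s) could exceed an x8 PCIe 2.0 slot
# (~4000 MB/s), but only until the 12 x 64 MB of drive cache drains,
# which is a fraction of a second.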
> Memory is much more expensive than SSDs for L2ARC and if
> your workload permits it (lots of repeated small reads),
> larger L2ARC will help a lot. It will also help if you have
> a huge zpool or if you enable dedup etc. Just populate as much
> RAM as the server can handle and then add L2ARC
> (read-optimized).

That's interesting (the part about dedup being assisted by L2ARC) ...
what about snapshots? If we keep 14 or 21 snapshots, what component does
that stress, and what structures would speed that up?

Thanks a lot.
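To put the "populate as much RAM as you can, then add L2ARC" advice in
perspective, here is a rough dedup-table sizing sketch, again in Python.
The figures are rule-of-thumb assumptions, not values taken from the ZFS
code: about 320 bytes of core memory per dedup-table entry, one entry per
unique block, an example 20 TiB pool with the default 128 KB recordsize
and a 2x dedup ratio.

# Rough dedup-table (DDT) sizing, using rule-of-thumb figures only.
BYTES_PER_DDT_ENTRY = 320          # commonly quoted rule of thumb
RECORD_SIZE = 128 * 1024           # default ZFS recordsize (128 KB)

pool_data_bytes = 20 * 2**40       # e.g. 20 TiB of data stored in the pool
dedup_ratio = 2.0                  # assume half the blocks are duplicates

unique_blocks = (pool_data_bytes / dedup_ratio) / RECORD_SIZE
ddt_bytes = unique_blocks * BYTES_PER_DDT_ENTRY

print(f"unique blocks : {unique_blocks:,.0f}")
print(f"DDT size      : {ddt_bytes / 2**30:.1f} GiB")
# ~80 million unique 128 KB blocks -> ~25 GiB of dedup table.  Whatever
# part of that does not fit in RAM can at least be cached on an L2ARC SSD,
# which is why dedup-heavy pools benefit from a large L2ARC.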