From: Chris Forgeron <cforgeron@acsi.ca>
To: freebsd-stable@freebsd.org
Date: Sun, 09 Jan 2011 18:06:58 -0400
Subject: RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
>> On 6 January 2011 22:26, Chris Forgeron wrote:
>> > You know, these days I'm not as happy with SSDs for ZIL. I may blog about some of the speed results I've been getting over the last 6 months to a year that I've been running them with ZFS. I think people should be using hardware RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the cost of a 60 gig SSD, and they will trounce the SSD for speed.
>> >

(I'm updating my earlier comment. Sorry for the topic drift, but I think this is important to consider.)

I decided to do some tests comparing my Gigabyte i-RAM against an OCZ Vertex 2 SSD. I found that they are very similar in random 4K-aligned write speed: around 17,000 IOPS on both, with slightly faster access times on the i-RAM. For 512-byte-aligned writes, however (which is what ZFS issues unless you've tweaked the ashift value), the i-RAM wins: the OCZ drops to ~6,000 IOPS on 512-byte random writes.

Please note that those numbers are from a used Vertex 2. A fresh Vertex 2 gave me 28,000 IOPS on 4K-aligned writes, faster than the i-RAM, but over time it will fall behind the i-RAM due to SSD fade.

I'm seriously considering trading in my ZIL SSDs for i-RAM devices. They cost about the same, if you can still find them, and they won't degrade the way an SSD does.
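[For readers wanting to try the alignment tweak mentioned above: on FreeBSD of this era, the usual way to make ZFS use 4K-aligned writes was the gnop(8) trick, i.e. build the pool on a nop provider that reports 4096-byte sectors so zpool picks ashift=12. A sketch only; the disk da0 and pool name "tank" are hypothetical:]

```shell
# Sketch, assuming a hypothetical disk /dev/da0 and pool name "tank".
# gnop(8) overlays a provider that reports a 4096-byte sector size,
# so zpool create selects ashift=12 instead of the 512-byte default.
gnop create -S 4096 /dev/da0
zpool create tank /dev/da0.nop

# The .nop device is temporary: export the pool, destroy the nop,
# and re-import. The ashift is recorded permanently in the pool.
zpool export tank
gnop destroy /dev/da0.nop
zpool import tank

# Verify which ashift the pool was created with:
zdb -C tank | grep ashift
```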
ZIL doesn't need much storage space. I think 12 gig (three i-RAMs) would do nicely, and would give me aggregate IOPS close to a DDRdrive for under $500.

I did some testing with SSD fade recently; here's the link to my blog post if anyone wants more detail: http://christopher-technicalmusings.blogspot.com/2011/01/ssd-fade-its-real-and-why-you-may-not.html

I'm still using SSDs for my ZIL, but I think I'll be switching over to some sort of RAM device shortly. I wish the 3.5" i-RAM had proper SATA power connectors on the back so it could plug into my SAS backplane the way the OCZ 3.5" SSDs do. As it stands, I'd have to rig something, since my SAN head doesn't have any PCI slots for the other i-RAM format.

-----Original Message-----
From: owner-freebsd-stable@freebsd.org [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Markiyan Kushnir
Sent: Friday, January 07, 2011 8:10 AM
To: Jeremy Chadwick
Cc: Chris Forgeron; freebsd-stable@freebsd.org; Artem Belevich; Jean-Yves Avenard
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011/1/7 Jeremy Chadwick :
> On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
>> On 6 January 2011 22:26, Chris Forgeron wrote:
>> > You know, these days I'm not as happy with SSDs for ZIL. I may blog about some of the speed results I've been getting over the last 6 months to a year that I've been running them with ZFS. I think people should be using hardware RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the cost of a 60 gig SSD, and they will trounce the SSD for speed.
>> >
>> > I'd put your SSD to L2ARC (cache).
>>
>> Where do you find those, though?
>>
>> I've looked and looked, and all the references I could find were to
>> that battery-powered RAM card that Sun used in their test setup, but
>> it's not publicly available.
>
> DDRdrive:
>   http://www.ddrdrive.com/
>   http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
>
> ACard ANS-9010:
>   http://techreport.com/articles.x/16255
>
> GC-RAMDISK (i-RAM) products:
>   http://us.test.giga-byte.com/Products/Storage/Default.aspx
>
> Be aware these products are absurdly expensive for what they offer (the
> cost isn't justified), not to mention that in some cases a bottleneck is
> imposed by the SATA-150 interface. I'm also not sure whether all of
> them offer BBU capability.
>
> In some respects you might be better off just buying more RAM for your
> system and making md(4) memory disks that are used as L2ARC (cache).
> I've mentioned this in the past (specifically back when the ARC piece
> of ZFS on FreeBSD was causing havoc, when I asked whether one could
> work around the complexity by using L2ARC on md(4) drives instead).

Once you have the extra RAM, why not just reserve it directly for ARC (via vm.kmem_size[_max] and vfs.zfs.arc_max)?

Markiyan.

> I tried this, but couldn't get rc.d/mdconfig2 to do what I wanted on
> startup WRT the aforementioned.
>
> --
> | Jeremy Chadwick                                   jdc@parodius.com |
> | Parodius Networking                       http://www.parodius.com/ |
> | UNIX Systems Administrator            Mountain View, CA, USA |
> | Making life hard for others since 1977.           PGP 4BD6C0CB |

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
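[The two alternatives discussed above, an md(4) memory disk used as L2ARC versus handing the RAM straight to ARC, look roughly like this on FreeBSD. A sketch only; the pool name "tank", the md unit number, and all sizes are illustrative, not from the thread:]

```shell
# Option 1: md(4)-backed L2ARC (illustrative 4 GB swap-backed memory disk).
# Note the caveat implicit in Markiyan's reply: L2ARC headers themselves
# consume ARC, so a RAM-backed cache device is usually a net loss
# compared with simply enlarging ARC.
mdconfig -a -t swap -s 4g -u 1      # creates /dev/md1
zpool add tank cache /dev/md1

# Option 2: reserve the RAM for ARC directly via loader(8) tunables.
# These go in /boot/loader.conf and take effect at the next boot:
#   vm.kmem_size="8G"
#   vm.kmem_size_max="8G"
#   vfs.zfs.arc_max="6G"
```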