From: Damien Fleuriot
Date: Thu, 20 Oct 2011 10:15:44 +0200
To: Dennis Glatting
Cc: Albert Shih, "zfs-discuss@opensolaris.org", "Fajar A. Nugraha", "freebsd-questions@freebsd.org"
Subject: Re: [zfs-discuss] ZFS on Dell with FreeBSD

On 20 Oct 2011, at 05:24, Dennis Glatting wrote:
>
> On Thu, 20 Oct 2011, Fajar A. Nugraha wrote:
>
>> On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser wrote:
>>> On 10/19/11 9:14 AM, "Albert Shih" wrote:
>>>
>>>> When we buy a MD1200 we need a RAID PERC H800 card on the server
>>>
>>> No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
>>> I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware --
>>> then it presents the individual disks and ZFS can handle redundancy
>>> and recovery.
>>
>> Exactly, thanks for suggesting an exact controller model that can
>> present disks as JBOD.
>>
>> With hardware RAID, you'd pretty much rely on the controller to behave
>> nicely, which is why I suggested simply creating one big volume for
>> zfs to use (so you pretty much only use features like snapshots,
>> clones, etc., but don't use zfs's self-healing feature). Again, others
>> might disagree (and have) and suggest using one volume per individual
>> disk (even when you're still relying on the hardware RAID controller).
>> But ultimately there's no question that the best possible setup is to
>> present the disks as JBOD and let zfs handle them directly.
>>
>
> I saw something interesting and different today, which I'll just throw
> out.
>
> A buddy has an HP370 loaded with disks (not the only machine that
> provides these services, rather the one he was showing off). The 370's
> disks are managed by the underlying hardware RAID controller, which he
> built as multiple RAID1 volumes.
>
> ESXi 5.0 is loaded and in control of the volumes, some of which are
> partitioned. Consequently, his result is vendor-supported interfaces
> between disks, RAID controller, ESXi, and managing/reporting software.
>
> The HP370 has multiple FreeNAS instances whose "disks" are the "disks"
> (volumes/partitions) from ESXi (all on the same physical hardware). The
> FreeNAS instances are partitioned according to their physical and
> logical function within the infrastructure, whether by physical or
> logical connections. The FreeNAS instances then serve their "disks" to
> consumers.
>
> We have not done any performance testing.
> Generally, his NAS consumers are not I/O pigs, though we want the best
> performance possible (some consumers are over the WAN, possibly
> rendering any HP/ESXi/FreeNAS performance issues moot). (I want to do
> some performance testing because, well, it may have significant
> amusement value.) A question we have is whether ZFS (ARC, maybe L2ARC)
> within FreeNAS is possible or would provide any value.
>

Possible, yes. Provides value, somewhat.

You still get to use snapshots, compression, dedup...

You don't get ZFS self-healing though, which IMO is a big loss.

Regarding the ARC, it totally depends on the kind of files you serve and
the amount of RAM you have available.

If you keep serving huge, different files all the time, it won't help as
much as when clients request the same small/average files over and over
again.
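For what it's worth, a quick way to judge whether the ARC is earning its
keep on a FreeBSD/FreeNAS box is to compare the hits and misses counters
under kstat.zfs.misc.arcstats. A minimal sketch -- the sample numbers
below are made up for illustration; on a live box you'd read the real
counters with sysctl -n as shown in the comments:

```shell
# On a live FreeBSD/FreeNAS system you would read the real counters:
#   hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
#   misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# Hypothetical sample values for illustration:
hits=900000
misses=100000

# Hit ratio as a percentage: hits / (hits + misses) * 100
ratio=$(awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "ARC hit ratio: ${ratio}%"
# prints: ARC hit ratio: 90.0%
```

A high ratio means the working set fits in RAM and the ARC is helping;
a low one (the huge-different-files-all-the-time case above) means more
RAM or an L2ARC device may not buy you much either.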