From owner-freebsd-questions@FreeBSD.ORG Tue Jul 16 17:03:05 2013
From: chenjunbing1234 <chenjunbing1234@126.com>
To: questions@FreeBSD.org
Date: Wed, 17 Jul 2013 00:32:28 +0800 (CST)
Subject: FreeBSD software installation problems
I know very little English, and I want to learn FreeBSD. I was following the tutorial at ftp://ftp.freebsd.org/pub/FreeBSD/doc/zh_CN.GB2312/books/handbook/ for the installation and preparation, and met a lot of problems. I made three postings on the http://bbs.chinaunix.net/forum-5-1.html forum entitled "novice step by step install FreeBSD-9.0-RELEASE", but not many people helped. My main problem is installing software; I hope to get your help.

From owner-freebsd-questions@FreeBSD.ORG Tue Jul 16 17:33:19 2013
From: Johan Hendriks <joh.hendriks@gmail.com>
To: Frank Leonhardt
Cc: freebsd-questions@freebsd.org
Date: Tue, 16 Jul 2013 19:33:19 +0200
Subject: Re: to gmirror or to ZFS

On Tuesday 16 July 2013, Frank Leonhardt (frank2@fjl.co.uk) wrote the following:

> On 16/07/2013 10:41, Shane Ambler wrote:
>> On 16/07/2013 14:41, aurfalien wrote:
>>> On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
>>>> On Mon, 15 Jul 2013, aurfalien wrote:
>>>>>
>>>>> ... that's the question :)
>>>>>
>>>>> At any rate, I'm building a rather large 100+TB NAS using ZFS.
>>>>>
>>>>> However, for my OS, should I also use ZFS, or simply gmirror, as
>>>>> I have a dedicated pair of 256GB SSD drives for it? I didn't ask
>>>>> for SSD system drives; this system just came with them.
>>>>>
>>>>> This is more of a best-practices question.
>>>>
>>>> ZFS has data integrity checking; gmirror has low RAM overhead.
>>>> gmirror is, at present, restricted to MBR partitioning due to
>>>> metadata conflicts with GPT, so 2TB is the maximum size.
>>>>
>>>> Best practices... depend on your use. gmirror for the system
>>>> leaves more RAM for ZFS.
>>>
>>> Perfect, thanks Warren. Just what I was looking for.
>>
>> I doubt that you would save any RAM by having the OS on a non-ZFS
>> drive; since you will already be using ZFS, chances are that
>> non-ZFS drives would only increase RAM usage by adding a second
>> cache.
>> ZFS uses its own cache system and isn't going to share its cache
>> with other system-managed drives. I'm not actually certain whether
>> the system cache still sits above the ZFS cache or not; I think I
>> read that it bypasses the traditional drive cache.
>>
>> For the ZFS cache you can set the maximum usage by adjusting
>> vfs.zfs.arc_max. That is a system-wide setting and isn't going to
>> increase if you have two zpools.
>>
>> Tip: set the arc_max value. By default ZFS will use all physical
>> RAM for cache; set it to be sure you have enough RAM left for any
>> services you want running.
>>
>> Have you considered using one or both SSD drives with ZFS? They can
>> be added as cache or log devices to help performance. See man zpool
>> under Intent Log and Cache Devices.
>
> I agree with the sentiment of using the SSDs as ZFS cache - it's
> possibly the only logical use for them.
>
> I guess that with 100TB worth of Winchesters you're not on a very
> tight budget, and not too tight on RAM for the OS either. If I were
> going to do this I'd stick with the OS on UFS and a gmirror, because
> I simply don't trust ZFS. This is based on pure prejudice and
> inexperience.
>
> I know how to arrange disks on a UNIX file system for performance -
> what to use for swap, where tmp files should go, and so on. I also
> know where every file will be, physically, in the event of trouble.
> And here's the clincher: if the machine blows up I can simply take
> one of the mirrored drives, slap it into some new hardware, and I've
> got a very reasonable chance that it'll boot. Can I do this with
> ZFS? I get the feeling that the answer is an emphatic "maybe".
>
> So all things considered, I'd need a good reason not to stick with
> what I know works reliably and can be recovered in the event of a
> disaster (UFS), but I'm happy to watch and learn from everyone
> else's experience!

I would use ZFS for the OS. I have a couple of servers that did not survive a power failure with gmirror.
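Shane's tuning suggestions above (capping the ARC via vfs.zfs.arc_max, and adding the SSDs as cache or log devices) might be sketched roughly as follows. The pool name `tank`, the device names `ada2`/`ada3`, and the 4 GB cap are illustrative assumptions, not values from this thread:

```shell
# /boot/loader.conf - cap the ZFS ARC at 4 GB (value is in bytes;
# 4 * 1024^3 = 4294967296). Without a cap, the ARC will grow toward
# all physical RAM.
vfs.zfs.arc_max="4294967296"

# Add one SSD as an L2ARC (read) cache device to an existing pool
# named "tank" (pool and device names are assumptions):
zpool add tank cache ada2

# Add the other SSD as a separate intent log (SLOG) device:
zpool add tank log ada3

# Verify the resulting layout:
zpool status tank
```

Note that a lost cache device is harmless (it only holds copies of pool data), while a log device holds not-yet-committed synchronous writes, so mirroring the log device is a common precaution.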
The problem I had was that when the power failed, one disk would be in a rebuilding state, and then when the background fsck started or had been running for some time, it would crash the whole server. Removing the disk that was rebuilding resolved the issue. This happened to me more than once. Most of the time it worked as advertised, but not always.

Before people tell me to use a UPS: I used a UPS, but the damn thing failed itself. Then, after it came back from the warranty repair, it failed again. Sometimes the power came back right away, letting some servers survive and leaving others in whatever state they were in. It was hard to find the cause in the beginning, because some servers did survive the power failure, so we did not suspect the UPS at first.

Anyway, gmirror did not work for me in all cases. I am now running a few servers with a ZFS root, and I have not had any problems with them so far (knock on wood). Since reading that swap on a ZFS root can cause trouble, I have a separate freebsd-swap partition for the swap.

Regards,
Johan
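Johan's layout (ZFS root with swap kept on a dedicated freebsd-swap partition rather than on a zvol) could be set up along these lines with gpart. The disk name `ada0`, the labels, and the sizes are illustrative assumptions; a mirrored root would repeat the partitioning on a second disk:

```shell
# GPT layout on the first disk: boot code, 4 GB of swap, and the
# rest of the disk for the ZFS root pool. Sizes are illustrative.
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart add -t freebsd-swap -s 4g -l swap0 ada0
gpart add -t freebsd-zfs -l zroot0 ada0

# Install the protective MBR and the ZFS-aware boot code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

# Create the root pool on the labeled ZFS partition:
zpool create zroot /dev/gpt/zroot0

# Point swap at the dedicated partition in /etc/fstab, so swap
# never lives on ZFS itself:
# /dev/gpt/swap0   none   swap   sw   0   0
```

Keeping swap off ZFS avoids the situation Johan alludes to, where paging out under memory pressure requires ZFS to allocate memory, which is exactly what the system is short of at that moment.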