From owner-freebsd-fs@FreeBSD.ORG Sun Nov 13 15:49:06 2005 Return-Path: X-Original-To: freebsd-fs@FreeBSD.ORG Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 818EE16A41F for ; Sun, 13 Nov 2005 15:49:06 +0000 (GMT) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (lurza.secnetix.de [83.120.8.8]) by mx1.FreeBSD.org (Postfix) with ESMTP id B95C443D55 for ; Sun, 13 Nov 2005 15:49:05 +0000 (GMT) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (wluzyr@localhost [127.0.0.1]) by lurza.secnetix.de (8.13.1/8.13.1) with ESMTP id jADFn222056446 for ; Sun, 13 Nov 2005 16:49:03 +0100 (CET) (envelope-from oliver.fromme@secnetix.de) Received: (from olli@localhost) by lurza.secnetix.de (8.13.1/8.13.1/Submit) id jADFn2s5056445; Sun, 13 Nov 2005 16:49:02 +0100 (CET) (envelope-from olli) Date: Sun, 13 Nov 2005 16:49:02 +0100 (CET) Message-Id: <200511131549.jADFn2s5056445@lurza.secnetix.de> From: Oliver Fromme To: freebsd-fs@FreeBSD.ORG In-Reply-To: <200511071301.jA7D14PT038818@lurza.secnetix.de> X-Newsgroups: list.freebsd-fs User-Agent: tin/1.5.4-20000523 ("1959") (UNIX) (FreeBSD/4.11-RELEASE (i386)) Cc: Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-fs@FreeBSD.ORG List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Nov 2005 15:49:06 -0000 Oliver Fromme wrote: > user wrote: > > On Sun, 6 Nov 2005, Eric Anderson wrote: > > > [fsck on large file systems taking a long time] > > > > Can you elaborate ? Namely, how long on the 2GB filesystems ? > > It depends very much on the file system parameters. In > particular, it's well worth to lower the inode density > (i.e. increase the -i number argument to newfs) if you > can afford it, i.e. 
if you expect to have fewer large > files on the file system (such as multimedia files). I just accidentally pulled the wrong power cord ... So now I can give you first-hand numbers. :-} This is a 250 Gbyte data disk that has been newfs'ed with -i 65536, so I get about 4 million inodes: Filesystem iused ifree %iused /dev/ad0s1f 179,049 3,576,789 5% So I still have 95% of free inodes, even though the filesystem is fairly well filled: Filesystem 1K-blocks Used Avail Capacity /dev/ad0s1f 237,652,238 188,173,074 30,466,986 86% fsck(8) took about 2 minutes, which is acceptable, I think. Note that I always disable background fsck (for me personally, it has more disadvantages than advantages). This is what fsck(8) reported when the machine came back up: /dev/ad0s1f: 179049 files, 94086537 used, 24739582 free (26782 frags, 3089100 blocks, 0.0% fragmentation) Best regards Oliver -- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing Dienstleistungen mit Schwerpunkt FreeBSD: http://www.secnetix.de/bsd Any opinions expressed in this message may be personal to the author and may not necessarily reflect the opinions of secnetix in any way. "Python tricks" is a tough one, cuz the language is so clean. E.g., C makes an art of confusing pointers with arrays and strings, which leads to lotsa neat pointer tricks; APL mistakes everything for an array, leading to neat one-liners; and Perl confuses everything period, making each line a joyous adventure.
-- Tim Peters From owner-freebsd-fs@FreeBSD.ORG Sun Nov 13 17:17:37 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6380116A420 for ; Sun, 13 Nov 2005 17:17:37 +0000 (GMT) (envelope-from scottl@samsco.org) Received: from pooker.samsco.org (pooker.samsco.org [168.103.85.57]) by mx1.FreeBSD.org (Postfix) with ESMTP id A64FE43D72 for ; Sun, 13 Nov 2005 17:17:26 +0000 (GMT) (envelope-from scottl@samsco.org) Received: from [192.168.254.11] (junior.samsco.home [192.168.254.11]) (authenticated bits=0) by pooker.samsco.org (8.13.4/8.13.4) with ESMTP id jADHH8SP053474; Sun, 13 Nov 2005 10:17:08 -0700 (MST) (envelope-from scottl@samsco.org) Message-ID: <43777523.8020709@samsco.org> Date: Sun, 13 Nov 2005 10:17:23 -0700 From: Scott Long User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.8) Gecko/20050615 X-Accept-Language: en-us, en MIME-Version: 1.0 To: delphij@delphij.net References: <436BDB99.5060907@samsco.org> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=-1.4 required=3.8 tests=ALL_TRUSTED autolearn=failed version=3.1.0 X-Spam-Checker-Version: SpamAssassin 3.1.0 (2005-09-13) on pooker.samsco.org Cc: freebsd-fs@freebsd.org, user Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Nov 2005 17:17:37 -0000 Xin LI wrote: > On 11/5/05, Scott Long wrote: > >>The UFS snapshot code was written at a time when disks were typically >>around 4-9GB in size, not 400GB in size =-) Unfortunately, the amount > > > s/size/cylinder groups/g :-) > > >>of time it takes to do the initial snapshot bookkeeping scales linearly >>with the size of the drive, 
and many people have reported that it takes >>considerable amount of time (anywhere from several minutes to several >>dozen minutes) on large drives/arrays like you describe. So, you should >>test and plan accordingly if you are interested in using them. > > > I have some ideas about lazy snapshotting. But unfortunately I don't > have much time to implement a prototype ATM, and I think we really > need a file system that is capable for: > - Handling large number of files in one directory (say, some sort of > indexing mechanism, etc. And yes, I know that this is somewhat > insane, but the [ab]use is present in many large e-mail systems that > uses mailbox) > - Effective recovery. Personally I do not buy journalling much, and > I think the problem could be resolved by something like WAFL did. > > I think that JUFS would provide some help for (2), do you have some > plan about (1)? > I guess that UFS_DIRHASH doesn't give enough benefit for your situation? The idea of doing alternate directory layouts (such as b-trees) has been proposed a number of times. Apparently there was an idea at one point for UFS to generate a b-tree layout for directories and save it on disk as a cache. The primary method of directory storage would remain the traditional linear way so that compatibility is preserved, but OS's that were aware of the cache could use it too. There are still some reserved flags and fields in UFS2 for doing this, in case you're interested. Since it requires double bookkeeping for link creation and removal, I'm not sure how speedy it is for anything other than VOP_LOOKUP operations. An alternate idea I've had is to break with compatibility and do b-trees or something similar as the native format for UFS3 (along with native journalling and other things).
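The scheme described above, a linear directory that stays authoritative plus a sorted secondary index, can be sketched in a few lines. This is a toy Python model for illustration only, not UFS code, and all names are made up:

```python
# Toy model of a "secondary index" directory: the authoritative store
# keeps entries in simple append order (the compatible linear format),
# while a sorted index is maintained alongside it.  Lookups can binary-
# search the index; every link/unlink must update both structures,
# which is the double-bookkeeping cost mentioned above.

import bisect

class ToyDirectory:
    def __init__(self):
        self.entries = []   # authoritative, linear, append-order
        self.index = []     # sorted (name, inode) cache

    def link(self, name, inode):
        self.entries.append((name, inode))        # linear write
        bisect.insort(self.index, (name, inode))  # extra index write

    def lookup(self, name):
        # O(log n) through the index instead of an O(n) linear scan.
        i = bisect.bisect_left(self.index, (name,))
        if i < len(self.index) and self.index[i][0] == name:
            return self.index[i][1]
        return None

    def unlink(self, name):
        # Removal pays the double bookkeeping as well.
        self.entries = [e for e in self.entries if e[0] != name]
        self.index = [e for e in self.index if e[0] != name]
```

A real implementation would of course keep the index on disk and reconcile it with the linear entries on mount; the point here is only that every namespace change is written twice, while lookups get faster.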
Scott From owner-freebsd-fs@FreeBSD.ORG Sun Nov 13 18:04:38 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 3161816A41F for ; Sun, 13 Nov 2005 18:04:38 +0000 (GMT) (envelope-from delphij@gmail.com) Received: from xproxy.gmail.com (xproxy.gmail.com [66.249.82.196]) by mx1.FreeBSD.org (Postfix) with ESMTP id B891943D46 for ; Sun, 13 Nov 2005 18:04:37 +0000 (GMT) (envelope-from delphij@gmail.com) Received: by xproxy.gmail.com with SMTP id t10so1471889wxc for ; Sun, 13 Nov 2005 10:04:37 -0800 (PST) Received: by 10.64.131.4 with SMTP id e4mr1788996qbd; Sun, 13 Nov 2005 09:07:15 -0800 (PST) Received: by 10.64.21.5 with HTTP; Sun, 13 Nov 2005 09:07:15 -0800 (PST) Message-ID: Date: Mon, 14 Nov 2005 01:07:15 +0800 From: Xin LI To: Scott Long In-Reply-To: <436BDB99.5060907@samsco.org> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Disposition: inline References: <436BDB99.5060907@samsco.org> Cc: freebsd-fs@freebsd.org, user Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: delphij@delphij.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Nov 2005 18:04:38 -0000 On 11/5/05, Scott Long <scottl@samsco.org> wrote: > The UFS snapshot code was written at a time when disks were typically > around 4-9GB in size, not 400GB in size =-) Unfortunately, the amount s/size/cylinder groups/g :-) > of time it takes to do the initial snapshot bookkeeping scales linearly > with the size of the drive, and many people have reported that it takes > considerable amount of time (anywhere from several minutes to several > dozen minutes) on large drives/arrays like you describe. So, you should > test and plan accordingly if you are interested in using them. I have some ideas about lazy snapshotting. But unfortunately I don't have much time to implement a prototype ATM, and I think we really need a file system that is capable for: - Handling large number of files in one directory (say, some sort of indexing mechanism, etc. And yes, I know that this is somewhat insane, but the [ab]use is present in many large e-mail systems that uses mailbox) - Effective recovery. Personally I do not buy journalling much, and I think the problem could be resolved by something like WAFL did. I think that JUFS would provide some help for (2), do you have some plan about (1)? Cheers, -- Xin LI <delphij@delphij.net> http://www.delphij.net From owner-freebsd-fs@FreeBSD.ORG Sun Nov 13 18:12:16 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 4BE8216A444 for ; Sun, 13 Nov 2005 18:12:16 +0000 (GMT) (envelope-from bakul@bitblocks.com) Received: from gate.bitblocks.com (bitblocks.com [209.204.185.216]) by mx1.FreeBSD.org
(Postfix) with ESMTP id 065DD43D46 for ; Sun, 13 Nov 2005 18:12:13 +0000 (GMT) (envelope-from bakul@bitblocks.com) Received: from bitblocks.com (localhost [127.0.0.1]) by gate.bitblocks.com (8.13.4/8.13.1) with ESMTP id jADIC34x005950; Sun, 13 Nov 2005 10:12:03 -0800 (PST) (envelope-from bakul@bitblocks.com) Message-Id: <200511131812.jADIC34x005950@gate.bitblocks.com> To: Scott Long In-reply-to: Your message of "Sun, 13 Nov 2005 10:17:23 MST." <43777523.8020709@samsco.org> Date: Sun, 13 Nov 2005 10:12:03 -0800 From: Bakul Shah Cc: freebsd-fs@freebsd.org, user , delphij@delphij.net Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Nov 2005 18:12:16 -0000 > The idea of doing alternate directory layouts (such as b-trees) has been > proposed a number of times. Apparently there was an idea at one point > for UFS to generate a b-tree layout for directories and save it on > disk as a cache. The primary method of directory storage would remain > the traditional linear way so that compatibility is preserved, but OS's > that were aware of the cache could use it too. There are still some > reserved flags and fields in UFS2 for doing this, in case you're > interested. Since it requires double bookkeeping for link creation and > removal, I'm not sure how speedy it is for anything other than > VOP_LOOKUP operations. An alternate idea I've had is to break with > compatibility and do b-trees or something similar as the native > format for UFS3 (along with native journalling and other things). Or *BSD can do something really radical: use the on-disk format of XFS. Why go to a new disk format when an existing one like XFS is good enough?
From scratch BSD licensed code can probably be written faster than "evolving" UFS2 to UFS3 when you add in time to fully test and debug either implementation. [But IANAL and don't know if a) this will contravene the DMCA or b) it will be used by FSF to prevent such reverse engineering:-)] From owner-freebsd-fs@FreeBSD.ORG Sun Nov 13 20:14:50 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 2D2A416A41F for ; Sun, 13 Nov 2005 20:14:50 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from mh2.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id D69CC43D45 for ; Sun, 13 Nov 2005 20:14:48 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [192.168.42.25] ([192.168.42.25]) by mh2.centtech.com (8.13.1/8.13.1) with ESMTP id jADKEl7R037262 for ; Sun, 13 Nov 2005 14:14:47 -0600 (CST) (envelope-from anderson@centtech.com) Message-ID: <43779EB1.5070302@centtech.com> Date: Sun, 13 Nov 2005 14:14:41 -0600 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.12) Gecko/20051021 X-Accept-Language: en-us, en MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <200511131549.jADFn2s5056445@lurza.secnetix.de> In-Reply-To: <200511131549.jADFn2s5056445@lurza.secnetix.de> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/1169/Fri Nov 11 15:28:05 2005 on mh2.centtech.com X-Virus-Status: Clean Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Nov 2005 20:14:50 -0000 Oliver Fromme wrote: > Oliver Fromme wrote: > > user wrote: > > > On Sun, 6 Nov 2005, Eric Anderson wrote: > > > 
> [fsck on large file systems taking a long time] > > > > > > Can you elaborate ? Namely, how long on the 2GB filesystems ? > > > > It depends very much on the file system parameters. In > > particular, it's well worth to lower the inode density > > (i.e. increase the -i number argument to newfs) if you > > can afford it, i.e. if you expect to have fewer large > > files on the file system (such as multimedia files). > > I just accidentally pulled the wrong power cord ... > So now I can give you first-hand numbers. :-} > > This is a 250 Gbyte data disk that has been newfs'ed > with -i 65536, so I get about 4 million inodes: > > Filesystem iused ifree %iused > /dev/ad0s1f 179,049 3,576,789 5% > > So I still have 95% of free inodes, even though the > filesystem is fairly good filled: > > Filesystem 1K-blocks Used Avail Capacity > /dev/ad0s1f 237,652,238 188,173,074 30,466,986 86% > > fsck(8) took about 2 minutes, which is acceptable, I > think. Note that I always disable background fsck > (for me personally, it has more disadvantages than > advantages). > > This is what fsck(8) reported when the machin came > back up: > > /dev/ad0s1f: 179049 files, 94086537 used, 24739582 free > (26782 frags, 3089100 blocks, 0.0% fragmentation) 180k inodes seems like a pretty small amount to me. Here's some info from some of my filesystems: # df -i Filesystem 1K-blocks Used Avail Capacity iused ifree %iused Mounted on /dev/amrd0s1d 13065232 1109204 10910810 9% 663 1695079 0% /var /dev/label/vol1 1891668564 1494254268 246080812 86% 68883207 175586551 28% /vol1 /dev/label/vol2 1891959846 924337788 816265272 53% 59129223 185364087 24% /vol2 /dev/label/vol3 1892634994 1275336668 465887528 73% 31080812 213506706 13% /vol3 Even /var has over 1million. I think your tests are interesting, however not very telling of many real-world scenarios. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. 
Systems Administrator Centaur Technology Anything that works is better than anything that doesn't. ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Mon Nov 14 10:44:11 2005 Return-Path: X-Original-To: freebsd-fs@FreeBSD.ORG Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 730DB16A41F for ; Mon, 14 Nov 2005 10:44:11 +0000 (GMT) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (lurza.secnetix.de [83.120.8.8]) by mx1.FreeBSD.org (Postfix) with ESMTP id 2FFFB43D45 for ; Mon, 14 Nov 2005 10:44:09 +0000 (GMT) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (klmrwv@localhost [127.0.0.1]) by lurza.secnetix.de (8.13.1/8.13.1) with ESMTP id jAEAi8AT020304 for ; Mon, 14 Nov 2005 11:44:08 +0100 (CET) (envelope-from oliver.fromme@secnetix.de) Received: (from olli@localhost) by lurza.secnetix.de (8.13.1/8.13.1/Submit) id jAEAi8bg020303; Mon, 14 Nov 2005 11:44:08 +0100 (CET) (envelope-from olli) Date: Mon, 14 Nov 2005 11:44:08 +0100 (CET) Message-Id: <200511141044.jAEAi8bg020303@lurza.secnetix.de> From: Oliver Fromme To: freebsd-fs@FreeBSD.ORG In-Reply-To: <43779EB1.5070302@centtech.com> X-Newsgroups: list.freebsd-fs User-Agent: tin/1.5.4-20000523 ("1959") (UNIX) (FreeBSD/4.11-RELEASE (i386)) Cc: Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-fs@FreeBSD.ORG List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Nov 2005 10:44:11 -0000 Eric Anderson wrote: > Oliver Fromme wrote: > > I just accidentally pulled the wrong power cord ... > > So now I can give you first-hand numbers. 
:-} > > > > This is a 250 Gbyte data disk that has been newfs'ed > > with -i 65536, so I get about 4 million inodes: > > > > Filesystem iused ifree %iused > > /dev/ad0s1f 179,049 3,576,789 5% > > > > So I still have 95% of free inodes, even though the > > filesystem is fairly good filled: > > > > Filesystem 1K-blocks Used Avail Capacity > > /dev/ad0s1f 237,652,238 188,173,074 30,466,986 86% > > > > fsck(8) took about 2 minutes, which is acceptable, I > > think. Note that I always disable background fsck > > (for me personally, it has more disadvantages than > > advantages). > > > > This is what fsck(8) reported when the machin came > > back up: > > > > /dev/ad0s1f: 179049 files, 94086537 used, 24739582 free > > (26782 frags, 3089100 blocks, 0.0% fragmentation) > > 180k inodes seems like a pretty small amount to me. It's my multimedia disk. It contains mainly multimedia files, such as images, audio and video files. > Here's some info from some of my filesystems: > > # df -i > Filesystem 1K-blocks Used Avail Capacity iused ifree %iused Mounted on > /dev/amrd0s1d 13065232 1109204 10910810 9% 663 1695079 0% /var > /dev/label/vol1 1891668564 1494254268 246080812 86% 68883207 175586551 28% /vol1 > /dev/label/vol2 1891959846 924337788 816265272 53% 59129223 185364087 24% /vol2 > /dev/label/vol3 1892634994 1275336668 465887528 73% 31080812 213506706 13% /vol3 > > Even /var has over 1million. No. Your /var has just 663 inodes in use, and it has about 1.7 million unused inodes which is just a waste. Your other file systems use much more inodes, but they're also much bigger (2 Tbyte) than mine, and they seem to contain different kind of data. > I think your tests are interesting, > however not very telling of many real-world scenarios. As mentioned above, my "test" was done on my multimedia file system with an average file size of roughly 1 Mbyte. Such file systems are quite real-world. 
:-) On a file system containing exclusively video files, innd cycle buffers or similarly large files, the inode density can be reduced even further. If you have a 2 Tbyte file system that contains only a few thousand files, then you're wasting 60 Gbytes for unused inode data. Of course, if you design a file system for different purposes, your requirements might be completely different. A maildir server or squid proxy server definitely requires a much higher inode density, for example. Best regards Oliver -- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing Dienstleistungen mit Schwerpunkt FreeBSD: http://www.secnetix.de/bsd Any opinions expressed in this message may be personal to the author and may not necessarily reflect the opinions of secnetix in any way. Perl is worse than Python because people wanted it worse. -- Larry Wall From owner-freebsd-fs@FreeBSD.ORG Mon Nov 14 18:49:05 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 9635A16A41F for ; Mon, 14 Nov 2005 18:49:05 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from mh2.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id E563B43D46 for ; Mon, 14 Nov 2005 18:49:04 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh2.centtech.com (8.13.1/8.13.1) with ESMTP id jAEIn3GF054172 for ; Mon, 14 Nov 2005 12:49:03 -0600 (CST) (envelope-from anderson@centtech.com) Message-ID: <4378DC18.2070103@centtech.com> Date: Mon, 14 Nov 2005 12:48:56 -0600 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.12) Gecko/20051021 X-Accept-Language: en-us, en MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <200511141044.jAEAi8bg020303@lurza.secnetix.de> In-Reply-To: 
<200511141044.jAEAi8bg020303@lurza.secnetix.de> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/1169/Fri Nov 11 15:28:05 2005 on mh2.centtech.com X-Virus-Status: Clean Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Nov 2005 18:49:05 -0000 Oliver Fromme wrote: > Eric Anderson wrote: > > Oliver Fromme wrote: > > > I just accidentally pulled the wrong power cord ... > > > So now I can give you first-hand numbers. :-} > > > > > > This is a 250 Gbyte data disk that has been newfs'ed > > > with -i 65536, so I get about 4 million inodes: > > > > > > Filesystem iused ifree %iused > > > /dev/ad0s1f 179,049 3,576,789 5% > > > > > > So I still have 95% of free inodes, even though the > > > filesystem is fairly good filled: > > > > > > Filesystem 1K-blocks Used Avail Capacity > > > /dev/ad0s1f 237,652,238 188,173,074 30,466,986 86% > > > > > > fsck(8) took about 2 minutes, which is acceptable, I > > > think. Note that I always disable background fsck > > > (for me personally, it has more disadvantages than > > > advantages). > > > > > > This is what fsck(8) reported when the machin came > > > back up: > > > > > > /dev/ad0s1f: 179049 files, 94086537 used, 24739582 free > > > (26782 frags, 3089100 blocks, 0.0% fragmentation) > > > > 180k inodes seems like a pretty small amount to me. > > It's my multimedia disk. It contains mainly multimedia > files, such as images, audio and video files. 
> > Here's some info from some of my filesystems: > > > > # df -i > > Filesystem 1K-blocks Used Avail Capacity iused ifree %iused Mounted on > > /dev/amrd0s1d 13065232 1109204 10910810 9% 663 1695079 0% /var > > /dev/label/vol1 1891668564 1494254268 246080812 86% 68883207 175586551 28% /vol1 > > /dev/label/vol2 1891959846 924337788 816265272 53% 59129223 185364087 24% /vol2 > > /dev/label/vol3 1892634994 1275336668 465887528 73% 31080812 213506706 13% /vol3 > > > > Even /var has over 1million. > > No. Your /var has just 663 inodes in use, and it has about > 1.7 million unused inodes which is just a waste. Oops! Thanks for the correction - I misread it in my pasting frenzy. :) It may be a waste, but perhaps the right answer would be in the form of a patch to make sysinstall create /var partitions with different settings, if you feel strongly about it. Me personally, in this case, I don't care about the space I lose here, since to me it is negligible. > Your other file systems use much more inodes, but they're > also much bigger (2 Tbyte) than mine, and they seem to > contain different kind of data. Right, this is typical for the types of data I store, which often average 8-16k per file, which I think is the default expectation for UFS2 filesystems, so I'm making a generalization that a majority of users also have a ~16k average filesize. > > I think your tests are interesting, > > however not very telling of many real-world scenarios. > > As mentioned above, my "test" was done on my multimedia > file system with an average file size of roughly 1 Mbyte. > Such file systems are quite real-world. :-) > > On a file system containing exclusively video files, innd > cycle buffers or similarly large files, the inode density > can be reduced even further. If you have a 2 Tbyte file > system that contains only a few thousand files, then you're > wasting 60 Gbytes for unused inode data.
True - agreed, however I'm assuming most users of FreeBSD's UFS2 filesystem are in the 16k average filesize range. If the average users' average file size is larger, then the default newfs parameters should be changed; I just don't have any data or research to support that, so I'm not certain. > Of course, if you design a file system for different > purposes, your requirements might be completely different. > A maildir server or squid proxy server definitely requires > a much higher inode density, for example. If a filesystem were to be designed from scratch, having the inode density variable, or growing automatically to meet the needs, would probably be the most efficient. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology Anything that works is better than anything that doesn't. ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Tue Nov 15 09:54:20 2005 Return-Path: X-Original-To: freebsd-fs@FreeBSD.ORG Delivered-To: freebsd-fs@FreeBSD.ORG Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id E4AE616A41F for ; Tue, 15 Nov 2005 09:54:20 +0000 (GMT) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (lurza.secnetix.de [83.120.8.8]) by mx1.FreeBSD.org (Postfix) with ESMTP id 55A9F43D46 for ; Tue, 15 Nov 2005 09:54:20 +0000 (GMT) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (hobmtu@localhost [127.0.0.1]) by lurza.secnetix.de (8.13.4/8.13.4) with ESMTP id jAF9sHuV072345 for ; Tue, 15 Nov 2005 10:54:18 +0100 (CET) (envelope-from oliver.fromme@secnetix.de) Received: (from olli@localhost) by lurza.secnetix.de (8.13.4/8.13.1/Submit) id jAF9sHLd072344; Tue, 15 Nov 2005 10:54:17 +0100 (CET) (envelope-from olli) Date: Tue, 15 Nov 2005 10:54:17 +0100 (CET) Message-Id: <200511150954.jAF9sHLd072344@lurza.secnetix.de> From: Oliver
Fromme To: freebsd-fs@FreeBSD.ORG In-Reply-To: <4378DC18.2070103@centtech.com> X-Newsgroups: list.freebsd-fs User-Agent: tin/1.5.4-20000523 ("1959") (UNIX) (FreeBSD/4.11-STABLE (i386)) Cc: Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-fs@FreeBSD.ORG List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Nov 2005 09:54:21 -0000 Eric Anderson wrote: > Oliver Fromme wrote: > > [...] > > No. Your /var has just 663 inodes in use, and it has about > > 1.7 million unused inodes which is just a waste. > > Oops! Thanks for the correction - I misread it in my pasting frenzy. :) > It may be a waste, but perhaps the right answer would be in the form > of a patch to make sysinstall create /var partitions with different > settings, if you feel strongly about it. Well, I don't feel very strongly about sysinstall, but I do think that too few people read the tuning(7) manpage. :-) I think sysinstall's default values (which just use newfs' defaults) are a good trade-off. If you run out of inodes, then you are in serious trouble -- you probably have to re-create the whole file system (dump, newfs, restore or similar). But if you have way too many inodes, then you waste some space and fsck time, but that's not a critical problem to most users, because at least it keeps running. That's probably the reason why the default values provide a rather high inode density. And after all, you _can_ change it if you know what you're doing (after reading tuning(7) and other documentation). Even sysinstall provides a way to enter newfs flags, so you can easily change the inode density from the beginning. It's also interesting to note that, historically, the /var partition is used to hold spool areas, such as the spool of news servers.
INN's traditional spool layout (which is still popular for small servers because it allows better control) stores each article in a separate file, so you need a significant number of inodes in /var in that case. (Of course, for "big" news servers, you usually choose a different spool layout such as cycle buffers, and you don't put them on the /var partition but on their own optimized file system.) It all comes down to the fact that neither sysinstall nor newfs know in advance what purpose a file system will be used for, so they have no idea what default inode density would be suitable. So they choose rather conservative defaults for the "worst case", i.e. many inodes. It's up to the user to change the defaults if appropriate. Of course it's not an error to have way too many inodes. But I think it's a suboptimal setting, and it is always worth thinking about the usage of the file system in advance, before running newfs. Each inode takes 256 bytes in UFS2 (in UFS1 it's 128 bytes). On a 250 Gbyte disk (typical size nowadays), the default parameters will reserve space for 30 million inodes. That's 7.5 Gbyte reserved for inodes which will not be available for actual file data (and which adds to fsck time significantly). > Right, this is typical for the types of data I store, which often > average 8-16k per file, which I think is the default expectation for > UFS2 filesystems, so I'm making a generalization that a majority of > users also have a ~16k average filesize. I don't think that's true. The default values rather presume the _minimum_ (not average) file size that most users will need, so that only very few users will hit the inode limit. If the newfs default was the expected average file size, then 50% of users would hit the limit (and then flood the mailing lists). As I explained above, the default (which is one inode per 8 kbyte of data if you use the standard bsize/fsize) is chosen to be a conservative value, so that only very few people will need to lower it.
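For what it's worth, the arithmetic above is easy to reproduce. Here is a small Python sketch of the same back-of-the-envelope calculation, assuming (as stated above) 256 bytes per UFS2 inode and the newfs default of one inode per 8192 bytes of data space:

```python
# Rough inode-overhead estimate for a UFS2 file system, following the
# figures in the message above.  All results are approximations.

INODE_SIZE = 256        # bytes per UFS2 inode (UFS1: 128)
DEFAULT_DENSITY = 8192  # newfs default: one inode per 8 KB of data space

def inode_overhead(disk_bytes, density=DEFAULT_DENSITY):
    """Return (inode count, bytes reserved for inode metadata)."""
    inodes = disk_bytes // density
    return inodes, inodes * INODE_SIZE

# 250 GB disk, default density: roughly 30 million inodes and
# roughly 7.5 GB reserved for inode metadata.
print(inode_overhead(250 * 10**9))

# The same disk newfs'ed with -i 65536: only about 3.8 million inodes.
print(inode_overhead(250 * 10**9, density=65536))
```

The -i 65536 result matches the "about 4 million inodes" figure reported for the 250 Gbyte disk earlier in the thread.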
> True - agreed, however I'm assuming most users of FreeBSD's UFS2 > filesystem are in the 16k average filesize range. I don't think so. Nowadays, multimedia data makes up a significant share of all data stored, and such files tend to be rather large. That's why they got their own file system on my server, so I can tune the newfs parameters for it, so I don't waste several Gbytes of space and don't have to wait half an hour for fsck. > If the average > users' average file size is larger, then the default newfs parameters > should be changed, As explained above, the newfs default parameters should be rather low, so they work for the "worst case". E.g. the source tree of FreeBSD RELENG_6 does indeed have an average file size of 16082 bytes (I just looked a minute ago). But this is certainly not the typical use that takes up most of a user's disk space. On my root file system (standard FreeBSD installation), the average file size is 42 Kbyte, on /var it's 37 kbyte, and on /usr it's 60 kbyte, even though it contains /usr/src and the ports collection (which consist of thousands of very small files). > > Of course, if you design a file system for different > > purposes, your requirements might be completely different. > > A maildir server or squid proxy server definitely requires > > a much higher inode density, for example. > > If a filesystem were to be designed from scratch, having the inode > density variable or automatically grow to fulfill the needs, would be > the most efficient probably. Yes, I agree completely. Best regards Oliver -- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing Dienstleistungen mit Schwerpunkt FreeBSD: http://www.secnetix.de/bsd Any opinions expressed in this message may be personal to the author and may not necessarily reflect the opinions of secnetix in any way. "...
there are two ways of constructing a software design: One way is to make it so simple that there are _obviously_ no deficiencies and the other way is to make it so complicated that there are no _obvious_ deficiencies." -- C.A.R. Hoare, ACM Turing Award Lecture, 1980 From owner-freebsd-fs@FreeBSD.ORG Tue Nov 15 13:07:05 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 8068E16A41F for ; Tue, 15 Nov 2005 13:07:05 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 2325D43D45 for ; Tue, 15 Nov 2005 13:07:04 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id jAFD735g083367 for ; Tue, 15 Nov 2005 07:07:04 -0600 (CST) (envelope-from anderson@centtech.com) Message-ID: <4379DD70.80106@centtech.com> Date: Tue, 15 Nov 2005 07:06:56 -0600 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.12) Gecko/20051021 X-Accept-Language: en-us, en MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <200511150954.jAF9sHLd072344@lurza.secnetix.de> In-Reply-To: <200511150954.jAF9sHLd072344@lurza.secnetix.de> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/1172/Tue Nov 15 03:31:32 2005 on mh1.centtech.com X-Virus-Status: Clean Subject: Re: UFS2 snapshots on large filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Nov 2005 13:07:05 -0000 Oliver Fromme wrote: > Eric Anderson wrote: > > Oliver Fromme wrote: > > > [...] > > > No. 
Your /var has just 663 inodes in use, and it has about > > > 1.7 million unused inodes which is just a waste. > > > > Oops! Thanks for the correction - I misread it in my pasting frenzy. :) > > It may be a waste, but perhaps the right answer would be in the form > > of a patch to make sysinstall create /var partitions with different > > settings, if you feel strongly about it. > > Well, I don't feel very strongly about sysinstall, but I do > think that too few people read the tuning(7) manpage. :-) > > I think sysinstall's default values (which just use newfs' > defaults) are a good trade-off. If you run out of inodes, > then you are in serious trouble -- you probably have to > re-create the whole file system (dump, newfs, restore or > similar). But if you have way too many inodes, then you waste > some space and fsck time, but that's not a critical problem > to most users, because at least it keeps running. > > That's probably the reason why the default values provide > a rather high inode density. And after all, you _can_ > change it if you know what you're doing (after reading > tuning(7) and other documentation). Even sysinstall > provides a way to enter newfs flags, so you can easily change > the inode density from the beginning. > > It's also interesting to note that, historically, the /var > partition is used to hold spool areas, such as the spool > of news servers. INN's traditional spool layout (which is > still popular for small servers because it allows better > control) stores each article in a separate file, so you > need a significant number of inodes in /var in that case. > (Of course, for "big" news servers, you usually choose a > different spool layout such as cyclic buffers, and you > don't put them on the /var partition but on their own > optimized file system.)
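[Editor's sketch: the inode-density trade-off being discussed -- what you give up and gain by raising the -i argument -- can be tabulated with a little awk. The 250 Gbyte size is the example used in this thread, and the figures are approximate; newfs(8) rounds per cylinder group.]

```shell
# Rough inode count and inode space for several newfs -i settings
# (bytes of data per inode) on a 250 Gbyte file system.
for density in 8192 16384 32768 65536; do
    awk -v d="$density" 'BEGIN {
        inodes = 250e9 / d
        printf "-i %-5d -> %4.1f million inodes, %4.1f Gbyte of inodes\n", d, inodes / 1e6, inodes * 256 / 1e9
    }'
done
```

The first row is the newfs default; the last row is the -i 65536 setting mentioned earlier in the thread, which trades about 27 million inodes for roughly 7 Gbyte of reclaimed space (and a much shorter fsck).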
> > It all comes down to the fact that neither sysinstall nor > newfs knows in advance what purpose a file system will be > used for, so they have no idea what default inode density > would be suitable. So they choose rather conservative > defaults for the "worst case", i.e. many inodes. It's up > to the user to change the defaults if appropriate. > > Of course it's not an error to have way too many inodes. > But I think it's a suboptimal setting, and it is always > worth thinking about the usage of the file system in > advance, before running newfs. Each inode takes 256 bytes > in UFS2 (in UFS1 it's 128 bytes). On a 250 Gbyte disk > (a typical size nowadays), the default parameters will > reserve space for 30 million inodes. That's 7.5 Gbyte > reserved for inodes which will not be available to actual > file data (and which adds to fsck time significantly). Yes, I agree with you on all the above, but honestly, I guess I think that 3% of a filesystem being used for inodes out-of-the-box on a 250GB partition isn't such a big deal, considering 8% is set aside for root-only use to keep the filesystem from getting too cluttered and performing poorly. You have to weigh the savings from the drop in inodes against the loss from fragmentation or non-optimal block usage. If you are so close in space usage that you need that few percent difference used by inodes, then I guess my thoughts are that better space forecasting should have been in place. I'm all for efficient usage of resources, but there's a point where the risk of reducing the inode count, when you are not certain about the usage pattern of the disk over time, is too high, and letting a couple of GB disappear is insignificant by comparison. > > Right, this is typical for the types of data I store, which often > average 8-16k per file, which I think is the default expectation for > UFS2 filesystems, so I'm making a generalization that a majority of > users also have a ~16k average filesize. > > I don't think that's true.
The default values rather > presume the _minimum_ (not average) file size that most users > will need, so that only very few users will hit the inode > limit. If the newfs default were the expected average file > size, then 50% of users would hit the limit (and then flood > the mailing lists). Well, I was stating our company's storage pattern here, and stating that the default for UFS2 appears to agree with our patterns, and simply making a generalization that many people might be in a similar situation as our company. I don't think it's true that choosing the average would yield 50% of the users having problems, but I see your point. In fact, I think McKusick mentions a study that found the average file size to be just under 16K (in 'The Design and Implementation of the FreeBSD Operating System'), which is why they made 16k the default block size for UFS2. > As I explained above, the default (which is one inode per > 8 kbyte of data if you use the standard bsize/fsize) is > chosen to be a conservative value, so that only very few > people will need to lower it. > > > True - agreed, however I'm assuming most users of FreeBSD's UFS2 > > filesystem are in the 16k average filesize range. > > I don't think so. Nowadays, multimedia data makes up a > significant share of all data stored, and such files tend to be > rather large. That's why they got their own file system on > my server, so I can tune the newfs parameters for it, so I > don't waste several Gbytes of space and don't have to wait > half an hour for fsck. I'm not sure where you came up with the 'multimedia data makes a significant share of all data stored' statistic, but I just don't know of a lot of companies that store multimedia files in such large quantities to justify these claims. It's very possible, though, that I am too close to the industry I am in, and so I don't see the 'other side'.
> > If the average > > users' average file size is larger, then the default newfs parameters > > should be changed, > > As explained above, the newfs default parameters should be > rather low, so they work for the "worst case". E.g. the > source tree of FreeBSD RELENG_6 does indeed have an average file > size of 16082 bytes (I just looked a minute ago). But this > is certainly not the typical use that takes up most of > a user's disk space. On my root file system (standard > FreeBSD installation), the average file size is 42 Kbyte, on > /var it's 37 kbyte, and on /usr it's 60 kbyte, even though > it contains /usr/src and the ports collection (which > consist of thousands of very small files). > > > > Of course, if you design a file system for different > > > purposes, your requirements might be completely different. > > > A maildir server or squid proxy server definitely requires > > > a much higher inode density, for example. > > > > If a filesystem were to be designed from scratch, having the inode > > density variable or automatically grow to fulfill the needs, would be > > the most efficient probably. > > Yes, I agree completely. It would be interesting to do some sampling of a number of companies and see what their mean/median/mode filesize is on production data. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology Anything that works is better than anything that doesn't.
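[Editor's sketch: the sampling idea Eric closes with is easy to prototype per file system. The five sizes below are made up so the example is deterministic; in practice you would feed the pipeline a real list of file sizes.]

```shell
# Compute mean and median file size from a list of sizes in bytes,
# one per line -- the kind of per-filesystem sample Eric suggests.
# The sizes here are invented for the example; a real list could be
# produced with something like: find /some/fs -type f -exec wc -c {} +
sizes='4096
8192
16384
24576
1048576'

# Mean, truncated to whole bytes.
mean=$(printf '%s\n' "$sizes" | awk '{ s += $1 } END { printf "%d", s / NR }')
# Median: sort numerically and pick the middle element.
median=$(printf '%s\n' "$sizes" | sort -n |
         awk '{ a[NR] = $1 } END { printf "%d", a[int((NR + 1) / 2)] }')

echo "mean:   $mean bytes"
echo "median: $median bytes"
```

Note how a single large (multimedia-sized) file drags the mean far above the median -- which is exactly why an average file size alone is a poor guide when picking an inode density.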
------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Wed Nov 16 21:09:37 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0BECE16A420 for ; Wed, 16 Nov 2005 21:09:37 +0000 (GMT) (envelope-from user@dhp.com) Received: from shell.dhp.com (shell.dhp.com [199.245.105.1]) by mx1.FreeBSD.org (Postfix) with ESMTP id C462043D53 for ; Wed, 16 Nov 2005 21:09:36 +0000 (GMT) (envelope-from user@dhp.com) Received: by shell.dhp.com (Postfix, from userid 896) id C68863131C; Wed, 16 Nov 2005 16:09:35 -0500 (EST) Date: Wed, 16 Nov 2005 16:09:35 -0500 (EST) From: user To: freebsd-fs@freebsd.org Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Subject: comments on todays ZFS announcement ? comparison to UFS2 on 6.0 ? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Nov 2005 21:09:37 -0000 Hello, I found this announcement today - apparently ZFS is now available as a production item: http://www.opensolaris.org/os/community/zfs/ Can anyone take some time and do some quick feedback on it and compare and contrast it to UFS 2 in its 6.0-RELEASE incarnation ? Mainly I am curious - if an application has no ties or dependency on FreeBSD (or any other OS), is ZFS going to be the no-brainer, obvious choice ? I am not sure if ZFS snapshots survive reboots (solaris 9 snapshots did not), and I am fairly certain you cannot boot a ZFS device. Also, it looks like a very complex filesystem. But I am just a layman - can any of the very technical people here shed some light/thoughts/bigotries on ZFS, and ZFS vs. UFS2 ? Thanks. 
From owner-freebsd-fs@FreeBSD.ORG Wed Nov 16 21:55:09 2005 Return-Path: X-Original-To: freebsd-fs@freebsd.org Delivered-To: freebsd-fs@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 07A4916A41F for ; Wed, 16 Nov 2005 21:55:09 +0000 (GMT) (envelope-from julian@elischer.org) Received: from a50.ironport.com (a50.ironport.com [63.251.108.112]) by mx1.FreeBSD.org (Postfix) with ESMTP id A2C9943D49 for ; Wed, 16 Nov 2005 21:55:08 +0000 (GMT) (envelope-from julian@elischer.org) Received: from unknown (HELO [10.251.19.149]) ([10.251.19.149]) by a50.ironport.com with ESMTP; 16 Nov 2005 13:55:08 -0800 X-IronPort-Anti-Spam-Filtered: true Message-ID: <437BAABB.8020705@elischer.org> Date: Wed, 16 Nov 2005 13:55:07 -0800 From: Julian Elischer User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.7.11) Gecko/20050727 X-Accept-Language: en-us, en MIME-Version: 1.0 To: user References: In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: comments on todays ZFS announcement ? comparison to UFS2 on 6.0 ? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Nov 2005 21:55:09 -0000 user wrote: >Hello, > >I found this announcement today - apparently ZFS is now available as a >production item: > >http://www.opensolaris.org/os/community/zfs/ > >Can anyone take some time and do some quick feedback on it and compare and >contrast it to UFS 2 in its 6.0-RELEASE incarnation ? > >Mainly I am curious - if an application has no ties or dependency on >FreeBSD (or any other OS), is ZFS going to be the no-brainer, obvious >choice ? > >I am not sure if ZFS snapshots survive reboots (solaris 9 snapshots did >not), and I am fairly certain you cannot boot a ZFS device. 
Also, it >looks like a very complex filesystem. > > looks cool.. the uberblock must run red hot though unless they have some way of moving it around. more detail would be needed.. Wonder if there is a paper on it.. >But I am just a layman - can any of the very technical people here shed >some light/thoughts/bigotries on ZFS, and ZFS vs. UFS2 ? > >Thanks. > >_______________________________________________ >freebsd-fs@freebsd.org mailing list >http://lists.freebsd.org/mailman/listinfo/freebsd-fs >To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > From owner-freebsd-fs@FreeBSD.ORG Thu Nov 17 04:37:56 2005 Return-Path: X-Original-To: freebsd-fs@FreeBSD.org Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id AFCF216A41F for ; Thu, 17 Nov 2005 04:37:56 +0000 (GMT) (envelope-from pfgshield-freebsd@yahoo.com) Received: from web32915.mail.mud.yahoo.com (web32915.mail.mud.yahoo.com [68.142.206.62]) by mx1.FreeBSD.org (Postfix) with SMTP id 4290343D46 for ; Thu, 17 Nov 2005 04:37:56 +0000 (GMT) (envelope-from pfgshield-freebsd@yahoo.com) Received: (qmail 10709 invoked by uid 60001); 17 Nov 2005 04:37:55 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:Received:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type; b=A1qJ86qyuGo7wzDmmwZw2pPd4AUCR0ea8VTUQ936oaKsX1QOQSOe/xpQY5CyGiSC2tUKg/MKgvpS1jhhyQhLkF5C3mvsFoY3Y/KDJ9fCCI+GLoMwk/TVWjNCp4/8J+eI7nD0rOrv+ifLZCmmXpS8CKR2UQKUKkptSETplFSWcjE= ; Message-ID: <20051117043755.10707.qmail@web32915.mail.mud.yahoo.com> Received: from [200.118.60.177] by web32915.mail.mud.yahoo.com via HTTP; Thu, 17 Nov 2005 05:37:55 CET Date: Thu, 17 Nov 2005 05:37:55 +0100 (CET) From: To: freebsd-fs@FreeBSD.org MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Cc: user Subject: Re: comments on todays ZFS announcement ? comparison to UFS2 on 6.0 ?
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: pfgshield-freebsd@yahoo.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Nov 2005 04:37:56 -0000 Very, VERY cool! Looks to me like it does everything we were asking from XFS and more than we were asking from UFS2.* The pdf document here seems the right document to read to get a sense of comparison, but according to the blogs, something more technical will be released in the future: http://mediacast.sun.com/details.jsp?id=394 cheers, Pedro __________________ *Disclaimer: I'm not a FS guy. ___________________________________ Yahoo! Messenger: free calls all over the world http://it.messenger.yahoo.com