From owner-freebsd-stable@FreeBSD.ORG Wed Oct 1 22:25:38 2008
Message-ID: <48E3F900.8020702@quip.cz>
Date: Thu, 02 Oct 2008 00:26:08 +0200
From: Miroslav Lachman <000.fbsd@quip.cz>
To: lhmwzy
Cc: Jeremy Chadwick, freebsd-stable@freebsd.org
In-Reply-To: <78fb9d960810010015l14a98f56re49c9eb386305118@mail.gmail.com>
Subject: Re: Would anybody port DragonFlyBSD's HAMMER fs to FreeBSD?

lhmwzy wrote:
> Yes, this is a way.
> I would do as you said if I need to do so.
>
> 2008/10/1 Jeremy Chadwick:
>
>> On Wed, Oct 01, 2008 at 02:29:12PM +0800, lhmwzy wrote:
>>
>>> That's it.
>>> Since we don't have the skill, what we can do is wait.
>>>
>>> Waiting is such a bad thing.......
>>
>> If this functionality is really something you want/need, you should
>> consider finding a kernel programmer who would be willing to port it,
>> for financial exchange (in English: you will be paying them $XX/hour
>> to port it to FreeBSD).
>>
>> This has happened in the past for some key features. Like I said, it
>> all depends on how much it matters to you.

HAMMER seems good, but at this time it is more important to finish the ZFS integration into FreeBSD: fix all known issues, do more testing, get a wider audience, and make it production ready. Not because ZFS is better - maybe it is worse - it does not matter. I think it is better to have one successful port finished than two filesystems in a non-production state. FreeBSD currently lags behind other operating systems in supported filesystems, and UFS2 is insufficient for today's storage requirements. Once we have ZFS production ready, we can talk about other filesystems.

I can't do any programming to port a filesystem, nor write patches. All I can do is testing and reporting - and I am doing that. I have some stress tests of ZFS running.

Currently I have one ZFS mount with 56 snapshots taken during heavy tasks such as copying and removing large numbers of small files (mainly cp -R /usr/ports /tank/test/$i in a loop, plus tarring/untarring jobs), with some large-file creation with dd running in the background, etc.
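If anyone wants to run something similar, a minimal sketch of such a stress loop is below. It is only an illustration under assumed names - the dataset (tank/test), snapshot labels, file sizes and loop count are placeholders, not my exact script:

#!/bin/sh
# Hypothetical ZFS stress loop (placeholder dataset name and sizes).
# Keeps the pool busy with many small files, snapshots taken while
# the copies are in flight, and one big sequential writer.
DATASET=tank/test
MNT=/$DATASET

# large sequential write in the background
dd if=/dev/zero of=$MNT/bigfile bs=1m count=4096 &

i=1
while [ $i -le 20 ]; do
        cp -R /usr/ports $MNT/$i              # many small files
        zfs snapshot $DATASET@stress$i        # snapshot under load
        tar -cf $MNT/ports$i.tar -C $MNT/$i . # tarring task
        rm -rf $MNT/$i
        i=$((i + 1))
done
wait

Nothing clever - the point is just to mix small-file metadata load, snapshots and large sequential writes at the same time.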
All of this is running fine on FreeBSD 7.0 amd64 with 4 GB RAM and some kernel tuning:

vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
kern.maxvnodes="400000"
vfs.zfs.prefetch_disable="1"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="64M"

There are 53202511 inodes on the ZFS partition. The zpool was created over two slices of two disks (mirror):

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         434G  10.5G     75  1.24K   618K  5.76M
  mirror     434G  10.5G     75  1.24K   618K  5.76M
    ad4s2       -      -     13    328   918K  5.76M
    ad6s2       -      -     16    326  1.09M  5.76M
----------  -----  -----  -----  -----  -----  -----

I have had no crash of ZFS, but as I read on the mailing lists there are still some problems, so let those be fixed and let things settle down before another good filesystem is ported.

Just my €0.02

Miroslav Lachman