From: JoaoBR <joao@matik.com.br>
Organization: Infomatik
To: freebsd-stable@freebsd.org
Cc: Torfinn Ingolfsen, Jeremy Chadwick, freebsd-bugs@freebsd.org, Greg Byshenk
Date: Sat, 28 Jun 2008 01:17:15 -0300
Subject: Re: possible zfs bug? lost all pools

On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:
...
>>> and if necessary /etc/rc.d/zfs should start hostid or at least set
>>> REQUIRE different and warn
...
>>
>> I've been in the same boat you are, and I was told the same thing. I've
>> documented the situation on my Wiki, and the necessary workarounds.
>>
>> http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issue
>
> so I changed the rcorder as you can see in the attached files
> http://suporte.matik.com.br/jm/zfs.rcfiles.tar.gz

I'm coming back to this because I am more convinced by ZFS every day, and
I'd like to express my gratitude not only to whoever made ZFS, but also,
and especially, to the people who brought it to FreeBSD. And thank you,
guys, for making it public; this is really a step forward!

My ZFS-related rc file changes (above) made my problems go away, and I'd
like to share some other experience here.

As explained on Jeremy's page, I had similar problems with ZFS, but it
seems I could get around them by setting the vm.kmem_size* tunables to
500M, 1000M, or 1500M, depending on the machine's load. The main problem
on FreeBSD, though, seems to be the ZFS recordsize: on UFS-like partitions
I set it to 64k and I never got panics any more, even with several zpools
(which is said to be dangerous). cache_dirs for Squid or MySQL partitions
might need lower values to reach their new and impressive peaks.

This even seems to solve the panics when copying large files from NFS or
UFS to or from ZFS, so it seems FreeBSD does not like recordsize > 64k.
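In case it is useful to someone, this is roughly what the tuning looks
like on my side; the values and the pool/dataset names (tank/...) are
examples from my own setup, not recommendations:

  # /boot/loader.conf -- kmem tuning (pick a value matching the
  # machine's load and RAM; these tunables are read at boot)
  vm.kmem_size="1024M"
  vm.kmem_size_max="1024M"

  # cap the recordsize at 64k on a general-purpose filesystem
  zfs set recordsize=64k tank/data

  # a Squid cache_dir or MySQL partition may want a smaller recordsize
  zfs set recordsize=8k tank/mysql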
I now have a mail server that has been running for almost two months with
N ZFS filesystems (one per user) to simulate quotas (+/- 1000 users), with
success and completely stable, and performance is outstanding under all
loads.

The web server (Apache/PHP/MySQL) gave me major stability problems, but
distributing the data across zpools with different recordsizes depending
on workload, and never >64k, solved them, and I am apparently panic-free
now.

I run almost SCSI-only; just my test machines are SATA. The lowest
configuration is an X2 with 4G, the rest are X4s or Opterons with 8G or
more, and I am extremely satisfied and happy with ZFS.

My backups are running twice as fast as on UFS, mirroring compared to
gmirror is incredibly fast, and the ZFS snapshot feature deserves an
Oscar! ... and zfs send|receive another.

So thank you to everyone who had a hand in ZFS! (Sometimes I press reset
on my home server just to see how fast it comes up.) Just kidding, but the
truth is: thanks again! zfs is thE fs.

-- 
João
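PS: for the archives, this is roughly how the per-user quota filesystems
and the snapshot backups look here; the pool, user, and host names are
made up for the example:

  # one filesystem per user, with a quota standing in for user quotas
  zfs create -o quota=200M tank/mail/joao

  # backup: snapshot, then replicate to another box (the target
  # dataset must not exist yet on the first full send)
  zfs snapshot tank/mail@2008-06-28
  zfs send tank/mail@2008-06-28 | ssh backuphost zfs receive backup/mail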