From: Jean-Yves Avenard <jyavenard@gmail.com>
To: jhell
Cc: freebsd-stable@freebsd.org, Martin Matuska
Date: Wed, 29 Dec 2010 20:27:55 +1100
Subject: Re: New ZFSv28 patchset for 8-STABLE: Kernel Panic
In-Reply-To: <4D1AC0CC.7020907@DataIX.net>
References: <4D181E51.30401@DataIX.net> <4D1A70B7.6090809@FreeBSD.org> <4D1AC0CC.7020907@DataIX.net>
On Wednesday, 29 December 2010, jhell wrote:
> Another note too, I think I read that you mentioned using the L2ARC and
> slog device on the same disk.... You simply shouldn't do this it could
> be contributing to the real cause and there is absolutely no gain in
> either sanity or performance and you will end up bottle-necking your system.

And why would that be?

I've read so much conflicting information on the matter over the past few days that I'm starting to wonder if there's an actual definitive answer, or whether anyone really has a clue what they're talking about. It ranges from "you should only use raw disks", to "FreeBSD isn't Solaris, so slices are fine", to "don't use slices because they can't be read by another OS, use partitions", to "none of this applies to SSDs", and so on.

The way I look at it, the only thing that would bottleneck access to that SSD drive is the SATA interface itself. So whether I use two drives or two partitions on the same drive, I can't see how it would make much difference, if any, beyond the traditional "I think I know" argument. Surely latency as we know it with hard drives does not apply to SSDs.

Even Sun's official documentation contradicts itself, starting with the commands for adding and removing cache and log devices.

It seems to me that tuning ZFS is very much like black magic: everyone has their own idea about what to do, and not once did I get to read conclusive evidence about what is best, or find information that people actually agree on.

As for using unofficial code: sure, I accept that risk now. I made a conscious decision to use it, there's no way to go back now, and I accept that. At the end of the day, the only thing that will make that code suitable for real-world conditions is testing.
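For reference, this is my understanding of the setup being discussed: one SSD split into two GPT partitions, one used as a separate log device and one as cache. Device and label names below are examples only, not my actual configuration:

```shell
# Partition the SSD (example device ada2): a small slog
# partition plus the rest as L2ARC.
gpart create -s gpt ada2
gpart add -t freebsd-zfs -l slog0 -s 8G ada2
gpart add -t freebsd-zfs -l l2arc0 ada2

# Attach them to an existing pool (example pool "tank").
zpool add tank log gpt/slog0
zpool add tank cache gpt/l2arc0

# Both device types are detached with "zpool remove";
# log device removal requires ZFSv28 or later.
zpool remove tank gpt/slog0
zpool remove tank gpt/l2arc0
```

Whether putting both on one SSD is actually harmful is exactly the question I can't find an agreed answer to.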
If that particular code isn't put under actual stress, how else are you going to know whether it's good or not?

I don't like reading between the lines of your post that I shouldn't be surprised if anything breaks, or that it doesn't matter if it crashes. There's a deadlock occurring somewhere: it needs to be found. I know nothing about the ZFS code, so I could only do what I'm capable of under the circumstances: find a way to reproduce the problem consistently, and report as much information as I have so that someone more clued-in will know what to do with it.

Hope that makes sense,
Jean-Yves