From owner-freebsd-fs@FreeBSD.ORG Sun Mar 15 19:37:38 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 06372106566B for ; Sun, 15 Mar 2009 19:37:38 +0000 (UTC) (envelope-from ady@ady.ro) Received: from mail-ew0-f166.google.com (mail-ew0-f166.google.com [209.85.219.166]) by mx1.freebsd.org (Postfix) with ESMTP id 754408FC12 for ; Sun, 15 Mar 2009 19:37:37 +0000 (UTC) (envelope-from ady@ady.ro) Received: by ewy10 with SMTP id 10so3384573ewy.43 for ; Sun, 15 Mar 2009 12:37:36 -0700 (PDT) MIME-Version: 1.0 Received: by 10.210.120.17 with SMTP id s17mr3001028ebc.54.1237144186057; Sun, 15 Mar 2009 12:09:46 -0700 (PDT) Date: Sun, 15 Mar 2009 20:09:46 +0100 Message-ID: <78cb3d3f0903151209r46837d70m914a23e30a19060e@mail.gmail.com> From: Adrian Penisoara To: Pawel Jakub Dawidek Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org Subject: ETA for ZFS v. 13 Merge From HEAD ? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Mar 2009 19:37:39 -0000 Hi Pawel, Coming back to the subject, when do you think we might have a merge of r185029 (import of ZFS version 13) from head back into -stable ? Is there anything we can help with to speed up the process (e.g. testing) ? PS: ZFS-FUSE on Linux has also reached v 13... Thank you, Adrian Penisoara ROFUG / EnterpriseBSD --------------------------- Date: Wed, 26 Nov 2008 10:52:41 +0100 From: Pawel Jakub Dawidek Subject: Re: svn commit: r185029 - in head: cddl/compat/opensolaris/include cddl/compat/opensolaris/misc cddl/contrib/opensolaris/cmd/zdb cddl/contrib/opensolaris/cmd/zfs cddl/contrib/opensolaris/cmd/zinject cd... 
To: Attila Nagy Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org Message-ID: <20081126095241.GA3188@garage.freebsd.pl> Content-Type: text/plain; charset="us-ascii" On Wed, Nov 26, 2008 at 10:15:58AM +0100, Attila Nagy wrote: > Hello, > > Pawel Jakub Dawidek wrote: > >Author: pjd > >Date: Mon Nov 17 20:49:29 2008 > >New Revision: 185029 > >URL: http://svn.freebsd.org/changeset/base/185029 > > > >Log: > > Update ZFS from version 6 to 13 and bring some FreeBSD-specific changes. > > > This, and other changes stabilized ZFS by a great level in HEAD. > Do you plan to MFC these to 7-STABLE? Yes, but no ETA yet. -- Pawel Jakub Dawidek http://www.wheel.pl pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! From owner-freebsd-fs@FreeBSD.ORG Sun Mar 15 22:40:01 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B7A83106564A; Sun, 15 Mar 2009 22:40:01 +0000 (UTC) (envelope-from ken@mthelicon.com) Received: from hercules.mthelicon.com (hercules.mthelicon.com [IPv6:2001:49f0:2023::2]) by mx1.freebsd.org (Postfix) with ESMTP id 7C98E8FC1B; Sun, 15 Mar 2009 22:40:01 +0000 (UTC) (envelope-from ken@mthelicon.com) Received: from PegaPegII (78-33-209-59.static.enta.net [78.33.209.59] (may be forged)) (authenticated bits=0) by hercules.mthelicon.com (8.14.3/8.14.3) with ESMTP id n2FMduTh044864; Sun, 15 Mar 2009 22:39:58 GMT (envelope-from ken@mthelicon.com) Message-ID: <4AE4493D5E9141E8812E4BC83FB5A2A5@PegaPegII> From: "Pegasus Mc Cleaft" To: "Adrian Penisoara" , "Pawel Jakub Dawidek" References: <78cb3d3f0903151209r46837d70m914a23e30a19060e@mail.gmail.com> In-Reply-To: <78cb3d3f0903151209r46837d70m914a23e30a19060e@mail.gmail.com> Date: Sun, 15 Mar 2009 22:39:57 -0000 Organization: Feathers MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original
Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Windows Mail 6.0.6001.18000 X-MimeOLE: Produced By Microsoft MimeOLE V6.0.6001.18049 X-Antivirus: avast! (VPS 090315-0, 15/03/2009), Outbound message X-Antivirus-Status: Clean Cc: freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org Subject: Re: ETA for ZFS v. 13 Merge From HEAD ? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Pegasus Mc Cleaft List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Mar 2009 22:40:02 -0000 Hi Adrian, I am not sure, but I didn't think ZFS 13 was ever going to be merged into 7-stable. I thought the kernel memory requirements were too great (just going back in my memory on that one). Also, I think there are still a few bugs left with the zil being enabled (and/or prefetch) causing lockups on machines with a lot of IO. I know I have hit that bug a few times on my machine when using various torrent clients when they want to preallocate large amounts of disk space. I personally can't wait until a later version of ZFS is imported that supports encryption. I can finally say good-bye to our GEOM ELI USB drives for backups!! Nevertheless, I am quite thankful to those involved in porting V13 to FreeBSD. It's a wonderful improvement and my FS of choice when installing on new machines (especially zfs boot). Best regards, Peg ----- Original Message ----- From: "Adrian Penisoara" To: "Pawel Jakub Dawidek" Cc: ; Sent: Sunday, March 15, 2009 7:09 PM Subject: ETA for ZFS v. 13 Merge From HEAD ? > Hi Pawel, > Coming back to the subject, when do you think we might have a merge of > r185029 (import of ZFS version 13) from head back into -stable ? > > Is there anything we can help with to speed up the process (e.g. testing) > ? > > PS: ZFS-FUSE on Linux has also reached v 13... 
> > Thank you, > Adrian Penisoara > ROFUG / EnterpriseBSD > > --------------------------- > Date: Wed, 26 Nov 2008 10:52:41 +0100 > From: Pawel Jakub Dawidek > Subject: Re: svn commit: r185029 - in head: > cddl/compat/opensolaris/include cddl/compat/opensolaris/misc > cddl/contrib/opensolaris/cmd/zdb > cddl/contrib/opensolaris/cmd/zfs > cddl/contrib/opensolaris/cmd/zinject cd... > To: Attila Nagy > Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, > src-committers@freebsd.org > Message-ID: <20081126095241.GA3188@garage.freebsd.pl> > Content-Type: text/plain; charset="us-ascii" > > On Wed, Nov 26, 2008 at 10:15:58AM +0100, Attila Nagy wrote: >> Hello, >> >> Pawel Jakub Dawidek wrote: >> >Author: pjd >> >Date: Mon Nov 17 20:49:29 2008 >> >New Revision: 185029 >> >URL: http://svn.freebsd.org/changeset/base/185029 >> > >> >Log: >> > Update ZFS from version 6 to 13 and bring some FreeBSD-specific > changes. >> > >> This, and other changes stabilized ZFS by a great level in HEAD. >> Do you plan to MFC these to 7-STABLE? > > Yes, but no ETA yet. > > -- > Pawel Jakub Dawidek http://www.wheel.pl > pjd@FreeBSD.org http://www.FreeBSD.org > FreeBSD committer Am I Evil? Yes, I Am! 
> _______________________________________________ > freebsd-hackers@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-hackers > To unsubscribe, send any mail to "freebsd-hackers-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Mon Mar 16 11:06:54 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 71751106568B for ; Mon, 16 Mar 2009 11:06:54 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 5F37A8FC20 for ; Mon, 16 Mar 2009 11:06:54 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n2GB6sqX043231 for ; Mon, 16 Mar 2009 11:06:54 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n2GB6rCl043227 for freebsd-fs@FreeBSD.org; Mon, 16 Mar 2009 11:06:53 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 16 Mar 2009 11:06:53 GMT Message-Id: <200903161106.n2GB6rCl043227@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Mar 2009 11:06:54 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. 
These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/132597 fs [tmpfs] [panic] tmpfs-related panic while interrupting o kern/132551 fs [zfs] ZFS locks up on extattr_list_link syscall o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132337 fs [zfs] [panic] kernel panic in zfs_fuid_create_cred o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132145 fs [panic] File System Hard Crashes f kern/132068 fs [zfs] page fault when using ZFS over NFS on 7.1-RELEAS o kern/131995 fs [nfs] Failure to mount NFSv4 server o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/131086 fs [ext2fs] mkfs.ext2 creates rotten partition o kern/131084 fs [xfs] xfs destroys itself after copying data o kern/131081 fs [zfs] User cannot delete a file when a ZFS dataset is o kern/130979 fs [smbfs] [panic] boot/kernel/smbfs.ko o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130229 fs [iconv] usermount fails on fs that need iconv o kern/130210 fs [nullfs] Error by check nullfs o bin/130105 fs [zfs] zfs send -R dumps core o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129174 fs [nfs] [zfs] [panic] NFS v3 Panic when under high load o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129084 fs [udf] [panic] [lor] udf panic: getblk: size(67584) > M f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/128633 fs [zfs] [lor] lock order reversal in zfs o kern/128514 fs [zfs] [mpt] problems with ZFS and LSILogic SAS/SATA Ad f kern/128173 fs [ext2fs] ls gives 
"Input/output error" on mounted ext3 o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour o kern/127213 fs [tmpfs] sendfile on tmpfs data corruption o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file f kern/125536 fs [ext2fs] ext 2 mounts cleanly but fails on commands li o kern/125149 fs [nfs] [panic] changing into .zfs dir from nfs client c f kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition o kern/122888 fs [zfs] zfs hang w/ prefetch on, zil off while running t o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o bin/118249 fs mv(1): moving a directory changes its mtime o kern/116170 fs [panic] Kernel panic when mounting /tmp o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po 49 problems total. 
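The bugmaster listing above uses one fixed-format entry per line (state letter, tracker id, responsible party, description), which splits cleanly for scripting. A small illustrative Python sketch of parsing it; the regex, field names, and the assumed set of GNATS state letters are my own, not part of any FreeBSD tool, and the descriptions truncated in the listing are left exactly as they appear:

```python
import re

# One PR per line: state letter, "category/number" tracker, responsible
# party, then the (possibly truncated) description. The state letters
# [ofisa] are an assumption covering common GNATS states.
LINE = re.compile(r"^(?P<state>[ofisa])\s+(?P<tracker>\w+/\d+)\s+(?P<resp>\S+)\s+(?P<desc>.*)$")

def parse_pr(line):
    """Return (state, tracker, responsible, description), or None if it isn't a PR line."""
    m = LINE.match(line)
    if m is None:
        return None
    return (m.group("state"), m.group("tracker"), m.group("resp"), m.group("desc"))

# Sample entries copied from the listing above; truncation preserved.
sample = [
    "o kern/132551 fs [zfs] ZFS locks up on extattr_list_link syscall",
    "f kern/132068 fs [zfs] page fault when using ZFS over NFS on 7.1-RELEAS",
]

for s in sample:
    print(parse_pr(s))
```

Each tracker id can then be plugged into the query-pr.cgi URL mentioned at the top of the listing.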
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 16 18:09:06 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B8AE410656DB; Mon, 16 Mar 2009 18:09:06 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from mail-gx0-f176.google.com (mail-gx0-f176.google.com [209.85.217.176]) by mx1.freebsd.org (Postfix) with ESMTP id 43CAC8FC14; Mon, 16 Mar 2009 18:09:06 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: by gxk24 with SMTP id 24so1146422gxk.19 for ; Mon, 16 Mar 2009 11:09:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=bjnvT/CCzR2jcCtHGO0EDsMfE98Q04KLtdIeGtK9MAI=; b=Jx1VToL3e/mKXZYfqodtppsgHlepPaj9LkqGwDIvZmCeqXfK5w5U2nitwufYBV2k7O b08Cidn9OiTS+C+tXkjIXX5eLM1pw7BgRyrj7Ysap3xwfpVQcymr+ZlpnhHfpqBhNSHt +1l9VVfQ2vh+qu/0CPkwrq3sP78U6JEPflNmg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=O+yr69essfoNgKexh+9mr8pibPMMP5qXIPnvu6ul41+vwwI+CR08Z11Nu6c63UH1t2 w/yqM8DM3AG8lT/COzQ1b7ikbRF9YN1kWHA8DcGmiKiBmqqTNMyZSVLrfE14F4+wY46o SoppKFCnVdC+L58mT2MFxIV/xA9elzImqVPqI= MIME-Version: 1.0 Received: by 10.142.246.19 with SMTP id t19mr2226516wfh.9.1237226945095; Mon, 16 Mar 2009 11:09:05 -0700 (PDT) In-Reply-To: <4AE4493D5E9141E8812E4BC83FB5A2A5@PegaPegII> References: <78cb3d3f0903151209r46837d70m914a23e30a19060e@mail.gmail.com> <4AE4493D5E9141E8812E4BC83FB5A2A5@PegaPegII> Date: Mon, 16 Mar 2009 14:09:05 -0400 Message-ID: <5f67a8c40903161109le12b8afuc25b8c1ec1b6f70c@mail.gmail.com> From: Zaphod Beeblebrox To: Pegasus Mc Cleaft Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek , 
Adrian Penisoara , freebsd-hackers@freebsd.org Subject: Re: ETA for ZFS v. 13 Merge From HEAD ? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Mar 2009 18:09:08 -0000 On Sun, Mar 15, 2009 at 6:39 PM, Pegasus Mc Cleaft wrote: > Hi Adrian, > > I am not sure, but I didn't think ZFS 13 was ever going to be merged into > 7-stable. I thought the kernel memory requirements were too great (just going > back in my memory on that one). Also, I think there are still a few bugs > left with the zil being enabled (and/or prefetch) causing lockups on machines > with a lot of IO. I know I have hit that bug a few times on my machine when > using various torrent clients when they want to preallocate large amounts of > disk space. > > I personally can't wait until a later version of ZFS is imported that > supports encryption. I can finally say good-bye to our GEOM ELI USB drives > for backups!! Nevertheless, I am quite thankful to those involved in > porting V13 to FreeBSD. It's a wonderful improvement and my FS of choice > when installing on new machines (especially zfs boot). I think that you're touching on two entirely separate points here... What it takes to upgrade ZFS in -STABLE and what it takes to bring ZFS modules into FreeBSD. I sincerely hope that ZFSv13 is planned for -STABLE. Last we left this issue, testing and a few kernel improvements were in the way. None of the kernel improvements were going to change the API, so the project was doable in -STABLE. That said, time marches on, 8.0-RELEASE draws ever nearer. When we were still several years out on 8.0 and ZFS was causing me more problems, I was much more keen to push for the port. I would still welcome it with open arms, but I'm not convinced that anyone is going to push it forward. 
The issue of encryption (along with many other issues) is tied to the ability of FreeBSD to compile and use ZFS modules. Just like netgraph modules extend the function of netgraph.ko and geom modules extend the base geom function, ZFS is designed (in Solaris, at least) to take modules. ZFS encryption is a module. I'm not clear on compression --- it would make sense that it is a module, but it seemingly got copied into FreeBSD as a core feature (and it may also be so in Solaris). Anyway... are there any plans to allow for ZFS modules in FreeBSD? From owner-freebsd-fs@FreeBSD.ORG Tue Mar 17 23:04:58 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 901071065677 for ; Tue, 17 Mar 2009 23:04:58 +0000 (UTC) (envelope-from toasty@dragondata.com) Received: from tokyo01.jp.mail.your.org (tokyo01.jp.mail.your.org [204.9.54.5]) by mx1.freebsd.org (Postfix) with ESMTP id 0A6788FC15 for ; Tue, 17 Mar 2009 23:04:57 +0000 (UTC) (envelope-from toasty@dragondata.com) Received: from tokyo01.jp.mail.your.org (localhost.your.org [127.0.0.1]) by tokyo01.jp.mail.your.org (Postfix) with ESMTP id 21D122AD55AF for ; Tue, 17 Mar 2009 23:04:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=dragondata.com; h=cc :message-id:from:to:in-reply-to:content-type :content-transfer-encoding:mime-version:subject:date:references; s=selector1; bh=mRvDFuL5o1KwvM+OmLyXGohvwYc=; b=IgY69ChzUjNau4u AqYzMkOSHjGfTQpVj8ykM1fQ6oNdfMnkMIAQC04ipqzHcw1tB+8AIHQ6nT4b2xIv r3LIaeoH+qvB11yYOG/Ha0V7u+Cm9Dh2PA6Og88ptIJXO8COs8/yjiPgAmmpgehh FmSJRdOyYi30vNIVaNb7Rr8Q/xIo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=dragondata.com; h=cc:message-id :from:to:in-reply-to:content-type:content-transfer-encoding :mime-version:subject:date:references; q=dns; s=selector1; b=dw0 
Uk5QHIYeyMMJyYc26APNQWRk5LSIxs4lwyzoyc0SUDfGMQwzBs51e8wXfHbzWcXw GnDw/xu68zghWpWdlA2mE3bi9aLk4jYo1CkCG4dk= Received: from mail.your.org (server2-a.your.org [216.14.97.66]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by tokyo01.jp.mail.your.org (Postfix) with ESMTPS id 6B5D92AD55A8 for ; Tue, 17 Mar 2009 23:04:55 +0000 (UTC) Received: from pool011.dhcp.your.org (pool011.dhcp.your.org [69.31.99.11]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.your.org (Postfix) with ESMTPSA id EAB7D2C900F; Tue, 17 Mar 2009 23:03:14 +0000 (UTC) Message-Id: From: Kevin Day To: Kevin Day In-Reply-To: <8E12CEFC-25DE-4B82-97BD-7ED717650089@dragondata.com> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Tue, 17 Mar 2009 18:04:52 -0500 References: <8E12CEFC-25DE-4B82-97BD-7ED717650089@dragondata.com> X-Mailer: Apple Mail (2.930.3) Cc: freebsd-fs@freebsd.org Subject: Re: zio->io_cv deadlock X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Mar 2009 23:04:59 -0000 I've got a test environment where I can make this deadlock happen within 3-4 hours of use now. This is from -CURRENT as of yesterday. This server isn't trying to use zfs on root, so when it hangs I'm not quite as bad off. Here's a ps output: USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND UID PPID CPU PRI NI MWCHAN SL RE PAGEIN LIM TSIZ root 477 0.0 0.0 2180 656 ?? Is 9:34AM 0:00.00 /sbin/ devd 0 1 0 44 0 select 127 127 0 - 336 root 593 0.0 0.0 5780 1444 ?? Is 9:35AM 0:00.05 /usr/ sbin/syslog 0 1 0 44 0 select 21 127 0 - 36 root 811 0.0 0.0 24872 4172 ?? Is 9:35AM 0:00.00 /usr/ sbin/sshd 0 1 0 44 0 select 127 127 0 - 220 root 837 0.0 0.0 6836 1532 ?? 
Is 9:35AM 0:00.07 /usr/ sbin/cron - 0 1 0 44 0 nanslp 27 127 0 - 36 root 974 0.0 0.0 8232 2484 ?? Ss 9:38AM 1:19.01 screen 0 1 0 44 0 select 0 127 6 - 292 root 1612 0.0 0.0 38852 7856 ?? Ss 2:49PM 0:00.16 sshd: root@pts/0 0 811 0 44 0 select 0 127 0 - 220 root 1617 0.0 0.0 10188 2792 0 Is 2:49PM 0:00.01 -csh (csh) 0 1612 0 47 0 pause 127 127 0 - 304 root 1622 0.0 0.0 8232 2132 0 S+ 2:49PM 0:00.03 screen - x 0 1617 0 44 0 pause 1 127 0 - 292 root 975 0.0 0.0 10188 2704 1 Is 9:38AM 0:00.01 /bin/ csh 0 974 0 49 0 pause 127 127 0 - 304 root 980 0.0 1.1 794196 766248 1 D+ 9:38AM 66:50.20 rsync - ravH root 0 975 0 69 0 zio->i 127 127 1 - 344 root 982 0.0 0.2 181844 142444 1 I+ 9:38AM 69:32.28 rsync - ravH root 0 980 0 44 0 select 57 127 0 - 344 root 983 0.0 0.0 10188 2788 2 Ss 9:38AM 0:00.01 /bin/ csh 0 974 0 44 0 pause 0 127 0 - 304 root 1 0.0 0.0 2176 596 ?? ILs 9:34AM 0:00.01 /sbin/ init -- 0 0 0 44 0 wait 127 127 8 - 604 root 827 0.0 0.0 10796 3800 ?? Ss 9:35AM 0:00.50 sendmail: accept 0 1 0 44 0 select 3 127 1 - 628 smmsp 831 0.0 0.0 10796 3844 ?? Is 9:35AM 0:00.01 sendmail: Queue 25 1 0 44 0 pause 127 127 0 - 628 root 887 0.0 0.0 5776 1224 v0 Is+ 9:35AM 0:00.01 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 888 0.0 0.0 5776 1224 v1 Is+ 9:35AM 0:00.00 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 889 0.0 0.0 5776 1224 v2 Is+ 9:35AM 0:00.01 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 890 0.0 0.0 5776 1224 v3 Is+ 9:35AM 0:00.00 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 2 - 20 root 891 0.0 0.0 5776 1224 v4 Is+ 9:35AM 0:00.01 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 892 0.0 0.0 5776 1224 v5 Is+ 9:35AM 0:00.01 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 893 0.0 0.0 5776 1224 v6 Is+ 9:35AM 0:00.01 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 894 0.0 0.0 5776 1224 v7 Is+ 9:35AM 0:00.01 /usr/ libexec/get 0 1 0 76 0 ttyin 127 127 0 - 20 root 981 0.0 0.0 27668 11220 1 S+ 9:38AM 172:25.63 ssh -l root 216. 
0 980 0 44 0 select 1 127 0 - 120 root 2112 0.0 0.0 6892 1416 2 R+ 6:48PM 0:00.00 ps auxwwlv 0 983 0 44 0 - 127 0 0 - 28 root 0 0.0 0.0 0 128 ?? DLs 9:34AM 46:56.19 [kernel] 0 0 0 -68 0 - 127 127 0 - 0 root 2 0.0 0.0 0 16 ?? DL 9:34AM 0:01.23 [g_event] 0 0 0 -8 0 - 0 127 0 - 0 root 3 0.0 0.0 0 16 ?? DL 9:34AM 0:26.74 [g_up] 0 0 0 -8 0 - 0 127 0 - 0 root 4 0.0 0.0 0 16 ?? DL 9:34AM 5:51.49 [g_down] 0 0 0 -8 0 - 0 127 0 - 0 root 5 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [xpt_thrd] 0 0 0 -16 0 ccb_sc 127 127 0 - 0 root 6 0.0 0.0 0 16 ?? DL 9:34AM 0:00.14 [fdc0] 0 0 0 -16 0 - 0 127 0 - 0 root 7 0.0 0.0 0 24 ?? DL 9:34AM 0:00.00 [sctp_iterator] 0 0 0 -16 0 waitin 127 127 0 - 0 root 8 0.0 0.0 0 16 ?? DL 9:34AM 0:00.03 [pagedaemon] 0 0 0 -16 0 psleep 4 127 0 - 0 root 9 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [vmdaemon] 0 0 0 -16 0 psleep 127 127 0 - 0 root 10 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [audit] 0 0 0 -16 0 audit_ 127 127 0 - 0 root 11 800.0 0.0 0 128 ?? RL 9:34AM 3958:43.82 [idle] 0 0 0 171 0 - 127 127 0 - 0 root 12 0.0 0.0 0 400 ?? WL 9:34AM 6:03.01 [intr] 0 0 0 -60 0 - 127 127 0 - 0 root 13 0.0 0.0 0 16 ?? DL 9:34AM 0:44.77 [yarrow] 0 0 0 44 0 - 0 127 0 - 0 root 14 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [usbus0] 0 0 0 -64 0 wmsg 127 127 0 - 0 root 15 0.0 0.0 0 16 ?? DL 9:34AM 0:00.92 [usbus0] 0 0 0 -68 0 wmsg 2 127 0 - 0 root 16 0.0 0.0 0 16 ?? DL 9:34AM 0:00.56 [usbus0] 0 0 0 -68 0 wmsg 2 127 0 - 0 root 17 0.0 0.0 0 16 ?? DL 9:34AM 0:00.55 [usbus0] 0 0 0 -64 0 wmsg 2 127 0 - 0 root 18 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [usbus1] 0 0 0 -64 0 wmsg 127 127 0 - 0 root 19 0.0 0.0 0 16 ?? DL 9:34AM 0:00.92 [usbus1] 0 0 0 -68 0 wmsg 2 127 0 - 0 root 20 0.0 0.0 0 16 ?? DL 9:34AM 0:00.52 [usbus1] 0 0 0 -68 0 wmsg 2 127 0 - 0 root 21 0.0 0.0 0 16 ?? DL 9:34AM 0:00.60 [usbus1] 0 0 0 -64 0 wmsg 2 127 0 - 0 root 22 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [pagezero] 0 0 0 76 0 pgzero 127 127 0 - 0 root 23 0.0 0.0 0 16 ?? 
DL 9:34AM 0:00.14 [bufdaemon] 0 0 0 -16 0 psleep 0 127 0 - 0 root 24 0.0 0.0 0 16 ?? DL 9:34AM 0:24.78 [syncer] 0 0 0 44 0 zfsvfs 127 127 0 - 0 root 25 0.0 0.0 0 16 ?? DL 9:34AM 0:02.17 [vnlru] 0 0 0 44 0 vlruwt 0 127 0 - 0 root 26 0.0 0.0 0 16 ?? DL 9:34AM 0:00.21 [softdepflush] 0 0 0 -16 0 sdflus 0 127 0 - 0 root 88 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 89 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 90 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 91 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 92 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 93 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 94 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 95 0.0 0.0 0 16 ?? DL 9:34AM 0:00.00 [system_taskq] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 104 0.0 0.0 0 16 ?? DL 9:34AM 5:11.71 [arc_reclaim_thr 0 0 0 44 0 arc_re 0 127 0 - 0 root 105 0.0 0.0 0 16 ?? DL 9:34AM 0:00.15 [l2arc_feed_thre 0 0 0 -16 0 l2arc_ 0 127 0 - 0 root 936 0.0 0.0 0 16 ?? DL 9:37AM 0:00.55 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 937 0.0 0.0 0 16 ?? DL 9:37AM 0:00.17 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 938 0.0 0.0 0 16 ?? DL 9:37AM 0:00.00 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 939 0.0 0.0 0 16 ?? DL 9:37AM 0:08.89 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 940 0.0 0.0 0 16 ?? DL 9:37AM 0:31.71 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 941 0.0 0.0 0 16 ?? DL 9:37AM 0:08.91 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 942 0.0 0.0 0 16 ?? DL 9:37AM 0:08.76 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 943 0.0 0.0 0 16 ?? DL 9:37AM 0:08.79 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 944 0.0 0.0 0 16 ?? 
DL 9:37AM 0:09.21 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 945 0.0 0.0 0 16 ?? DL 9:37AM 0:08.83 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 946 0.0 0.0 0 16 ?? DL 9:37AM 0:08.88 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 947 0.0 0.0 0 16 ?? DL 9:37AM 1:34.28 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 948 0.0 0.0 0 16 ?? DL 9:37AM 1:34.28 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 949 0.0 0.0 0 16 ?? DL 9:37AM 1:34.25 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 950 0.0 0.0 0 16 ?? DL 9:37AM 1:34.30 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 951 0.0 0.0 0 16 ?? DL 9:37AM 1:34.74 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 952 0.0 0.0 0 16 ?? DL 9:37AM 1:34.38 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 953 0.0 0.0 0 16 ?? DL 9:37AM 1:34.36 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 954 0.0 0.0 0 16 ?? DL 9:37AM 1:34.14 [spa_zio] 0 0 0 44 0 tq->tq 127 127 0 - 0 root 955 0.0 0.0 0 16 ?? DL 9:37AM 2:46.35 [spa_zio] 0 0 0 44 0 zfsvfs 127 127 0 - 0 root 956 0.0 0.0 0 16 ?? DL 9:37AM 0:00.00 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 957 0.0 0.0 0 16 ?? DL 9:37AM 0:00.00 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 958 0.0 0.0 0 16 ?? DL 9:37AM 0:00.00 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 959 0.0 0.0 0 16 ?? DL 9:37AM 0:00.00 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 960 0.0 0.0 0 16 ?? DL 9:37AM 0:00.00 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 961 0.0 0.0 0 16 ?? DL 9:37AM 0:00.11 [spa_zio] 0 0 0 -16 0 tq->tq 127 127 0 - 0 root 962 0.0 0.0 0 16 ?? DL 9:37AM 0:24.24 [vdev:worker da1 0 0 0 44 0 vgeom: 127 127 0 - 0 root 963 0.0 0.0 0 16 ?? DL 9:37AM 0:00.21 [txg_thread_ente 0 0 0 -16 0 tx->tx 127 127 0 - 0 root 964 0.0 0.0 0 28 ?? DL 9:37AM 1:19.07 [txg_thread_ente 0 0 0 44 0 tx->tx 127 127 0 - 0 root 965 0.0 0.0 0 16 ?? DL 9:37AM 0:12.34 [zil_clean] 0 0 0 -16 0 tq->tq 127 127 0 - 0 Note that syncer and one spa_zio are stuck in zfsvfs, and my rsync process is frozen in zio->io_cv. 
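One way to read a listing like the one above is to scan the wait-channel (MWCHAN) column for ZFS-related channels such as zio->i, zfsvfs, tq->tq, and arc_re. A Python sketch of that filtering over a hand-simplified rendering of a few rows from the output above; real ps output has many more columns and the row tuples here are my own reduction, not actual ps fields:

```python
# ZFS-related wait channels, as they appear (truncated) in ps output.
ZFS_WCHANS = ("zio->i", "zfsvfs", "tq->tq", "arc_re")

# Simplified (command, pid, wchan) rows distilled from the listing above.
rows = [
    ("rsync", 980, "zio->i"),
    ("syncer", 24, "zfsvfs"),
    ("sshd", 1612, "select"),
    ("spa_zio", 955, "zfsvfs"),
]

# Keep only the processes blocked inside ZFS.
stuck = [(cmd, pid, wchan) for cmd, pid, wchan in rows if wchan in ZFS_WCHANS]

for cmd, pid, wchan in stuck:
    print(f"{cmd} (pid {pid}) blocked on {wchan}")
```

This matches the observation in the message: the rsync process on zio->io_cv, plus syncer and one spa_zio thread on zfsvfs.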
zpool makes everything look okay: cs03# zpool iostat -v capacity operations bandwidth pool used avail read write read write ---------- ----- ----- ----- ----- ----- ----- z 549G 13.0T 14 133 709K 12.9M da1 549G 13.0T 14 133 709K 12.9M ---------- ----- ----- ----- ----- ----- ----- cs03# zpool status -v pool: z state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM z ONLINE 0 0 0 da1 ONLINE 0 0 0 errors: No known data errors ZFS related sysctls: vfs.zfs.arc_meta_limit: 26214400 vfs.zfs.arc_meta_used: 70401408 vfs.zfs.mdcomp_disable: 0 vfs.zfs.arc_min: 140928537 vfs.zfs.arc_max: 104857600 vfs.zfs.zfetch.array_rd_sz: 1048576 vfs.zfs.zfetch.block_cap: 256 vfs.zfs.zfetch.min_sec_reap: 2 vfs.zfs.zfetch.max_streams: 8 vfs.zfs.prefetch_disable: 0 vfs.zfs.recover: 0 vfs.zfs.txg.synctime: 5 vfs.zfs.txg.timeout: 30 vfs.zfs.scrub_limit: 10 vfs.zfs.vdev.cache.bshift: 16 vfs.zfs.vdev.cache.size: 10485760 vfs.zfs.vdev.cache.max: 16384 vfs.zfs.vdev.aggregation_limit: 131072 vfs.zfs.vdev.ramp_rate: 2 vfs.zfs.vdev.time_shift: 6 vfs.zfs.vdev.min_pending: 4 vfs.zfs.vdev.max_pending: 35 vfs.zfs.cache_flush_disable: 0 vfs.zfs.zil_disable: 0 vfs.zfs.version.zpl: 3 vfs.zfs.version.vdev_boot: 1 vfs.zfs.version.spa: 13 vfs.zfs.version.dmu_backup_stream: 1 vfs.zfs.version.dmu_backup_header: 2 vfs.zfs.version.acl: 1 vfs.zfs.debug: 0 vfs.zfs.super_owner: 0 kstat.zfs.misc.arcstats.hits: 57819155 kstat.zfs.misc.arcstats.misses: 5590858 kstat.zfs.misc.arcstats.demand_data_hits: 63981 kstat.zfs.misc.arcstats.demand_data_misses: 635 kstat.zfs.misc.arcstats.demand_metadata_hits: 47525277 kstat.zfs.misc.arcstats.demand_metadata_misses: 4114419 kstat.zfs.misc.arcstats.prefetch_data_hits: 19665 kstat.zfs.misc.arcstats.prefetch_data_misses: 809 kstat.zfs.misc.arcstats.prefetch_metadata_hits: 10210232 kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1474995 kstat.zfs.misc.arcstats.mru_hits: 11018143 kstat.zfs.misc.arcstats.mru_ghost_hits: 937056 kstat.zfs.misc.arcstats.mfu_hits: 
37133185 kstat.zfs.misc.arcstats.mfu_ghost_hits: 3428689 kstat.zfs.misc.arcstats.deleted: 7789751 kstat.zfs.misc.arcstats.recycle_miss: 6302888 kstat.zfs.misc.arcstats.mutex_miss: 9406 kstat.zfs.misc.arcstats.evict_skip: 418277598 kstat.zfs.misc.arcstats.hash_elements: 9872 kstat.zfs.misc.arcstats.hash_elements_max: 124517 kstat.zfs.misc.arcstats.hash_collisions: 112246 kstat.zfs.misc.arcstats.hash_chains: 96 kstat.zfs.misc.arcstats.hash_chain_max: 3 kstat.zfs.misc.arcstats.p: 92080153 kstat.zfs.misc.arcstats.c: 140928537 kstat.zfs.misc.arcstats.c_min: 140928537 kstat.zfs.misc.arcstats.c_max: 104857600 kstat.zfs.misc.arcstats.size: 141645696 kstat.zfs.misc.arcstats.hdr_size: 2617264 kstat.zfs.misc.arcstats.l2_hits: 0 kstat.zfs.misc.arcstats.l2_misses: 0 kstat.zfs.misc.arcstats.l2_feeds: 0 kstat.zfs.misc.arcstats.l2_rw_clash: 0 kstat.zfs.misc.arcstats.l2_writes_sent: 0 kstat.zfs.misc.arcstats.l2_writes_done: 0 kstat.zfs.misc.arcstats.l2_writes_error: 0 kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0 kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0 kstat.zfs.misc.arcstats.l2_evict_reading: 0 kstat.zfs.misc.arcstats.l2_free_on_write: 0 kstat.zfs.misc.arcstats.l2_abort_lowmem: 0 kstat.zfs.misc.arcstats.l2_cksum_bad: 0 kstat.zfs.misc.arcstats.l2_io_error: 0 kstat.zfs.misc.arcstats.l2_size: 0 kstat.zfs.misc.arcstats.l2_hdr_size: 0 kstat.zfs.misc.arcstats.memory_throttle_count: 0 kstat.zfs.misc.vdev_cache_stats.delegations: 671323 kstat.zfs.misc.vdev_cache_stats.hits: 4416731 kstat.zfs.misc.vdev_cache_stats.misses: 349266 There does seem to be something stuck in the syncer: vfs.worklist_len: 11 (that doesn't go down or move at all), but that doesn't tell me much. Next time this happens, is there anything else I should look at? -- Kevin On Feb 8, 2009, at 10:59 PM, Kevin Day wrote: > > I'm playing with a -CURRENT install from a couple of weeks ago. > Everything seems okay for a few days, then eventually every process > ends up stuck in zio->io_cv. 
> If I go to the console, it's responsive until I try logging in,
> then login is stuck in zio->io_cv as well.
> Ctrl-Alt-Esc drops me into ddb, but then ddb hangs instantly.
>
> Nothing on the console or syslog before it hangs.
>
> Anyone seen anything similar?
>
> -- Kevin
>
> Possibly relevant info:
>
> 8 core Opteron
> 64GB RAM
>
> da1 at twa0 bus 0 target 0 lun 1
> da1: Fixed Direct Access SCSI-5 device
> da1: 100.000MB/s transfers
> da1: 4678158MB (9580867585 512 byte sectors: 255H 63S/T 596381C)
>
> server5# zpool list
> NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
> z     4.44T  1.19T  3.25T   26%  ONLINE  -
>
> server5# zpool status -v
>   pool: z
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         z           ONLINE       0     0     0
>           da1       ONLINE       0     0     0
>
> errors: No known data errors
>
> server5# cat /boot/loader.conf
> vm.kmem_size_max="2048M"
> vm.kmem_size="2048M"
> vfs.zfs.arc_max="100M"
> zfs_load="YES"
> vfs.root.mountfrom="zfs:z"
>
> (tried lowering arc_max, didn't seem to help)
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 18 06:34:32 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 98EB21065670 for ; Wed, 18 Mar 2009 06:34:32 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: from nf-out-0910.google.com (nf-out-0910.google.com [64.233.182.184]) by mx1.freebsd.org (Postfix) with ESMTP id DA1508FC16 for ; Wed, 18 Mar 2009 06:34:31 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: by nf-out-0910.google.com with SMTP id b11so66534nfh.33 for ; Tue, 17 Mar 2009 23:34:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma;
h=domainkey-signature:mime-version:received:date:message-id:subject :from:to:content-type; bh=ZnJLGNn5PArgnOSBntNTvjbXTSxb3AR0QT84Nt6lqHU=; b=p/TUGL3meOscXrMrZatyeVk7EP3AfvScJvDXUo+6eHBLOcVgfLdfhbxEN67/7WDF3U pTWJo7HpuVY+8c7A2qUA/7j5jA5C1V0TogVNjMkhJi4NjiVZGxkzFXBRhlnJLrWAVucw J9+xt2OfXRUr67h7rDxuPq7G1M2TlFHjzv/fY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=xRE4vrVZU+RpXDwkuI5UWzkW+TX38+AP6lYKe6/TC0+WsonButKD++RwIflcA9Xgzp /Szr0Xyu7jcM8bdzdUxhIjsYAOTFaenoTzbUJGELs5Zy6UbuhxDXob9BoLi2lZoyaynO HZxh8HXxMbqNloM0b72CpbotaoYscyppb9it0= MIME-Version: 1.0 Received: by 10.216.18.199 with SMTP id l49mr344963wel.23.1237356292934; Tue, 17 Mar 2009 23:04:52 -0700 (PDT) Date: Wed, 18 Mar 2009 02:04:52 -0400 Message-ID: From: grarpamp To: freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org Content-Type: multipart/mixed; boundary=0016e64c2a484d0bb304655e769f Cc: Subject: ZFS version list [was ETA for ZFS ver: n] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Mar 2009 06:34:32 -0000 --0016e64c2a484d0bb304655e769f Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit ZFS version list [was ETA for ZFS ver: n] I needed raw, bit reliable, stable, encrypted storage. ZFS gave all but the last part so far. None of the features since v6 were useful to me. And as with most software, there are surely tons of fixes and optimizations being handled silently that are useful. Additions at or before v6 that were nifty: compression hot spares raidz2 ditto blocks sha256 - chained back to the uberblock thing Integrated crypto will be very useful, simply to eliminate that GEOM. Even if GBDE and GELI are cool :) Hopefully ZFS will include a strong 256 bit cipher along with other options. 
My guess is that it will be out from SUN midyear, before FBSD 8.0, and thus a potential for 8.0. The ZFS iSCSI bit might be cool. Putting things like that all under the ZFS hierarchy could be sickly entertaining :) If BSD chflags(2) schg, as on UFS, does or will work on ZFS, that's cool. See the Solaris chmod command. FBSD could very well have magically encrypted user homedirs that make use of some of the inherent ZFS [delegation, etc?] features. login could be hacked as could sshd or possibly pamify things. Haven't really thought about it other than Apple has it. Don't know about other BSD's. It is awesome that FBSD has ZFS! No matter what gets done when, thanks for all the work on it... past, present and on into future. Version list attached for people to reference... --0016e64c2a484d0bb304655e769f Content-Type: text/plain; charset=US-ASCII; name="zfs_ver_list.txt" Content-Disposition: attachment; filename="zfs_ver_list.txt" Content-Transfer-Encoding: base64 X-Attachment-Id: f_fsflywc8 Cj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KaHR0cDovL29wZW5zb2xh cmlzLm9yZy9vcy9jb21tdW5pdHkvemZzL3ZlcnNpb24vPG4+Lwo9PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09CgpaRlMgUG9vbCBWZXJzaW9uIDE0CgpUaGlzIHZlcnNpb24g aW5jbHVkZXMgc3VwcG9ydCBmb3IgdGhlIGZvbGxvd2luZyBmZWF0dXJlOgoKICAgICogcGFzc3Ro cm91Z2gteCBhY2xpbmhlcml0IHByb3BlcnR5IHN1cHBvcnQKClRoaXMgZmVhdHVyZSBpcyBhdmFp bGFibGUgaW46CgogICAgKiBTb2xhcmlzIEV4cHJlc3MgQ29tbXVuaXR5IEVkaXRpb24sIGJ1aWxk IDEwMwoKVGhlIHJlbGF0ZWQgYnVnIGFuZCBQU0FSQyBjYXNlIGZvciB0aGUgdmVyc2lvbiAxNCBj aGFuZ2UgYXJlOgoKICAgICogNjc2NTE2NiBOZWVkIHRvIHByb3ZpZGUgbWVjaGFuaXNtIHRvIG9w dGlvbmFsbHkgaW5oZXJpdAogICAgQUNFX0VYRUNVVEUKICAgICogUFNBUkMgMjAwOC82NTkgTmV3 IFpGUyAicGFzc3Rocm91Z2gteCIgQUNMIGluaGVyaXRhbmNlIHJ1bGVzCgo9PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09CgpaRlMgUG9vbCBWZXJzaW9uIDEzCgpUaGlzIHZl cnNpb24gaW5jbHVkZXMgc3VwcG9ydCBmb3IgdGhlIGZvbGxvd2luZyBmZWF0dXJlczoKCiAgICAq 
IHVzZWRieXNuYXBzaG90cyBwcm9wZXJ0eQogICAgKiB1c2VkYnljaGlsZHJlbiBwcm9wZXJ0eQog ICAgKiB1c2VkYnlyZWZyZXNlcnZhdGlvbiBwcm9wZXJ0eQogICAgKiB1c2VkYnlkYXRhc2V0IHBy b3BlcnR5CgpUaGVzZSBmZWF0dXJlcyBhcmUgYXZhaWxhYmxlIGluOgoKICAgICogU29sYXJpcyBF eHByZXNzIENvbW11bml0eSBFZGl0aW9uLCBidWlsZCA5OAoKVGhlIHJlbGF0ZWQgYnVnIGFuZCBQ U0FSQyBjYXNlIGZvciB2ZXJzaW9uIDEzIGNoYW5nZSBpczoKCiAgICAqIDY3MzA3OTkgd2FudCBz bmFwdXNlZCBwcm9wZXJ0eQogICAgKiBQU0FSQyAyMDA4LzUxOCBaRlMgc3BhY2UgYWNjb3VudGlu ZyBlbmhhbmNlbWVudHMKCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0K ClpGUyBQb29sIFZlcnNpb24gMTIKClRoaXMgdmVyc2lvbiBpbmNsdWRlcyBzdXBwb3J0IGZvciB0 aGUgZm9sbG93aW5nIGZlYXR1cmU6CgogICAgKiBQcm9wZXJ0aWVzIGZvciBTbmFwc2hvdHMKClRo aXMgZmVhdHVyZSBpcyBhdmFpbGFibGUgaW46CgogICAgKiBTb2xhcmlzIEV4cHJlc3MgQ29tbXVu aXR5IEVkaXRpb24sIGJ1aWxkIDk2CgpUaGUgcmVsYXRlZCBidWcgZm9yIHRoZSB2ZXJzaW9uIDEy IGNoYW5nZSBpczoKCiAgICAqIDY3MDE3OTcgd2FudCB1c2VyIHByb3BlcnRpZXMgb24gc25hcHNo b3RzCgo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CgpaRlMgUG9vbCBW ZXJzaW9uIDExCgpUaGlzIHZlcnNpb24gaW5jbHVkZXMgc3VwcG9ydCBmb3IgdGhlIGZvbGxvd2lu ZyBmZWF0dXJlOgoKICAgICogSW1wcm92ZWQgenBvb2wgc2NydWIgLyByZXNpbHZlciBwZXJmb3Jt YW5jZQoKVGhpcyBmZWF0dXJlIGlzIGF2YWlsYWJsZSBpbjoKCiAgICAqIFNvbGFyaXMgRXhwcmVz cyBDb21tdW5pdHkgRWRpdGlvbiwgYnVpbGQgOTQKClRoZSByZWxhdGVkIGJ1ZyBmb3IgdGhlIHZl cnNpb24gMTEgY2hhbmdlIGlzOgoKICAgICogNjM0MzY2NyBzY3J1Yi9yZXNpbHZlciBoYXMgdG8g c3RhcnQgb3ZlciB3aGVuIGEgc25hcHNob3QgaXMKICAgIHRha2VuCiAgICAqIChOb3RlLCB0aGlz IGJ1ZyBpcyBmaXhlZCB3aGVuIHVzaW5nIGJ1aWxkIDk0IGV2ZW4gd2l0aCBvbGRlcgogICAgcG9v bCB2ZXJzaW9ucy4gSG93ZXZlciwgdXBncmFkaW5nIHRoZSBwb29sIGNhbiBpbXByb3ZlIHNjcnVi CiAgICBwZXJmb3JtYW5jZSB3aGVuIHRoZXJlIGFyZSBtYW55IGZpbGVzeXN0ZW1zLCBzbmFwc2hv dHMsIGFuZAogICAgY2xvbmVzLikKCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT0KClpGUyBQb29sIFZlcnNpb24gMTAKClRoaXMgdmVyc2lvbiBpbmNsdWRlcyBzdXBwb3J0 IGZvciB0aGUgZm9sbG93aW5nIGZlYXR1cmU6CgogICAgKiBEZXZpY2VzIGNhbiBiZSBhZGRlZCB0 
byBhIHN0b3JhZ2UgcG9vbCBhcyAiY2FjaGUgZGV2aWNlcy4iCiAgICBUaGVzZSBkZXZpY2VzIHBy b3ZpZGUgYW4gYWRkaXRpb25hbCBsYXllciBvZiBjYWNoaW5nIGJldHdlZW4KICAgIG1haW4gbWVt b3J5IGFuZCBkaXNrLiBVc2luZyBjYWNoZSBkZXZpY2VzIHByb3ZpZGVzIHRoZSBncmVhdGVzdAog ICAgcGVyZm9ybWFuY2UgaW1wcm92ZW1lbnQgZm9yIHJhbmRvbSByZWFkLXdvcmtsb2FkcyBvZiBt b3N0bHkKICAgIHN0YXRpYyBjb250ZW50LgoKVGhpcyBmZWF0dXJlIGlzIGF2YWlsYWJsZSBpbiB0 aGUgU29sYXJpcyBFeHByZXNzIENvbW11bml0eSBFZGl0aW9uLApidWlsZCA3OC4KClRoZSBTb2xh cmlzIDEwIDEwLzA4IHJlbGVhc2UgaW5jbHVkZXMgWkZTIHBvb2wgdmVyc2lvbiAxMCwgYnV0CnN1 cHBvcnQgZm9yIGNhY2hlIGRldmljZXMgaXMgbm90IGluY2x1ZGVkIGluIHRoaXMgU29sYXJpcyBy ZWxlYXNlLgoKVGhlIHJlbGF0ZWQgYnVnIGZvciB0aGUgdmVyc2lvbiAxMCBjaGFuZ2UgaXM6Cgog ICAgKiA2NTM2MDU0IHNlY29uZCB0aWVyICgiZXh0ZXJuYWwiKSBBUkMKCj09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT0KClpGUyBQb29sIFZlcnNpb24gOQoKVGhpcyB2ZXJz aW9uIGluY2x1ZGVzIHN1cHBvcnQgZm9yIHRoZSBmb2xsb3dpbmcgZmVhdHVyZXM6CgogICAgKiBJ biBhZGRpdGlvbiB0byB0aGUgZXhpc3RpbmcgWkZTIHF1b3RhIGFuZCByZXNlcnZhdGlvbiBmZWF0 dXJlcywKICAgIHRoaXMgcmVsZWFzZSBpbmNsdWRlcyBkYXRhc2V0IHF1b3RhcyBhbmQgcmVzZXJ2 YXRpb25zIHRoYXQgZG8KICAgIG5vdCBpbmNsdWRlIGRlc2NlbmRlbnQgZGF0YXNldHMsIHN1Y2gg YXMgc25hcHNob3RzIGFuZCBjbG9uZXMsCiAgICBpbiB0aGUgc3BhY2UgY29uc3VtcHRpb24uICgi emZzIHNldCByZWZxdW90YSIgYW5kICJ6ZnMgc2V0CiAgICByZWZyZXNlcnZhdGlvbiIuKQoKICAg ICogQSByZXNlcnZhdGlvbiBpcyBhdXRvbWF0aWNhbGx5IHNldCB3aGVuIGEgbm9uLXNwYXJzZSBa RlMKICAgIHZvbHVtZSBpcyBjcmVhdGVkIHRoYXQgbWF0Y2hlcyB0aGUgc2l6ZSBvZiB0aGUgdm9s dW1lLiBUaGlzCiAgICByZWxlYXNlIHByb3ZpZGVzIGFuIGltbWVkaWF0ZSByZXNlcnZhdGlvbiBm ZWF0dXJlIHNvIHRoYXQgeW91CiAgICBzZXQgYSByZXNlcnZhdGlvbiBvbiBhIG5vbi1zcGFyc2Ug dm9sdW1lIHdpdGggZW5vdWdoIHNwYWNlIHRvCiAgICB0YWtlIHNuYXBzaG90cyBhbmQgbW9kaWZ5 IHRoZSBjb250ZW50cyBvZiB0aGUgdm9sdW1lLgoKICAgICogQ0lGUyBzZXJ2ZXIgc3VwcG9ydAoK VGhlc2UgZmVhdHVyZXMgYXJlIGF2YWlsYWJsZSBpbiBTb2xhcmlzIEV4cHJlc3MgQ29tbXVuaXR5 IEVkaXRpb24sCmJ1aWxkIDc3LgoKVGhlIHJlbGF0ZWQgYnVncyBmb3IgdmVyc2lvbiA5IGNoYW5n 
ZXMgYXJlOgoKICAgICogNjQzMTI3NyB3YW50IGZpbGVzeXN0ZW0tb25seSBxdW90YXMKICAgICog NjQ4MzY3NyBuZWVkIGltbWVkaWF0ZSByZXNlcnZhdGlvbgogICAgKiA2NjE3MTgzIENJRlMgU2Vy dmljZSAgUFNBUkMgMjAwNi83MTUKCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT0KClpGUyBQb29sIFZlcnNpb24gOAoKVGhpcyB2ZXJzaW9uIG5vdyBzdXBwb3J0cyB0aGUg YWJpbGl0eSB0byBkZWxlZ2F0ZSB6ZnMoMU0pIGFkbWluaXN0cmF0aXZlCnRhc2tzIHRvIG9yZGlu YXJ5IHVzZXJzLgoKVGhpcyBmZWF0dXJlIGlzIGF2YWlsYWJsZSBpbjoKCiAgICAqIFNvbGFyaXMg RXhwcmVzcyBDb21tdW5pdHkgRWRpdGlvbiwgYnVpbGQgNjkKICAgICogU29sYXJpcyAxMCAxMC8w OCByZWxlYXNlCgpUaGUgcmVsYXRlZCBidWcgZm9yIHRoZSB2ZXJzaW9uIDggY2hhbmdlIGlzOgoK ICAgICogNjM0OTQ3MCBpbnZlc3RpZ2F0ZSBub24tcm9vdCByZXN0b3JlL2JhY2t1cAoKPT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQoKWkZTIFBvb2wgVmVyc2lvbiA3CgpU aGlzIHZlcnNpb24gaW5jbHVkZXMgc3VwcG9ydCBmb3IgdGhlIGZvbGxvd2luZyBmZWF0dXJlOgoK VGhlIFpGUyBJbnRlbnQgTG9nIChaSUwpIHNhdGlzZmllcyB0aGUgbmVlZCBvZiBzb21lIGFwcGxp Y2F0aW9ucwp0byBrbm93IHRoZSBkYXRhIHRoZXkgY2hhbmdlZCBpcyBvbiBzdGFibGUgc3RvcmFn ZSBvbiByZXR1cm4gZnJvbQphIHN5c3RlbSBjYWxsLiBUaGUgSW50ZW50IExvZyBob2xkcyByZWNv cmRzIG9mIHRob3NlIHN5c3RlbSBjYWxscwphbmQgdGhleSBhcmUgcmVwbGF5ZWQgaWYgdGhlIHN5 c3RlbSBwb3dlciBmYWlscyBvciBwYW5pY3MgaWYgdGhleQpoYXZlIG5vdCBiZWVuIGNvbW1pdHRl ZCB0byB0aGUgbWFpbiBwb29sLiBXaGVuIHRoZSBJbnRlbnQgTG9nIGlzCmFsbG9jYXRlZCBmcm9t IHRoZSBtYWluIHBvb2wsIGl0IGFsbG9jYXRlcyBibG9ja3MgdGhhdCBjaGFpbiB0aHJvdWdoCnRo ZSBwb29sLiBUaGlzIHZlcnNpb24gYWRkcyB0aGUgY2FwYWJpbGl0eSB0byBzcGVjaWZ5IGEgc2Vw YXJhdGUKSW50ZW50IExvZyBkZXZpY2Ugb3IgZGV2aWNlcy4KClRoaXMgZmVhdHVyZSBpcyBhdmFp bGFibGUgaW46CgogICAgKiBTb2xhcmlzIEV4cHJlc3MgQ29tbXVuaXR5IEVkaXRpb24sIGJ1aWxk IDY4CiAgICAqIFNvbGFyaXMgMTAgMTAvMDggcmVsZWFzZQoKVGhlIHJlbGF0ZWQgYnVnIGZvciB0 aGUgdmVyc2lvbiA3IGNoYW5nZSBpczoKCiAgICAqIDYzMzk2NDAgTWFrZSBaSUwgdXNlIE5WUkFN IHdoZW4gYXZhaWxhYmxlLgoKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PQoKWkZTIFBvb2wgVmVyc2lvbiA2CgpUaGlzIHZlcnNpb24gaW5jbHVkZXMgc3VwcG9ydCBmb3Ig 
dGhlIGZvbGxvd2luZyBmZWF0dXJlOgoKICAgICogJ2Jvb3RmcycgcG9vbCBwcm9wZXJ0eQoKVGhp cyBmZWF0dXJlIGlzIGF2YWlsYWJsZSBpbjoKCiAgICAqIFNvbGFyaXMgRXhwcmVzcyBDb21tdW5p dHkgRWRpdGlvbiwgYnVpbGQgNjIKICAgICogU29sYXJpcyAxMCAxMC8wOCByZWxlYXNlCgpUaGUg cmVsYXRlZCBidWdzIGZvciB2ZXJzaW9uIDYgY2hhbmdlcyBhcmUgYXMgZm9sbG93czoKCiAgICAq IDQ5Mjk4OTAgWkZTIEJvb3Qgc3VwcG9ydCBmb3IgdGhlIHg4NiBwbGF0Zm9ybQogICAgKiA2NDc5 ODA3IHBvb2xzIG5lZWQgcHJvcGVydGllcwoKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PQoKWkZTIFBvb2wgVmVyc2lvbiA1CgpUaGlzIHZlcnNpb24gaW5jbHVkZXMgc3Vw cG9ydCBmb3IgdGhlIGZvbGxvd2luZyBmZWF0dXJlOgoKICAgICogZ3ppcCBjb21wcmVzc2lvbiBm b3IgWkZTIGRhdGFzZXRzCgpUaGlzIGZlYXR1cmUgaXMgYXZhaWxhYmxlIGluOgoKICAgICogU29s YXJpcyBFeHByZXNzIENvbW11bml0eSBFZGl0aW9uLCBidWlsZCA2MgogICAgKiBTb2xhcmlzIDEw IDEwLzA4IHJlbGVhc2UKClRoZSByZWxhdGVkIGJ1ZyBmb3IgdGhlIHZlcnNpb24gNSBjaGFuZ2Vz IGlzOgoKICAgICogNjUzNjYwNiBnemlwIGNvbXByZXNzaW9uIGZvciBaRlMKCj09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KClpGUyBQb29sIFZlcnNpb24gNAoKVGhpcyB2 ZXJzaW9uIGluY2x1ZGVzIHN1cHBvcnQgZm9yIHRoZSBmb2xsb3dpbmcgZmVhdHVyZToKCiAgICAq IHpwb29sIGhpc3RvcnkKClRoaXMgZmVhdHVyZSBpcyBhdmFpbGFibGUgaW46CgogICAgKiBTb2xh cmlzIEV4cHJlc3MgQ29tbXVuaXR5IEVkaXRpb24sIGJ1aWxkIDYyCiAgICAqIFNvbGFyaXMgMTAg OC8wNyByZWxlYXNlCgpUaGUgcmVsYXRlZCBidWdzIGZvciB2ZXJzaW9uIDQgY2hhbmdlcyBhcmUg YXMgZm9sbG93czoKCiAgICAqIDY1Mjk0MDYgenBvb2wgaGlzdG9yeSBuZWVkcyB0byBidW1wIHRo ZSBvbi1kaXNrIHZlcnNpb24KICAgICogNjM0Mzc0MSB3YW50IHRvIHN0b3JlIGEgY29tbWFuZCBo aXN0b3J5IG9uIGRpc2sKCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0K ClpGUyBQb29sIFZlcnNpb24gMwoKVGhpcyB2ZXJzaW9uIGluY2x1ZGVzIHN1cHBvcnQgZm9yIHRo ZSBmb2xsb3dpbmcgZmVhdHVyZXM6CgogICAgKiBIb3Qgc3BhcmVzCiAgICAqIERvdWJsZS1wYXJp dHkgUkFJRC1aIChyYWlkejIpCiAgICAqIEltcHJvdmVkIFJBSUQtWiBhY2NvdW50aW5nCgpUaGVz ZSBmZWF0dXJlcyBhcmUgYXZhaWxhYmxlIGluOgoKICAgICogU29sYXJpcyBFeHByZXNzIENvbW11 bml0eSBFZGl0aW9uLCBidWlsZCA0MgogICAgKiBTb2xhcmlzIDEwIDExLzA2IHJlbGVhc2UsIChi 
dWlsZCAzKQoKVGhlIHJlbGF0ZWQgYnVncyBmb3IgdmVyc2lvbiAzIGNoYW5nZXMgYXJlIGFzIGZv bGxvd3M6CgogICAgKiA2NDA1OTY2IEhvdCBTcGFyZSBzdXBwb3J0IGluIFpGUwogICAgKiA2NDE3 OTc4IGRvdWJsZSBwYXJpdHkgUkFJRC1aIGEuay5hLiBSQUlENgogICAgKiA2Mjg4NDg4IGR1IHJl cG9ydHMgbWlzbGVhZGluZyBzaXplIG9uIFJBSUQtWgoKPT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PQoKWkZTIFBvb2wgVmVyc2lvbiAyCgpUaGlzIHZlcnNpb24gaW5jbHVk ZXMgc3VwcG9ydCBmb3IgIkRpdHRvIEJsb2NrcyIsIG9yIHJlcGxpY2F0ZWQKbWV0YWRhdGEuIER1 ZSB0byB0aGUgdHJlZS1saWtlIHN0cnVjdHVyZSBvZiB0aGUgWkZTIG9uLWRpc2sgZm9ybWF0LAph biB1bmNvcnJlY3RhYmxlIGVycm9yIGluIGEgbGVhZiBibG9jayBtYXkgYmUgcmVsYXRpdmVseSBi ZW5pZ24sCndoaWxlIGFuIHVuY29ycmVjdGFibGUgZXJyb3IgaW4gcG9vbCBtZXRhZGF0YSBjYW4g cmVzdWx0IGluIGFuCnVub3BlbmFibGUgcG9vbC4gVGhpcyBmZWF0dXJlIGludHJvZHVjZXMgYXV0 b21hdGljIHJlcGxpY2F0aW9uIG9mCm1ldGFkYXRhICh1cCB0byAzIGNvcGllcyBvZiBlYWNoIGJs b2NrKSBpbmRlcGVuZGVudCBvZiBhbnkgdW5kZXJseWluZwpwb29sLXdpZGUgcmVkdW5kYW5jeS4g Rm9yIGV4YW1wbGUsIG9uIGEgcG9vbCB3aXRoIGEgc2luZ2xlIG1pcnJvciwKdGhlIG1vc3QgY3Jp dGljYWwgbWV0YWRhdGEgd2lsbCBhcHBlYXIgdGhyZWUgdGltZXMgb24gZWFjaCBzaWRlIG9mCnRo ZSBtaXJyb3IsIGZvciBhIHRvdGFsIG9mIHNpeCBjb3BpZXMuIFRoaXMgZW5zdXJlcyB0aGF0IHdo aWxlIHVzZXIKZGF0YSBtYXkgYmUgbG9zdCBkdWUgdG8gY29ycnVwdGlvbiwgYWxsIGRhdGEgaW4g dGhlIHBvb2wgd2lsbCBiZQpkaXNjb3ZlcmFibGUgYW5kIHRoZSBwb29sIHdpbGwgc3RpbGwgYmUg dXNhYmxlLiBUaGlzIHdpbGwgYmUgZXhwYW5kZWQKaW4gdGhlIGZ1dHVyZSB0byBhbGxvdyB1c2Vy IGRhdGEgcmVwbGljYXRpb24gb24gYSBwZXItZGF0YXNldCBiYXNpcy4KClRoaXMgZmVhdHVyZSB3 YXMgaW50ZWdyYXRlZCBvbiA0LzEwLzA2IHdpdGggdGhlIGZvbGxvd2luZyBidWcgZml4OgoKNjQx MDY5OCBaRlMgbWV0YWRhdGEgbmVlZHMgdG8gYmUgbW9yZSBoaWdobHkgcmVwbGljYXRlZCAoZGl0 dG8gYmxvY2tzKQoKVGhpcyBmZWF0dXJlIGlzIGF2YWlsYWJsZSBpbjoKCiAgICAqIFNvbGFyaXMg RXhwcmVzcyBDb21tdW5pdHkgRWRpdGlvbiwgYnVpbGQgMzgKICAgICogU29sYXJpcyAxMCAxMC8w NiByZWxlYXNlIChidWlsZCAwOSkKCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT0KClpGUyBQb29sIFZlcnNpb24gMQoKVGhpcyBpcyB0aGUgaW5pdGlhbCBaRlMgb24tZGlz 
ayBmb3JtYXQgYXMgaW50ZWdyYXRlZCBvbiAxMC8zMS8wNS4KRHVyaW5nIHRoZSBuZXh0IHNpeCBt b250aHMgb2YgaW50ZXJuYWwgdXNlLCB0aGVyZSB3ZXJlIGEgZmV3IG9uLWRpc2sKZm9ybWF0IGNo YW5nZXMgdGhhdCBkaWQgbm90IHJlc3VsdCBpbiBhIHZlcnNpb24gbnVtYmVyIGNoYW5nZSwgYnV0 CnJlc3VsdGVkIGluIGEgZmxhZyBkYXkgc2luY2UgZWFybGllciB2ZXJzaW9ucyBjb3VsZCBub3Qg cmVhZCB0aGUKbmV3ZXIgY2hhbmdlcy4gVGhlIGZpcnN0IG9mZmljaWFsIHJlbGVhc2VzIHN1cHBv cnRpbmcgdGhpcyB2ZXJzaW9uCmFyZToKCiAgICAqIFNvbGFyaXMgRXhwcmVzcyBDb21tdW5pdHkg RWRpdGlvbiwgYnVpbGQgMzYKICAgICogU29sYXJpcyAxMCA2LzA2IHJlbGVhc2UKCkVhcmxpZXIg cmVsZWFzZXMgbWF5IG5vdCBzdXBwb3J0IHRoaXMgdmVyc2lvbiwgZGVzcGl0ZSBiZWluZyBmb3Jt YXR0ZWQKd2l0aCB0aGUgc2FtZSBvbi1kaXNrIG51bWJlci4gVGhpcyBpcyBkdWUgdG86Cgo2Mzg5 MzY4IGZhdCB6YXAgc2hvdWxkIHVzZSAxNmsgYmxvY2tzICh3aXRoIGJhY2t3YXJkcyBjb21wYXRh YmlsaXR5KQo2MzkwNjc3IHZlcnNpb24gbnVtYmVyIGNoZWNraW5nIG1ha2VzIHVwZ3JhZGVzIGNo YWxsZW5naW5nCgo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Cgo= --0016e64c2a484d0bb304655e769f-- From owner-freebsd-fs@FreeBSD.ORG Wed Mar 18 09:25:08 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DF037106567C for ; Wed, 18 Mar 2009 09:25:08 +0000 (UTC) (envelope-from asmrookie@gmail.com) Received: from mail-bw0-f164.google.com (mail-bw0-f164.google.com [209.85.218.164]) by mx1.freebsd.org (Postfix) with ESMTP id 377C98FC1E for ; Wed, 18 Mar 2009 09:25:07 +0000 (UTC) (envelope-from asmrookie@gmail.com) Received: by bwz8 with SMTP id 8so409168bwz.43 for ; Wed, 18 Mar 2009 02:25:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type:content-transfer-encoding; bh=scjrV3ZI3x22zBwdKrG1BFbgeR8uQQTLLcgRvDaF3vo=; b=vnNsGWFa/GYrhDhKBsJfCWQSI0WBbuwpkjB/HwUnJ9mDlxQsplv6gFI1je/sy/zncL 
Cq6DBGuTsASFdR2JHLcGxiakWxpBLopvP0mNzrNuCUkKFn4ntxVgbI5JUnKfK7u5mAHa 4C4xntukrlwV2CmbOhQ1v3tFa2o55TtGudsLk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=uwBHd2Bcg1W3D/kMgvNVlvP2kK/WmRDf8emGeYzgqcTVcO6DcaKlVvl3K6zCPq6CWE BbBlOEscsIk+3TTkdcI4hFHANpV2GmvTXYIvppOqq0nPKn3NQ0KT4l5h9+A07NU2Yjyn rUqAq1GV4pKbuTGVk1AKZcQtgwOqjTsjMRDM8= MIME-Version: 1.0 Sender: asmrookie@gmail.com Received: by 10.223.104.74 with SMTP id n10mr819446fao.5.1237366754969; Wed, 18 Mar 2009 01:59:14 -0700 (PDT) In-Reply-To: <20090314203215.GA41617@deviant.kiev.zoral.com.ua> References: <200903140450.n2E4o3to011990@freefall.freebsd.org> <20090314102135.GA93077@x2.osted.lan> <20090314203215.GA41617@deviant.kiev.zoral.com.ua> Date: Wed, 18 Mar 2009 09:59:14 +0100 X-Google-Sender-Auth: ce1aa07741d1ad1a Message-ID: <3bbf2fe10903180159x10d2c721rf9ff4147a5c75ec7@mail.gmail.com> From: Attilio Rao To: Kostik Belousov Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: Yoshihiro Ota , Peter Holm , freebsd-fs@freebsd.org Subject: Re: kern/132597: [tmpfs] [panic] tmpfs-related panic while interrupting a port build on tmpfs WRKDIR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Mar 2009 09:25:09 -0000 2009/3/14, Kostik Belousov : > On Sat, Mar 14, 2009 at 11:21:35AM +0100, Peter Holm wrote: > > On Sat, Mar 14, 2009 at 04:50:03AM +0000, Yoshihiro Ota wrote: > > > The following reply was made to PR kern/132597; it has been noted by GNATS. 
> > >
> > > From: Yoshihiro Ota
> > > To: bug-followup@FreeBSD.org
> > > Cc: bf2006a@yahoo.com
> > > Subject: Re: kern/132597: [tmpfs] [panic] tmpfs-related panic while
> > > interrupting a port build on tmpfs WRKDIR
> > > Date: Sat, 14 Mar 2009 00:42:58 -0400
> > >
> > > Which ports were you compiling when panic happened?
> > >
> > > Hiro
> >
> > The panic in this PR looks a lot like the one I reported to attilio@
> >
> > http://people.freebsd.org/~pho/stress/log/attilio022.txt
> >
> > It was just regular FS load that provoked it.
>
> It seems to be quite clear what is going on there. In fact, there are
> two issues:
>
> First is the usual problem of the DOTDOT lookup, which shall be fixed in
> the style of vn_vget_ino() by busying mp before unlocking dvp.
>
> Second one is the reason for the panic. The tmpfs vnode is unlocked, and
> then the corresponding tmpfs _node_ is passed to tmpfs_alloc_vp().
> Since the vnode may be reclaimed after the unlock, the passed node might
> become freed. Then, tmpfs_alloc_vp() would operate on the freed
> memory.

So I have a question. In tmpfs_lookup(), dvp gets vhold() before the dvp
vnode lock is unlocked. Shouldn't that be enough to prevent recycling and
freeing of the structure?

Thanks,
Attilio

--
Peace can only be achieved by understanding - A.
Einstein From owner-freebsd-fs@FreeBSD.ORG Wed Mar 18 13:10:03 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 08717106564A; Wed, 18 Mar 2009 13:10:03 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.terabit.net.ua (mail.terabit.net.ua [195.137.202.147]) by mx1.freebsd.org (Postfix) with ESMTP id 98F368FC0C; Wed, 18 Mar 2009 13:10:02 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from skuns.zoral.com.ua ([91.193.166.194] helo=mail.zoral.com.ua) by mail.terabit.net.ua with esmtps (TLSv1:AES256-SHA:256) (Exim 4.63 (FreeBSD)) (envelope-from ) id 1LjvWx-000GV8-TI; Wed, 18 Mar 2009 15:10:00 +0200 Received: from deviant.kiev.zoral.com.ua (root@deviant.kiev.zoral.com.ua [10.1.1.148]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id n2ID9k2b095637 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 18 Mar 2009 15:09:47 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.3/8.14.3) with ESMTP id n2ID9kra005541; Wed, 18 Mar 2009 15:09:46 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.3/8.14.3/Submit) id n2ID9k6e005540; Wed, 18 Mar 2009 15:09:46 +0200 (EET) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Wed, 18 Mar 2009 15:09:46 +0200 From: Kostik Belousov To: Attilio Rao Message-ID: <20090318130946.GD41617@deviant.kiev.zoral.com.ua> References: <200903140450.n2E4o3to011990@freefall.freebsd.org> <20090314102135.GA93077@x2.osted.lan> <20090314203215.GA41617@deviant.kiev.zoral.com.ua> <3bbf2fe10903180159x10d2c721rf9ff4147a5c75ec7@mail.gmail.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; 
protocol="application/pgp-signature"; boundary="CqcVIibhaBEAt2VI" Content-Disposition: inline In-Reply-To: <3bbf2fe10903180159x10d2c721rf9ff4147a5c75ec7@mail.gmail.com> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: ClamAV version 0.94.2, clamav-milter version 0.94.2 on skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-4.4 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00 autolearn=ham version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua X-Virus-Scanned: mail.terabit.net.ua 1LjvWx-000GV8-TI 501d19269db1232c3dacc32d39e2d138 X-Terabit: YES Cc: Yoshihiro Ota , Peter Holm , freebsd-fs@freebsd.org Subject: Re: kern/132597: [tmpfs] [panic] tmpfs-related panic while interrupting a port build on tmpfs WRKDIR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Mar 2009 13:10:03 -0000

--CqcVIibhaBEAt2VI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, Mar 18, 2009 at 09:59:14AM +0100, Attilio Rao wrote:
> 2009/3/14, Kostik Belousov :
> > On Sat, Mar 14, 2009 at 11:21:35AM +0100, Peter Holm wrote:
> > > On Sat, Mar 14, 2009 at 04:50:03AM +0000, Yoshihiro Ota wrote:
> > > > The following reply was made to PR kern/132597; it has been noted by GNATS.
> > > >
> > > > From: Yoshihiro Ota
> > > > To: bug-followup@FreeBSD.org
> > > > Cc: bf2006a@yahoo.com
> > > > Subject: Re: kern/132597: [tmpfs] [panic] tmpfs-related panic while
> > > > interrupting a port build on tmpfs WRKDIR
> > > > Date: Sat, 14 Mar 2009 00:42:58 -0400
> > > >
> > > > Which ports were you compiling when panic happened?
> > > >
> > > > Hiro
> > >
> > > The panic in this PR looks a lot like the one I reported to attilio@
> > >
> > > http://people.freebsd.org/~pho/stress/log/attilio022.txt
> > >
> > > It was just regular FS load that provoked it.
> >
> > It seems to be quite clear what is going on there. In fact, there are
> > two issues:
> >
> > First is the usual problem of DOTDOT lookup that shall be fixed in style
> > of vn_vget_ino() by busying mp before unlocking dvp.
> >
> > Second one is the reason for the panic. The tmpfs vnode is unlocked, and
> > then corresponding tmpfs _node_ is passed to the tmpfs_alloc_vp().
> > Since the vnode may be reclaimed after the unlock, passed node might
> > become freed. Then, the tmpfs_alloc_vp() would operate on the freed
> > memory.
>
> So I have a question.
> In tmpfs_lookup(), dvp gets vhold() before the dvp vnode lock is
> unlocked. Shouldn't that be enough to prevent recycling and freeing
> of the structure?

No. The only thing that prevents vnode reclaim is the vnode lock. Both
vhold and vref only prevent the struct vnode * pointer from becoming
invalid, i.e. they prevent freeing of the vnode memory, and also keep the
vnode interlock and lock functional. The difference between vhold and
vref is that vref() prevents non-forced unmounts from reclaiming such a
vnode.

--CqcVIibhaBEAt2VI
Content-Type: application/pgp-signature
Content-Disposition: inline

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (FreeBSD)

iEYEARECAAYFAknA8nsACgkQC3+MBN1Mb4i7sQCcCBDjBw8royQdW2SghABIGxtF
p5sAoIlw+wfCLbMBBleyv/TOAABGwrkL
=gLse
-----END PGP SIGNATURE-----

--CqcVIibhaBEAt2VI--