From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 11:51:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 36755106566C for ; Sun, 20 Dec 2009 11:51:10 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) by mx1.freebsd.org (Postfix) with ESMTP id 76E4F8FC12 for ; Sun, 20 Dec 2009 11:51:09 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id 17EF51CC520; Sun, 20 Dec 2009 12:51:07 +0100 (CET) X-CRM114-Version: 20090423-BlameSteveJobs ( TRE 0.7.6 (BSD) ) MF-ACE0E1EA [pR: 13.8549] X-CRM114-CacheID: sfid-20091220_12510_A3982488 X-CRM114-Status: Good ( pR: 13.8549 ) Message-ID: <4B2E0FA9.1050003@fsn.hu> Date: Sun, 20 Dec 2009 12:51:05 +0100 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: mjacob@freebsd.org References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> In-Reply-To: <4AEB6D79.5070703@feral.com> X-Stationery: 0.4.10 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.3 (people.fsn.hu); Sun, 20 Dec 2009 12:51:06 +0100 (CET) Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 11:51:10 -0000 Matthew Jacob wrote: > Hussain Ali wrote: >> >> ZFS doesnt suffice for may use cases - so just wondering if this is in >> the works. >> >> > Which use cases can you name? Reliable data storage. :( Sadly, ZFS in FreeBSD is still very far from being stable. 
For example I have NFS servers running on ZFS, and they freeze about every week. It seems it's related to NFS. I can't even get to the debugger. After sending an NMI, the kernel writes "NMI ... going to debugger" eight times (those machines have 8 CPU cores) and nothing happens, I can only reset. Another machine just loses ZFS access (all processes stuck in IO) on i386 if I run rtorrent with unlimited bandwidth with some torrents, or some disk-intensive spam filtering. Access to UFS filesystems is still OK. Also, running UFS and ZFS seems to have problems in 8-STABLE, with UFS eating memory away from ZFS. From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 11:54:50 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 66D021065672 for ; Sun, 20 Dec 2009 11:54:50 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f211.google.com (mail-ew0-f211.google.com [209.85.219.211]) by mx1.freebsd.org (Postfix) with ESMTP id E6C298FC12 for ; Sun, 20 Dec 2009 11:54:49 +0000 (UTC) Received: by ewy3 with SMTP id 3so5185601ewy.13 for ; Sun, 20 Dec 2009 03:54:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=MPxZPjAbgQf+2jh+iDA2QYedS9oQ8DO2lRMvxmunbMA=; b=DDknTHA2Ho3z14jTEb7l8+A2eUsUfPxQTByJDBaqayd/pzErbgbaF1cot31oDhbbvq eHT24K7VZ5E/aFVZLF3CWHcyT168NHZB+M26s5pv8B8/WV3Mxct2hB3nfRkJt9V9g0Bo kjZAyQlUNl02HlDK27YADhLHuxI0V2CRQ/fWg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=PyBntIi/UpHnaFBhwpx2eTmEtEe/Y5yYsCHQuKaceVgs0ZMXG36z+ulGV4Nhcf08jT GCGHvBaaaOilhA6eElM0hfAgzNfzcx0dGOKbJIYtqK/nMn7uLT9T4ECJVOG0IZmeBGRW ZHbLK8gHn/ui732doab76Ps604F5CcF2eVYEQ= MIME-Version: 1.0 Received: by 10.216.88.21
with SMTP id z21mr2438319wee.60.1261310088484; Sun, 20 Dec 2009 03:54:48 -0800 (PST) In-Reply-To: <4B2E0FA9.1050003@fsn.hu> References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> Date: Sun, 20 Dec 2009 06:54:48 -0500 Message-ID: From: Thomas Burgess To: Attila Nagy Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 11:54:50 -0000 I think it depends on hardware and setup. I've noticed the "zfs problem" with SOME machines when it comes to rtorrent (the rtorrent process will be stuck "waiting for disk"), but on other machines it's fine. The machines I've had the most problems with are single-drive with less than 2 GB RAM. I've got rtorrent and zfs working fine on plenty of machines with 2-3 hard drives and 4-8 GB RAM. On Sun, Dec 20, 2009 at 6:51 AM, Attila Nagy wrote: > Matthew Jacob wrote: > >> Hussain Ali wrote: >> >>> >>> ZFS doesnt suffice for may use cases - so just wondering if this is in >>> the works. >>> >>> >>> >> Which use cases can you name? >> > Reliable data storage. :( > > Sadly, ZFS in FreeBSD is still very far from being stable. For example I > have NFS servers running on ZFS, and they freeze about every week. It seems > it's related to NFS. > I can't even get to the debugger. After sending an NMI, the kernel writes > "NMI ... going to debugger" eight times (those machines have 8 CPU cores) > and nothing happens, I can only reset. > > Another machine just looses ZFS access (all processes stuck in IO) on i386 > if I run rtorrent with unlimited bandwidth with some torrents, or some disk > intensive spam filtering. Access to UFS filesystems are still OK. 
> > Also, running UFS and ZFS seems to have problems in 8-STABLE with UFS > eating out memory from ZFS. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 12:09:18 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7E1D8106566C; Sun, 20 Dec 2009 12:09:18 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) by mx1.freebsd.org (Postfix) with ESMTP id 891238FC0A; Sun, 20 Dec 2009 12:09:17 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id EF6D51CC5E2; Sun, 20 Dec 2009 13:09:15 +0100 (CET) X-CRM114-Version: 20090423-BlameSteveJobs ( TRE 0.7.6 (BSD) ) MF-ACE0E1EA [pR: 23.0207] X-CRM114-CacheID: sfid-20091220_13091_7B0D6D98 X-CRM114-Status: Good ( pR: 23.0207 ) Message-ID: <4B2E13E9.9000108@fsn.hu> Date: Sun, 20 Dec 2009 13:09:13 +0100 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: Thomas Burgess References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> In-Reply-To: X-Stationery: 0.4.10 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.3 (people.fsn.hu); Sun, 20 Dec 2009 13:09:15 +0100 (CET) Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 12:09:18 -0000 For that problem, it can be 
true, the machine in question has only 1 GB RAM (i386), although 8 disks. The freeze is a different beast, I've got it on 32-64 GB RAM machines (with NFS), and on 8 GB machines serving stuff with ftp/http/rsync/etc (no NFS). I'm not sure that the NFS and the non-NFS cases are the same though. Thomas Burgess wrote: > > I think it depends on hardware and setup. I've noticed the "zfs > problem" with SOME machines when it comes to rtorrent (the rtorrent > process will be stuck "waiting for disk" but on other machines it's > fine. > > The machines i've had the most problem with are single drive less than > 2 gb ram. > > I've got rtorrent and zfs working fine on plenty of machines with 2-3 > hard drives and 4-8 gb ram. > > On Sun, Dec 20, 2009 at 6:51 AM, Attila Nagy > wrote: > > Matthew Jacob wrote: > > Hussain Ali wrote: > > > ZFS doesnt suffice for may use cases - so just wondering > if this is in > the works. > > > > Which use cases can you name? > > Reliable data storage. :( > > Sadly, ZFS in FreeBSD is still very far from being stable. For > example I have NFS servers running on ZFS, and they freeze about > every week. It seems it's related to NFS. > I can't even get to the debugger. After sending an NMI, the kernel > writes "NMI ... going to debugger" eight times (those machines > have 8 CPU cores) and nothing happens, I can only reset. > > Another machine just looses ZFS access (all processes stuck in IO) > on i386 if I run rtorrent with unlimited bandwidth with some > torrents, or some disk intensive spam filtering. Access to UFS > filesystems are still OK. > > Also, running UFS and ZFS seems to have problems in 8-STABLE with > UFS eating out memory from ZFS. 
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org > " > > From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 12:18:00 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 885201065692; Sun, 20 Dec 2009 12:18:00 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f211.google.com (mail-ew0-f211.google.com [209.85.219.211]) by mx1.freebsd.org (Postfix) with ESMTP id CD6518FC22; Sun, 20 Dec 2009 12:17:59 +0000 (UTC) Received: by ewy3 with SMTP id 3so5196249ewy.13 for ; Sun, 20 Dec 2009 04:17:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=IT8Fye7z/bpSD6IOjKscAGDXkzY8GaLtApdNxfzOM3U=; b=rjbaQDkmbRk8muAfFFz99YYzg10QhTKp2hrmAEW4qb5btWOwu6iFxXOk7RnToihYdB 8o0FR+C/XEnT5NBRRheQ6z/B8dE9KO3FtOmeDfeZrMs49RAFka/fGVrYvKVQKqfkOFtT w2XERqoLPYIvPyqWfWr5Lfd1mv+5AcQx5Oe1A= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=af9ZfnBMpQwfMO573+wHonRydlZ2dEhoz2jwmlRhQIejUveT8wZlGAtbtYvbiy9zHO 2gV8vDiLkEUIXo8bjuGGOaxiqGk0zy8JwLzbS+W4IZvxiL9vhfkRY0UQNQtcjARB/v27 6XecaAWlhtJWi3HkBm4UErFeq5vxv2cOKrKU8= MIME-Version: 1.0 Received: by 10.216.93.14 with SMTP id k14mr1993951wef.152.1261311478481; Sun, 20 Dec 2009 04:17:58 -0800 (PST) In-Reply-To: <4B2E13E9.9000108@fsn.hu> References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> <4B2E13E9.9000108@fsn.hu> Date: Sun, 20 Dec 2009 07:17:58 -0500 Message-ID: From: Thomas Burgess To: Attila Nagy Content-Type: text/plain; charset=ISO-8859-1 
X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 12:18:00 -0000 i remember reading that NFS needs tuning with ZFS even on solaris so you might want to look into that....i'm not an expert though. I CAN say this though. I have a machine with 12 drives and 8gb ram that i use for samba. FreeBSD 8.0 ZFS v13 It has only 10 or so clients but it has no problem maxing out 2 gigabit lines and it never freezes. It took some tuning for samba but it works great. On Sun, Dec 20, 2009 at 7:09 AM, Attila Nagy wrote: > For that problem, it can be true, the machine in speak has only 1 GB RAM > (i386), although 8 disks. > The freeze is a different beast, I've got it on 32-64 GB RAM machines (with > NFS), and on 8 GB machines serving stuff with ftp/http/rsync/etc (no NFS). > I'm not sure that the NFS and the non-NFS case is the same though. > > Thomas Burgess wrote: > >> >> I think it depends on hardware and setup. I've noticed the "zfs problem" >> with SOME machines when it comes to rtorrent (the rtorrent process will be >> stuck "waiting for disk" but on other machines it's fine. >> The machines i've had the most problem with are single drive less than 2 >> gb ram. >> >> I've got rtorrent and zfs working fine on plenty of machines with 2-3 hard >> drives and 4-8 gb ram. >> >> On Sun, Dec 20, 2009 at 6:51 AM, Attila Nagy > bra@fsn.hu>> wrote: >> >> Matthew Jacob wrote: >> >> Hussain Ali wrote: >> >> >> ZFS doesnt suffice for may use cases - so just wondering >> if this is in >> the works. >> >> >> Which use cases can you name? >> >> Reliable data storage. :( >> >> Sadly, ZFS in FreeBSD is still very far from being stable. For >> example I have NFS servers running on ZFS, and they freeze about >> every week. 
It seems it's related to NFS. >> I can't even get to the debugger. After sending an NMI, the kernel >> writes "NMI ... going to debugger" eight times (those machines >> have 8 CPU cores) and nothing happens, I can only reset. >> >> Another machine just looses ZFS access (all processes stuck in IO) >> on i386 if I run rtorrent with unlimited bandwidth with some >> torrents, or some disk intensive spam filtering. Access to UFS >> filesystems are still OK. >> >> Also, running UFS and ZFS seems to have problems in 8-STABLE with >> UFS eating out memory from ZFS. >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to >> "freebsd-fs-unsubscribe@freebsd.org >> " >> >> >> > From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 15:53:30 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id ADA68106566C for ; Sun, 20 Dec 2009 15:53:30 +0000 (UTC) (envelope-from hali@datapipe.com) Received: from EXFESMQ02.datapipe-corp.net (exfesmq02.datapipe-corp.net [64.106.130.125]) by mx1.freebsd.org (Postfix) with ESMTP id 70F718FC08 for ; Sun, 20 Dec 2009 15:53:30 +0000 (UTC) Received: from EXMBSMQ01.datapipe-corp.net ([64.106.130.123]) by EXFESMQ02.datapipe-corp.net ([64.106.130.125]) with mapi; Sun, 20 Dec 2009 10:43:13 -0500 From: Hussain Ali To: Thomas Burgess , Attila Nagy Date: Sun, 20 Dec 2009 10:43:13 -0500 Thread-Topic: Plans for Logged/Journaled UFS Thread-Index: AcqBbo7DO9fwv41ZSJWRIxzkGq3YzgAGdj/H Message-ID: References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> <4B2E13E9.9000108@fsn.hu>, In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 
quoted-printable MIME-Version: 1.0 Cc: "freebsd-fs@freebsd.org" Subject: RE: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 15:53:30 -0000 _______________________________________ From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Thomas Burgess [wonslung@gmail.com] Sent: Sunday, December 20, 2009 7:17 AM To: Attila Nagy Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS i remember reading that NFS needs tuning with ZFS even on solaris so you might want to look into that....i'm not an expert though. I CAN say this though. I have a machine with 12 drives and 8gb ram that i use for samba. FreeBSD 8.0 ZFS v13 It has only 10 or so clients but it has no problem maxing out 2 gigabit lines and it never freezes. It took some tuning for samba but it works great. On Sun, Dec 20, 2009 at 7:09 AM, Attila Nagy wrote: > For that problem, it can be true, the machine in speak has only 1 GB RAM > (i386), although 8 disks. > The freeze is a different beast, I've got it on 32-64 GB RAM machines (with > NFS), and on 8 GB machines serving stuff with ftp/http/rsync/etc (no NFS). > I'm not sure that the NFS and the non-NFS case is the same though. > > Thomas Burgess wrote: > >> >> I think it depends on hardware and setup. I've noticed the "zfs problem" >> with SOME machines when it comes to rtorrent (the rtorrent process will be >> stuck "waiting for disk" but on other machines it's fine. >> The machines i've had the most problem with are single drive less than 2 >> gb ram. >> >> I've got rtorrent and zfs working fine on plenty of machines with 2-3 hard >> drives and 4-8 gb ram. 
>> >> On Sun, Dec 20, 2009 at 6:51 AM, Attila Nagy > bra@fsn.hu>> wrote: >> >> Matthew Jacob wrote: >> >> Hussain Ali wrote: >> >> >> ZFS doesnt suffice for may use cases - so just wondering >> if this is in >> the works. >> >> >> Which use cases can you name? >> >> Reliable data storage. :( >> >> Sadly, ZFS in FreeBSD is still very far from being stable. For >> example I have NFS servers running on ZFS, and they freeze about >> every week. It seems it's related to NFS. >> I can't even get to the debugger. After sending an NMI, the kernel >> writes "NMI ... going to debugger" eight times (those machines >> have 8 CPU cores) and nothing happens, I can only reset. >> >> Another machine just looses ZFS access (all processes stuck in IO) >> on i386 if I run rtorrent with unlimited bandwidth with some >> torrents, or some disk intensive spam filtering. Access to UFS >> filesystems are still OK. >> >> Also, running UFS and ZFS seems to have problems in 8-STABLE with >> UFS eating out memory from ZFS. >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to >> "freebsd-fs-unsubscribe@freebsd.org >> " >> >> >> > ZFS is great for data warehousing and non-high-performance/high-concurrency operations (seen this on Solaris and FreeBSD). I have about 350TB in FreeBSD's ZFS and it's been running very stably, but this is a data warehouse and I am not running off-the-shelf components either (HP DL 385's, Nexsan Satabeasts). Ever use ZFS as a replication target for a large and active MySQL DB? You would notice days' worth of lag within a week (on the same gear as the primary!). My cases may be atypical of the average FreeBSD user's usage, but for any large hosting environment I would assume it's similar. Anyway, seems like we may see a journaled UFS in FreeBSD sometime soon. I hope the changes are minimal so it's MFC'ed back into [78]-STABLE. 
http://jeffr-tech.livejournal.com/22716.html This message may contain confidential or privileged information. If you are not the intended recipient, please advise us immediately and delete this message. See http://www.datapipe.com/emaildisclaimer.aspx for further information on confidentiality and the risks of non-secure electronic communication. If you cannot access these links, please notify us by reply message and we will send the contents to you. From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 16:17:08 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6134F106566C; Sun, 20 Dec 2009 16:17:08 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 393518FC0A; Sun, 20 Dec 2009 16:17:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nBKGH8tB071996; Sun, 20 Dec 2009 16:17:08 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nBKGH89l071992; Sun, 20 Dec 2009 16:17:08 GMT (envelope-from linimon) Date: Sun, 20 Dec 2009 16:17:08 GMT Message-Id: <200912201617.nBKGH89l071992@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141800: [zfs] [patch] zfs pool update to v14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 16:17:08 -0000 Old Synopsis: [PATCH] zfs pool update to v14 New Synopsis: [zfs] [patch] zfs pool update to v14 Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon 
Responsible-Changed-When: Sun Dec 20 16:16:51 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=141800 From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 17:59:37 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id ECD2A10656B7; Sun, 20 Dec 2009 17:59:37 +0000 (UTC) (envelope-from mj@feral.com) Received: from ns1.feral.com (ns1.feral.com [192.67.166.1]) by mx1.freebsd.org (Postfix) with ESMTP id C815A8FC1D; Sun, 20 Dec 2009 17:59:37 +0000 (UTC) Received: from [10.8.0.2] (remotevpn [10.8.0.2]) by ns1.feral.com (8.14.3/8.14.3) with ESMTP id nBKHxUsM021549 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sun, 20 Dec 2009 09:59:31 -0800 (PST) (envelope-from mj@feral.com) Message-ID: <4B2E65FC.9070609@feral.com> Date: Sun, 20 Dec 2009 09:59:24 -0800 From: Matthew Jacob Organization: Feral Software User-Agent: Thunderbird 2.0.0.23 (Windows/20090812) MIME-Version: 1.0 To: Attila Nagy References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> In-Reply-To: <4B2E0FA9.1050003@fsn.hu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender DNS name whitelisted, not delayed by milter-greylist-4.2.3 (ns1.feral.com [10.8.0.1]); Sun, 20 Dec 2009 09:59:34 -0800 (PST) Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 17:59:38 -0000 >>> >> Which use cases can you name? > Reliable data storage. :( Jeez, I wrote this months ago. Do you feel that improving UFS is a better way to go? I'm currently working at a place that still won't use UFS2. 
But this is just for /root, /var && /usr. Data storage for user data is another matter entirely, and unless you can hide UFS2/UFS3/... under a unified namespace, I don't think that this is the way to go. From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 18:48:48 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 15851106566B for ; Sun, 20 Dec 2009 18:48:48 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id C03808FC12 for ; Sun, 20 Dec 2009 18:48:47 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApoEADAALkuDaFvI/2dsb2JhbADSHYIxgX0EgWU X-IronPort-AV: E=Sophos;i="4.47,428,1257138000"; d="scan'208";a="59921961" Received: from darling.cs.uoguelph.ca ([131.104.91.200]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 20 Dec 2009 13:48:46 -0500 Received: from localhost (localhost.localdomain [127.0.0.1]) by darling.cs.uoguelph.ca (Postfix) with ESMTP id 139789428F3; Sun, 20 Dec 2009 13:48:46 -0500 (EST) X-Virus-Scanned: amavisd-new at darling.cs.uoguelph.ca Received: from darling.cs.uoguelph.ca ([127.0.0.1]) by localhost (darling.cs.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Ufjwt36IMpav; Sun, 20 Dec 2009 13:48:45 -0500 (EST) Received: from muncher.cs.uoguelph.ca (muncher.cs.uoguelph.ca [131.104.91.102]) by darling.cs.uoguelph.ca (Postfix) with ESMTP id 2C83E9428EF; Sun, 20 Dec 2009 13:48:45 -0500 (EST) Received: from localhost (rmacklem@localhost) by muncher.cs.uoguelph.ca (8.11.7p3+Sun/8.11.6) with ESMTP id nBKIvxv03245; Sun, 20 Dec 2009 13:57:59 -0500 (EST) X-Authentication-Warning: muncher.cs.uoguelph.ca: rmacklem owned process doing -bs Date: Sun, 20 Dec 2009 13:57:59 -0500 (EST) From: Rick Macklem X-X-Sender: rmacklem@muncher.cs.uoguelph.ca To: Bob Friesenhahn In-Reply-To: 
Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org Subject: Re: am-utils/NFS mount lockups in 8.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 18:48:48 -0000 On Sat, 19 Dec 2009, Bob Friesenhahn wrote: > After upgrading my FreeBSD system from FreeBSD 7.2 to 8.0, the am-utils > automounter is experiencing difficulty with managing the NFS client mounts to > my Solaris 10U8 system. There have never been any difficulties before. This > is when using the NFSv3/TCP in the default kernel and not the new NFSv4 > implementation. If it matters, this is for NFS exports from a ZFS pool, with > an exported filesystem per user. > > The problem I see is that the initial mount is instantaneous and works great. > After the mount times out, the re-mount produces several "NFS timeout" > messages to the widow where the accessing program is running. Sometimes this > remount succeeds, but if it fails, then the program is left locked up forever > waiting for the NFS mount. Meanwhile connectivity between the FreeBSD system > and the Solaris system is fine, as illustrated by excellent connectivity with > SSH and no lost packets via 'ping'. > You could try the patch at: http://people.freebsd.org/~rmacklem/patches/freebsd8-clntvc.patch (Fixes an issue w.r.t. client side TCP reconnects and didn't quite make it into 8.0.) 
I am trying to keep a list of FreeBSD8.0 NFS fixes at: http://people.freebsd.org/~rmacklem Good luck with it, rick From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 18:57:02 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D8CCF1065741 for ; Sun, 20 Dec 2009 18:57:02 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 8FD5D8FC20 for ; Sun, 20 Dec 2009 18:57:01 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.13.8+Sun/8.13.8) with ESMTP id nBKIv161006247; Sun, 20 Dec 2009 12:57:01 -0600 (CST) Date: Sun, 20 Dec 2009 12:57:01 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Rick Macklem In-Reply-To: Message-ID: References: User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Sun, 20 Dec 2009 12:57:01 -0600 (CST) Cc: freebsd-fs@freebsd.org Subject: Re: am-utils/NFS mount lockups in 8.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 18:57:02 -0000 On Sun, 20 Dec 2009, Rick Macklem wrote: > You could try the patch at: > http://people.freebsd.org/~rmacklem/patches/freebsd8-clntvc.patch > (Fixes an issue w.r.t. client side TCP reconnects and didn't quite make > it into 8.0.) > > I am trying to keep a list of FreeBSD8.0 NFS fixes at: > http://people.freebsd.org/~rmacklem Ok, thanks. TCP reconnect does seem like where the failure occurs. 
Do you know if these fixes will make it into a kernel updated via 'freebsd-update' or will it be necessary to patch and build a custom kernel? Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 19:03:57 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4160E1065676 for ; Sun, 20 Dec 2009 19:03:57 +0000 (UTC) (envelope-from giffunip@tutopia.com) Received: from web113517.mail.gq1.yahoo.com (web113517.mail.gq1.yahoo.com [98.136.167.57]) by mx1.freebsd.org (Postfix) with SMTP id 019548FC18 for ; Sun, 20 Dec 2009 19:03:56 +0000 (UTC) Received: (qmail 19310 invoked by uid 60001); 20 Dec 2009 18:37:15 -0000 Message-ID: <712903.15604.qm@web113517.mail.gq1.yahoo.com> X-YMail-OSG: a3HCBJcVM1m8EWS9ONMxChqTo7lcoC4YztN.VDOjDinzShDEtvmKn73TOq.YAs8tj4XygzRIudKqPp8AwtHz6DlagwjOSfrYqHbk1afZflznVoXTzRJIFEiPZcSSkA8Mc5OevKD5eVS_eDUbQ1X920hR5trY7ScG528bhFPj_5DYKgo_HDObls34QRmLP40VWrsvQPAoQEsyBXiRK.9_N3QJx978P.hHGlsx_t6fA0zLDKc6ifd4n5UpjKDK32YZo5nEGs7UBHkHcYghbdyPJTXViwoh4dPY8WHZ1ApODuQz_AoD00IDSAXNvYie_BlnB7loTzV13oFcZlPLwC971GKOig-- Received: from [190.157.123.47] by web113517.mail.gq1.yahoo.com via HTTP; Sun, 20 Dec 2009 10:37:15 PST X-RocketYMMF: giffunip X-Mailer: YahooMailRC/240.3 YahooMailWebService/0.8.100.260964 Date: Sun, 20 Dec 2009 10:37:15 -0800 (PST) From: "Pedro F. Giffuni" To: Hussain Ali MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 19:03:57 -0000 Just wondering... What's wrong with gjournal(8) ? cheers, Pedro. 
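[Editor's note: Pedro's pointer deserves spelling out. gjournal(8) provides block-level journaling underneath an ordinary UFS2 filesystem, which is the lighter-weight alternative the thread is circling. The sketch below shows the usual workflow of that era; /dev/ada0s1d is a purely hypothetical provider, and the commands are only printed (a dry run) since actually running them requires root and a spare disk.]

```shell
# Sketch of the gjournal(8) workflow, per gjournal(8)/newfs(8) of the
# FreeBSD 7.x/8.x era. Device name is a placeholder; nothing is executed.
gjournal_steps='
gjournal load                               # load the geom_journal class
gjournal label /dev/ada0s1d                 # attach journal metadata; creates /dev/ada0s1d.journal
newfs -J /dev/ada0s1d.journal               # UFS2 with the gjournal flag set
mount -o async /dev/ada0s1d.journal /data   # async is OK: the journal orders the writes
'
printf '%s' "$gjournal_steps"
```

The notable operational difference from soft updates is that mounting async is considered safe here, because write ordering is handled by the journal layer rather than the filesystem.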
From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 21:22:59 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1A9461065670; Sun, 20 Dec 2009 21:22:59 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) by mx1.freebsd.org (Postfix) with ESMTP id 6E9ED8FC12; Sun, 20 Dec 2009 21:22:58 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id 7A99D1CC85F; Sun, 20 Dec 2009 22:22:56 +0100 (CET) X-CRM114-Version: 20090423-BlameSteveJobs ( TRE 0.7.6 (BSD) ) MF-ACE0E1EA [pR: 13.6776] X-CRM114-CacheID: sfid-20091220_22225_C7AABC08 X-CRM114-Status: Good ( pR: 13.6776 ) Message-ID: <4B2E95AE.9040402@fsn.hu> Date: Sun, 20 Dec 2009 22:22:54 +0100 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: Matthew Jacob References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> <4B2E65FC.9070609@feral.com> In-Reply-To: <4B2E65FC.9070609@feral.com> X-Stationery: 0.4.10 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.3 (people.fsn.hu); Sun, 20 Dec 2009 22:22:55 +0100 (CET) Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 21:22:59 -0000 Matthew Jacob wrote: > >>>> >>> Which use cases can you name? >> Reliable data storage. :( > Jeez, I wrote this months ago. > > Do you feel that improving UFS is a better way to go? No, I think ZFS is the good way (although it has its problems as well). 
And I'm very grateful to the guys who worked on this. I've just summed up my experiences, which tell me ZFS is still not ready for prime time. Where UFS keeps running for years, ZFS suddenly crashes, or worse, just freezes, in a way that is hard to debug for the average user (a crashdump is easy, but when I can't even go to the debugger, that's hard). I hope that things will settle down and ZFS will be as reliable in FreeBSD as UFS is now (or even better; I've had some bad crashes with UFS thanks to on-disk data corruption). From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 22:28:15 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 95CE51065672 for ; Sun, 20 Dec 2009 22:28:15 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f211.google.com (mail-ew0-f211.google.com [209.85.219.211]) by mx1.freebsd.org (Postfix) with ESMTP id 235408FC1C for ; Sun, 20 Dec 2009 22:28:14 +0000 (UTC) Received: by ewy3 with SMTP id 3so5535792ewy.13 for ; Sun, 20 Dec 2009 14:28:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=WudWoODVF+Eqdc56a7t7fReTib/am7gbcr+R+pny+U8=; b=AbcrPPI43OcbjN2+TMO+HmI2wklW9OtHj2FkeU9x3lutHQltyCqZ2u0SzU0HhSnkVt XPFQh8c4zHzrHTd2hchHqlKVG78LmvmpivP+x+wxsZ1jJuVPuBxO2rcFqcn/JJm2R8zy kw8gvRUkXeU2lF12vNW8XM0d0GPN7y4UTtbIY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=L0C4K0TTnlhGCrLwfZPSHVNT/sHff5qP5iEh1nDxkj6JidWLy5KnUHMhcouSkEvj/R d42USmnsaeAD+Fi1P2rGk/t1K/UnNo1HiiROfw6l/1MvFa8PxX5RoXxQjwDd1JKEm99n d/UDpfRsbPnOxM5OSiM+rOJxxqW0zcRDtCUM4= MIME-Version: 1.0 Received: by 10.216.86.144 with SMTP id w16mr2218774wee.59.1261348094065; Sun, 20 Dec 2009 14:28:14 -0800
(PST) In-Reply-To: <4B2E95AE.9040402@fsn.hu> References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> <4B2E65FC.9070609@feral.com> <4B2E95AE.9040402@fsn.hu> Date: Sun, 20 Dec 2009 17:28:14 -0500 Message-ID: From: Thomas Burgess To: Attila Nagy Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org, Matthew Jacob Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 22:28:15 -0000 Each version of FreeBSD to have ZFS has gotten better. 7.0 was cool, but buggy. 7.1 fixed some stuff, but required a lot of tuning. 7.2 amd64 made it possible to run without any tuning on some systems, and now in 8.0 amd64 it's quite smooth for me. I think the benefits outweigh the issues. ZFS recently saved me from some serious data loss due to a failing RAID controller. End-to-end data integrity via checksums is great. On Sun, Dec 20, 2009 at 4:22 PM, Attila Nagy wrote: > Matthew Jacob wrote: > >> >> >>>>> >>>> Which use cases can you name? >>>> >>> Reliable data storage. :( >>> >> Jeez, I wrote this months ago. >> >> Do you feel that improving UFS is a better way to go? >> > No, I think ZFS is the good way (although it has its problems as well). And > I'm very grateful to the guys who worked on this. > I've just summed my experiences, which tells me ZFS is still not ready for > prime time. Where UFS keeps running for years, ZFS suddenly crashes, or > worse, just freezes, in a way, which is hard to debug for the average user > (a crashdump is easy, but when I can't even go to the debugger, that's > hard). 
> I hope that things will settle down and ZFS will be as much reliable in > FreeBSD as UFS is now (or even better, I've had some bad crashes with UFS > thanks to on-disk data corruption). > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Sun Dec 20 23:20:02 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F3E8F106566C; Sun, 20 Dec 2009 23:20:01 +0000 (UTC) (envelope-from ambsd@raisa.eu.org) Received: from raisa.eu.org (raisa.eu.org [83.17.178.202]) by mx1.freebsd.org (Postfix) with ESMTP id F02F08FC0C; Sun, 20 Dec 2009 23:20:00 +0000 (UTC) Received: from bolt.zol (62-121-98-25.home.aster.pl [62.121.98.25]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by raisa.eu.org (Postfix) with ESMTP id AB883241; Mon, 21 Dec 2009 00:23:07 +0100 (CET) Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: "Ben Schumacher" References: <9859143f0912142036k3dd0758fmc9cee9b6f2ce4698@mail.gmail.com> <9859143f0912162237q50fe147ej428905abf63c61b@mail.gmail.com> Date: Mon, 21 Dec 2009 00:19:51 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Emil Smolenski" Message-ID: In-Reply-To: <9859143f0912162237q50fe147ej428905abf63c61b@mail.gmail.com> User-Agent: Opera Mail/10.10 (FreeBSD) Cc: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org Subject: Re: SUIDDIR on ZFS? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Dec 2009 23:20:02 -0000 On Thu, 17 Dec 2009 07:37:31 +0100, Ben Schumacher wrote: >>> At any rate, I've been considering switching this to a ZFS RAIDZ now >>> that FreeBSD 8 is released and it seems that folks think it's stable, >>> but I'm curious if it can provide the SUIDDIR functionality I'm >>> currently using. >> Yes, it can. From my point of view it works the same way as on UFS. > Thanks for your response... I don't know that that's quite right. In fact, you're right. I used only the "g+s" file mode and it worked for both UFS and ZFS. Sorry for the confusion. > Any clues would be appreciated. Maybe ZVOL will be sufficient? It just works: # zfs create -V 1g tank/tmp/test1 # newfs /dev/zvol/tank/tmp/test1 # mkdir /tmp/test1 # mount -o suiddir /dev/zvol/tank/tmp/test1 /tmp/test1 # mkdir /tmp/test1/user1dir # chmod 4777 /tmp/test1/user1dir # chown user1:user1 /tmp/test1/user1dir # su - user2 $ cd /tmp/test1/user1dir $ touch test $ ll test -rw------- 1 user1 user1 - 0 Dec 21 00:14 test -- am From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 01:40:57 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EC00B1065672 for ; Mon, 21 Dec 2009 01:40:57 +0000 (UTC) (envelope-from mj@feral.com) Received: from ns1.feral.com (ns1.feral.com [192.67.166.1]) by mx1.freebsd.org (Postfix) with ESMTP id AED5D8FC0C for ; Mon, 21 Dec 2009 01:40:57 +0000 (UTC) Received: from [10.8.0.2] (remotevpn [10.8.0.2]) by ns1.feral.com (8.14.3/8.14.3) with ESMTP id nBL1etul026659 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sun, 20 Dec 2009 17:40:55 -0800 (PST) (envelope-from mj@feral.com) Message-ID: <4B2ED222.4050304@feral.com> Date: Sun, 20 Dec 2009 
17:40:50 -0800 From: Matthew Jacob Organization: Feral Software User-Agent: Thunderbird 2.0.0.23 (Windows/20090812) MIME-Version: 1.0 To: Thomas Burgess References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> <4B2E65FC.9070609@feral.com> <4B2E95AE.9040402@fsn.hu> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender DNS name whitelisted, not delayed by milter-greylist-4.2.3 (ns1.feral.com [10.8.0.1]); Sun, 20 Dec 2009 17:40:57 -0800 (PST) Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 01:40:58 -0000 > end to end data integrity via checksums is great. > > Mandatory given how close disk densities are to random bit failure. From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 02:15:22 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B32D106566C for ; Mon, 21 Dec 2009 02:15:22 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from ey-out-2122.google.com (ey-out-2122.google.com [74.125.78.26]) by mx1.freebsd.org (Postfix) with ESMTP id DDEBE8FC13 for ; Mon, 21 Dec 2009 02:15:21 +0000 (UTC) Received: by ey-out-2122.google.com with SMTP id d26so2366336eyd.3 for ; Sun, 20 Dec 2009 18:15:21 -0800 (PST) MIME-Version: 1.0 Received: by 10.216.93.14 with SMTP id k14mr2187868wef.152.1261361720985; Sun, 20 Dec 2009 18:15:20 -0800 (PST) In-Reply-To: <4B2ED222.4050304@feral.com> References: <20091030223225.GI5120@datapipe.com> <4AEB6D79.5070703@feral.com> <4B2E0FA9.1050003@fsn.hu> <4B2E65FC.9070609@feral.com> <4B2E95AE.9040402@fsn.hu> <4B2ED222.4050304@feral.com> Date: Sun, 20 Dec 2009 21:15:20 -0500 Message-ID: 
From: Thomas Burgess To: Matthew Jacob Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 02:15:22 -0000 Yes, it just saved me big time. I had a RAID controller failing on me. It caused errors across several drives. I had set up ZFS in passthrough mode, so it was able to find and repair the errors. Without scrubs and this feature I'd just be SOL. On Sun, Dec 20, 2009 at 8:40 PM, Matthew Jacob wrote: > > end to end data integrity via checksums is great. >> >> >> > Mandatory given how close disk densities are to random bit failure. > > From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 10:43:07 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 94269106566C for ; Mon, 21 Dec 2009 10:43:07 +0000 (UTC) (envelope-from ndenev@gmail.com) Received: from mail-fx0-f218.google.com (mail-fx0-f218.google.com [209.85.220.218]) by mx1.freebsd.org (Postfix) with ESMTP id 01E1D8FC14 for ; Mon, 21 Dec 2009 10:43:06 +0000 (UTC) Received: by fxm10 with SMTP id 10so2045772fxm.14 for ; Mon, 21 Dec 2009 02:43:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:subject:mime-version :content-type:from:in-reply-to:date:cc:content-transfer-encoding :message-id:references:to:x-mailer; bh=bAz2GNzFX+3VBcHP2uaIFOGMwWnyoggQZnTFS45j0Dc=; b=s32yX11CU00syVRg+YvEwZ/YdzkmYprbSqFPH1YAdT050+KSJyXjYZDr9PG3ou12Gk DZQmBs9SIGi5/FurGXxrHqbDD3u/e8Fem0z2Sr20tzSJAXYCrQYd3cSDHJsOH+F8NPLL Lqltvo6Lu06KSTETAnnRqJVMhcZHKJOm/NHe0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; 
h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; b=FonOg+npjQos5cw56xQUwuuGhFG5vTf9/+lwMwWaYvgBkJ2MrWcZf6VdsTCKqAJPEJ WArOCqgu2ycM+/v3SsVDFCK2sGvTuGG2FPbH/+UAfvyDq9MoEvIZ50u7AID296fDNwBf Qr33GasAw92N0wWTkpTqVuVXPh3O7iTGffqnk= Received: by 10.103.127.29 with SMTP id e29mr1454408mun.79.1261392185813; Mon, 21 Dec 2009 02:43:05 -0800 (PST) Received: from ?10.32.23.105? ([195.34.111.178]) by mx.google.com with ESMTPS id 25sm4679553mul.50.2009.12.21.02.43.03 (version=TLSv1/SSLv3 cipher=RC4-MD5); Mon, 21 Dec 2009 02:43:03 -0800 (PST) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=us-ascii From: Nikolay Denev In-Reply-To: <712903.15604.qm@web113517.mail.gq1.yahoo.com> Date: Mon, 21 Dec 2009 12:43:01 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> To: Pedro F. Giffuni X-Mailer: Apple Mail (2.1077) Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 10:43:07 -0000 On Dec 20, 2009, at 8:37 PM, Pedro F. Giffuni wrote: > Just wondering... > > What's wrong with gjournal(8) ? > > cheers, > > Pedro. > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" gjournal(8) journals everything, that is all data and metadata are journaled. Which can help with random writes, but essentially cuts linear write throughput in half. 
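For reference, a minimal gjournal setup looks roughly like the sketch below. This is only an illustration, not a tested recipe: "da0" and the mount point are placeholder names, and labeling writes gjournal metadata to the provider, so it must not hold data you want to keep.

```sh
# Load the journaling GEOM class (or build GEOM_JOURNAL into the kernel)
kldload geom_journal

# Put a journal on the provider; this creates /dev/da0.journal
gjournal label da0

# Create a UFS filesystem marked as journaled (-J); gjournal'ed
# filesystems are normally used without soft updates
newfs -J /dev/da0.journal

# Mount with async, as the gjournal(8) man page suggests, since the
# journal itself provides the consistency guarantees
mount -o async /dev/da0.journal /mnt
```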
Regards, Niki Denev From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 11:06:54 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D44001065672 for ; Mon, 21 Dec 2009 11:06:54 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A8E638FC24 for ; Mon, 21 Dec 2009 11:06:54 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nBLB6sZY004071 for ; Mon, 21 Dec 2009 11:06:54 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nBLB6s5Z004069 for freebsd-fs@FreeBSD.org; Mon, 21 Dec 2009 11:06:54 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 21 Dec 2009 11:06:54 GMT Message-Id: <200912211106.nBLB6s5Z004069@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 11:06:54 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/141800 fs [zfs] [patch] zfs pool update to v14 o kern/141718 fs [zfs] [panic] kernel panic when 'zfs rename' is used o o kern/141685 fs [zfs] zfs corruption on adaptec 5805 raid controller o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141257 fs [gvinum] No puedo crear RAID5 por SW con gvinum o kern/141235 fs [disklabel] 8.0 no longer provides /dev entries for al o kern/141194 fs [tmpfs] tmpfs treats the size option as mod 2^32 o kern/141177 fs [zfs] fsync() on FIFO causes panic() on zfs o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140682 fs [netgraph] [panic] random panic in netgraph o kern/140661 fs [zfs] /boot/loader fails to work on a GPT/ZFS-only sys o kern/140640 fs [zfs] snapshot crash o kern/140433 fs [zfs] [panic] panic while replaying ZIL after crash o kern/140134 fs [msdosfs] write and fsck destroy filesystem integrity o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs o bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/139363 fs [nfs] diskless root nfs mount from non FreeBSD server o kern/138790 fs [zfs] ZFS ceases caching when mem demand is high o kern/138524 fs [msdosfs] disks and usb flashes/cards with Russian lab o kern/138421 
fs [ufs] [patch] remove UFS label limitations o kern/138367 fs [tmpfs] [panic] 'panic: Assertion pages > 0 failed' wh o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/138109 fs [extfs] [patch] Minor cleanups to the sys/gnu/fs/ext2f f kern/137037 fs [zfs] [hang] zfs rollback on root causes FreeBSD to fr o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic o kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135594 fs [zfs] Single dataset unresponsive with Samba o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133980 fs [panic] [ffs] panic: ffs_valloc: dup alloc o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133614 fs [panic] panic: ffs_truncate: read-only filesystem o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int f kern/133150 fs [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132597 fs [tmpfs] [panic] tmpfs-related panic while interrupting o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131995 fs [nfs] Failure to mount NFSv4 server o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs 
makefs: error "Bad file descriptor" on the mount poin o kern/130979 fs [smbfs] [panic] boot/kernel/smbfs.ko o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130229 fs [iconv] usermount fails on fs that need iconv o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129059 fs [zfs] [patch] ZFS bootloader whitelistable via WITHOUT f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/127659 fs [tmpfs] tmpfs memory leak o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS p kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition f bin/124424 fs [zfs] zfs(8): zfs list -r shows strange snapshots' siz o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121779 fs [ufs] snapinfo(8) (and related tools?) 
only work for t o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha f kern/120991 fs [panic] [fs] [snapshot] System crashes when manipulati o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F f kern/119735 fs [zfs] geli + ZFS + samba starting on boot panics 7.0-B o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs mv(1): moving a directory changes its mtime o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o kern/116913 fs [ffs] [panic] ffs_blkfree: freeing free block p kern/116608 fs [msdosfs] [patch] msdosfs fails to check mount options o kern/116583 fs [ffs] [hang] System freezes for short time when using o kern/116170 fs [panic] Kernel panic when mounting /tmp o kern/115645 fs [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs 
[msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [iso9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna f kern/91568 fs [ufs] [panic] writing to UFS/softupdates DVD media in o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o kern/87859 fs [smbfs] System reboot while umount smbfs. 
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o kern/85326 fs [smbfs] [panic] saving a file via samba to an overquot o kern/84589 fs [2TB] 5.4-STABLE unresponsive during background fsck 2 o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 151 problems total. 
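As the note at the top of the listing says, any individual PR can be pulled up through the query-pr CGI. For example, using FreeBSD's stock fetch(1) with a PR number taken from the list above (the URL pattern is the one given in the report, with the number substituted in):

```sh
# Fetch the full report for kern/141800 ([zfs] [patch] zfs pool update to v14)
# and print it to stdout
fetch -o - "http://www.freebsd.org/cgi/query-pr.cgi?pr=141800"
```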
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 15:06:41 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 35C1A1065672 for ; Mon, 21 Dec 2009 15:06:41 +0000 (UTC) (envelope-from giffunip@tutopia.com) Received: from web113517.mail.gq1.yahoo.com (web113517.mail.gq1.yahoo.com [98.136.167.57]) by mx1.freebsd.org (Postfix) with SMTP id E94AD8FC1E for ; Mon, 21 Dec 2009 15:06:40 +0000 (UTC) Received: (qmail 48794 invoked by uid 60001); 21 Dec 2009 15:06:40 -0000 Message-ID: <240049.46806.qm@web113517.mail.gq1.yahoo.com> X-YMail-OSG: qbr3iNoVM1k1QGHYooESCmezMlnVStW4babYau8Q8tvB58InYXKZMxfHkBcI67yvaMzc4IxWn1GD3CDIMi.jtloJ6PgROBOK4C9L5NRo.M5XodzOWI8ukorUPQz7ukdwrVmeqmXNySeshHIY7tj16IFWEDNwFADmAzuGIYCtiE_k4J8ybdsOQntIdkAM803BZPZVCbjOxCbTJLv6yc3qVnUzgWgNN3p666dQQYF8L_z9Iq8wKcnysyG8v828LW2gUiRtDFP7vww8OW8n9L9ZrkPcAvts3C9onooVupMJkylBEtrfLrme5MAWjU0VUwby7zzIv3a4JoaJLhaG4MziLA-- Received: from [190.157.123.47] by web113517.mail.gq1.yahoo.com via HTTP; Mon, 21 Dec 2009 07:06:40 PST X-RocketYMMF: giffunip X-Mailer: YahooMailRC/240.3 YahooMailWebService/0.8.100.260964 References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> Date: Mon, 21 Dec 2009 07:06:40 -0800 (PST) From: "Pedro F. Giffuni" To: Nikolay Denev In-Reply-To: <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 15:06:41 -0000 ----- Original Message ---- > > On Dec 20, 2009, at 8:37 PM, Pedro F. Giffuni wrote: > > > Just wondering... > > > > What's wrong with gjournal(8) ? > > ... 
> > gjournal(8) journals everything, that is all data and metadata are journaled. > Which can help with random writes, but essentially cuts linear write throughput > in half. > > Regards, > Niki Denev I recall ext3fs also journals everything by default and still is very popular. I am asking because I've been playing a bit with Aditya's ext2fs (mostly UFS1) and one of the ideas there is adding gjournal support instead of starting from scratch. Pedro.. From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 16:24:22 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CDE371065676 for ; Mon, 21 Dec 2009 16:24:22 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 817888FC1E for ; Mon, 21 Dec 2009 16:24:22 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApoEAPgvL0uDaFvH/2dsb2JhbADWUYQuBA X-IronPort-AV: E=Sophos;i="4.47,431,1257138000"; d="scan'208";a="58619896" Received: from danube.cs.uoguelph.ca ([131.104.91.199]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 21 Dec 2009 11:24:20 -0500 Received: from localhost (localhost.localdomain [127.0.0.1]) by danube.cs.uoguelph.ca (Postfix) with ESMTP id C93801085ACA; Mon, 21 Dec 2009 11:24:20 -0500 (EST) X-Virus-Scanned: amavisd-new at danube.cs.uoguelph.ca Received: from danube.cs.uoguelph.ca ([127.0.0.1]) by localhost (danube.cs.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id FbVens+2U9Wk; Mon, 21 Dec 2009 11:24:20 -0500 (EST) Received: from muncher.cs.uoguelph.ca (muncher.cs.uoguelph.ca [131.104.91.102]) by danube.cs.uoguelph.ca (Postfix) with ESMTP id E849A1085987; Mon, 21 Dec 2009 11:24:19 -0500 (EST) Received: from localhost (rmacklem@localhost) by muncher.cs.uoguelph.ca (8.11.7p3+Sun/8.11.6) with ESMTP id nBLGXbb18672; Mon, 21 Dec 2009 11:33:37 
-0500 (EST) X-Authentication-Warning: muncher.cs.uoguelph.ca: rmacklem owned process doing -bs Date: Mon, 21 Dec 2009 11:33:37 -0500 (EST) From: Rick Macklem X-X-Sender: rmacklem@muncher.cs.uoguelph.ca To: Bob Friesenhahn In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org Subject: Re: am-utils/NFS mount lockups in 8.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 16:24:22 -0000 On Sun, 20 Dec 2009, Bob Friesenhahn wrote: >> You could try the patch at: >> http://people.freebsd.org/~rmacklem/patches/freebsd8-clntvc.patch >> (Fixes an issue w.r.t. client side TCP reconnects and didn't quite make >> it into 8.0.) >> >> I am trying to keep a list of FreeBSD8.0 NFS fixes at: >> http://people.freebsd.org/~rmacklem > > Ok, thanks. TCP reconnect does seem like where the failure occurs. > > Do you know if these fixes will make it into a kernel updated via > 'freebsd-update' or will it be necessary to patch and build a custom kernel? > The ones currently listed are in stable/8. Being new to FreeBSD, I'm not sure what "freebsd-update" does. 
I had assumed that they'll be in 8.1, but I'm not sure, rick From owner-freebsd-fs@FreeBSD.ORG Mon Dec 21 19:01:53 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 72EEF1065670; Mon, 21 Dec 2009 19:01:53 +0000 (UTC) (envelope-from benschumacher@gmail.com) Received: from mail-pz0-f185.google.com (mail-pz0-f185.google.com [209.85.222.185]) by mx1.freebsd.org (Postfix) with ESMTP id 3E92C8FC26; Mon, 21 Dec 2009 19:01:53 +0000 (UTC) Received: by pzk15 with SMTP id 15so3736713pzk.3 for ; Mon, 21 Dec 2009 11:01:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type:content-transfer-encoding; bh=DmlAHvKwjig1TrFKRbFruqbbEZskBVmuAsVQKbzScpk=; b=cr3GM6svOJXuWaChrRsXAAQyY4r6I9R+qyx5R3mdHbRlyjA9veOdlVIFXF58+LaTpr DTDKhfi2EFRc7LbjS9jFMI3aQpCLN0PrPzMK2OSA+nxN06UlftrNz0HUzEX7sVInvpn8 Cx7LgclhWtDM0+gtlAnsx/ij2mTLN/g8gbdvk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=nuynsZbD5ixDc2Gu7G5d2raFginSYJYDl69rhlPB5PoRlKp6rJK99hN5VwIK5p9Zd8 kExhGpMp0bSWCyFxN2m7Vr3pXQRH7yoCT5LBpe+a/ZN3Ckks+QDKGpsDxbC97RmmzFa5 P4b9KBslBArwRUQu9pMX8OuakzG2Dh0V36wG0= MIME-Version: 1.0 Sender: benschumacher@gmail.com Received: by 10.143.21.34 with SMTP id y34mr5103988wfi.16.1261422112761; Mon, 21 Dec 2009 11:01:52 -0800 (PST) In-Reply-To: References: <9859143f0912142036k3dd0758fmc9cee9b6f2ce4698@mail.gmail.com> <9859143f0912162237q50fe147ej428905abf63c61b@mail.gmail.com> Date: Mon, 21 Dec 2009 12:01:52 -0700 X-Google-Sender-Auth: d16fa3c567c6d856 Message-ID: <9859143f0912211101v18dc4f4bjd0ee55bf5846dc35@mail.gmail.com> From: Ben Schumacher To: Emil 
Smolenski Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org Subject: Re: SUIDDIR on ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Dec 2009 19:01:53 -0000 On Sun, Dec 20, 2009 at 4:19 PM, Emil Smolenski wrote: > In fact, you're right. I used only the "g+s" file mode and it worked for > both UFS and ZFS. Sorry for the confusion. > >> Any clues would be appreciated. > > Maybe ZVOL will be sufficient? It just works: > > # zfs create -V 1g tank/tmp/test1 > # newfs /dev/zvol/tank/tmp/test1 > # mkdir /tmp/test1 > # mount -o suiddir /dev/zvol/tank/tmp/test1 /tmp/test1 > # mkdir /tmp/test1/user1dir > # chmod 4777 /tmp/test1/user1dir > # chown user1:user1 /tmp/test1/user1dir > # su - user2 > $ cd /tmp/test1/user1dir > $ touch test > $ ll test > -rw------- 1 user1 user1 - 0 Dec 21 00:14 test Emil- Yes. That works. I had that thought shortly after sending my last email. I'm going to try it. I guess the downside is that it doesn't give me the dynamic size abilities of ZFS, but I can probably just dedicate a large chunk of storage to the ZVOL. 
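[Editorial note, not from the original thread: ZFS can also create the ZVOL sparse, which partly addresses the dynamic-sizing concern — pool space is only consumed as the UFS filesystem actually writes blocks. A hedged sketch; dataset names and sizes are illustrative.]

```sh
# Sketch (assumed names): a sparse/thin-provisioned ZVOL via -s,
# then UFS mounted with suiddir on top of it.
zfs create -s -V 500g tank/suiddir       # reserves no pool space up front
newfs -U /dev/zvol/tank/suiddir          # UFS2 with soft updates
mount -o suiddir /dev/zvol/tank/suiddir /export/shared
```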
Thanks, Ben From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 00:01:52 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1D009106566B; Tue, 22 Dec 2009 00:01:52 +0000 (UTC) (envelope-from rwatson@FreeBSD.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id ED6AF8FC18; Tue, 22 Dec 2009 00:01:51 +0000 (UTC) Received: from fledge.watson.org (fledge.watson.org [65.122.17.41]) by cyrus.watson.org (Postfix) with ESMTPS id 9140346B03; Mon, 21 Dec 2009 19:01:51 -0500 (EST) Date: Tue, 22 Dec 2009 00:01:51 +0000 (GMT) From: Robert Watson X-X-Sender: robert@fledge.watson.org To: "Pedro F. Giffuni" In-Reply-To: <240049.46806.qm@web113517.mail.gq1.yahoo.com> Message-ID: References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org, jeff@FreeBSD.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 00:01:52 -0000 On Mon, 21 Dec 2009, Pedro F. Giffuni wrote: >> gjournal(8) journals everything, that is all data and metadata are >> journaled. Which can help with random writes, but essentially cuts linear >> write throughput in half. > > I recall ext3fs also journals everything by default and still is very > popular. > > I am asking because I've been playing a bit with Aditya's ext2fs (mostly > UFS1) and one of the ideas there is adding gjournal support instead of > starting from scratch. 
I'm CC'ing Jeff Roberson, who perhaps can comment on his on-going project to merge journaling techniques with soft updates in UFS (which is just meta-data journaling, but hopefully will address many of the fsck/bgfsck-related concerns people have). Robert N M Watson Computer Laboratory University of Cambridge From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 01:17:44 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CB0F31065670 for ; Tue, 22 Dec 2009 01:17:44 +0000 (UTC) (envelope-from jroberson@jroberson.net) Received: from mail-yw0-f172.google.com (mail-yw0-f172.google.com [209.85.211.172]) by mx1.freebsd.org (Postfix) with ESMTP id 8CBEA8FC08 for ; Tue, 22 Dec 2009 01:17:44 +0000 (UTC) Received: by ywh2 with SMTP id 2so5990105ywh.27 for ; Mon, 21 Dec 2009 17:17:43 -0800 (PST) Received: by 10.150.38.4 with SMTP id l4mr12026784ybl.340.1261442804484; Mon, 21 Dec 2009 16:46:44 -0800 (PST) Received: from ?10.0.1.198? (udp022762uds.hawaiiantel.net [72.234.79.107]) by mx.google.com with ESMTPS id 6sm2196328ywd.22.2009.12.21.16.46.41 (version=SSLv3 cipher=RC4-MD5); Mon, 21 Dec 2009 16:46:43 -0800 (PST) Date: Mon, 21 Dec 2009 14:47:49 -1000 (HST) From: Jeff Roberson X-X-Sender: jroberson@desktop To: Robert Watson In-Reply-To: Message-ID: References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org, jeff@FreeBSD.org, "Pedro F. 
Giffuni" Subject: SU+J, journaled softupdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 01:17:44 -0000 On Tue, 22 Dec 2009, Robert Watson wrote: > I'm CC'ing Jeff Roberson, who perhaps can comment on his on-going project to > merge journaling techniques with soft updates in UFS (which is just meta-data > journaling, but hopefully will address many of the fsck/bgfsck-related > concerns people have). Hi folks, I have blogged a little bit about my plans at http://jeffr_tech.livejournal.com/. Briefly, I have created a small intent journal that works in concert with softupdates to eliminate the requirement for fsck on boot. Instead fsck has been augmented with a recovery process that reads the journal and corrects the filesystem. The recovery process is very quick and scales with the size of the journal, not the filesystem. The journal is enabled with tunefs on an existing ffs filesystem. The filesystem must be clean to enable. The filesystem must be clean before mounting with a legacy implementation but it is otherwise metadata compatible. A legacy fsck will destroy the journal but otherwise there are no other problems. Presently the code does not support non-journaled softupdates but I will rectify that. Peter Holm (pho@) has been helping me test and once I've fixed the final bugs revealed by his stress2 suite I will make a release and provide instructions. I'm hoping many people will be interested so we can quickly get to some confidence with the patch. One thing to keep in mind is that at runtime this can only slow down softupdates. It is additional overhead which I have attempted to minimize but there is no way around it. To eliminate fsck you must pay a price. This also can not cope with disks that lie about write ordering coupled with power failures. 
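[Editorial sketch of the enable/recover workflow Jeff describes — hedged: his patch was unreleased at the time of this mail, so the exact flags here are assumptions (`tunefs -j enable` is the interface that later shipped in FreeBSD), and the device name is illustrative.]

```sh
# The filesystem must be clean and unmounted before enabling the journal.
umount /data
tunefs -j enable /dev/ada0p2     # assumed device name
mount /dev/ada0p2 /data
# After a crash, fsck replays the intent journal instead of a full scan:
fsck -p /dev/ada0p2
```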
In this case the normal fsck can recover the file system. Cheers, Jeff > > Robert N M Watson > Computer Laboratory > University of Cambridge > From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 02:10:03 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CEE5C106566C for ; Tue, 22 Dec 2009 02:10:03 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 8FD478FC1D for ; Tue, 22 Dec 2009 02:10:03 +0000 (UTC) Received: by email.octopus.com.au (Postfix, from userid 1002) id B85655CB8D0; Tue, 22 Dec 2009 12:46:47 +1100 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.1.50.60] (ppp121-44-16-106.lns20.syd6.internode.on.net [121.44.16.106]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id AB8CD5CB8B5; Tue, 22 Dec 2009 12:46:43 +1100 (EST) Message-ID: <4B302A6D.3000408@modulus.org> Date: Tue, 22 Dec 2009 13:09:49 +1100 From: Andrew Snow User-Agent: Thunderbird 2.0.0.14 (X11/20080523) MIME-Version: 1.0 To: freebsd-fs , jeff@FreeBSD.org References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 02:10:03 
-0000 Is there any provision to put the journal on a seperate device? This can solve the performance degradation issue. - Andrew From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 02:48:31 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 59908106568B for ; Tue, 22 Dec 2009 02:48:31 +0000 (UTC) (envelope-from benschumacher@gmail.com) Received: from mail-pw0-f44.google.com (mail-pw0-f44.google.com [209.85.160.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2EB3A8FC0A for ; Tue, 22 Dec 2009 02:48:30 +0000 (UTC) Received: by pwi15 with SMTP id 15so3877937pwi.3 for ; Mon, 21 Dec 2009 18:48:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=2gpsTgaEay9y4KSMfa7K9kJCshxjw9xjXZ+TkBo6io8=; b=aw+/RBfzzF09tXkeFpwCBjECmsvCluNIe5h4O3TaADKUa7PUW4aiwVhI607m3P88uX aZ+ostSZkVw3shCTCkUzH1AXHZJexpYii4g5FNlaYyMy9S2XO4u9Yt9m3jwXz5tfOZof R5VWoKW6CjFXjV8msjF9cLXCVudtntUajIx5k= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=vJI0N+ly0zODZy2lSY0h99QOkzyF7plHc/S5S/uU6Yolfp1cT/D72P3k0+dFFXByuZ tufCRrgQRkldq3iAGBthgORNRoUhXc/kDEYRpZJ/vrcmAmiKWJAl71sCRDTFpnxAeG4F aIClByue2s95tv5PDiEtZ4LE5a3gzCkzWEI8w= MIME-Version: 1.0 Received: by 10.143.26.41 with SMTP id d41mr3266594wfj.13.1261450110557; Mon, 21 Dec 2009 18:48:30 -0800 (PST) In-Reply-To: <4B302A6D.3000408@modulus.org> References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> <4B302A6D.3000408@modulus.org> Date: Mon, 21 Dec 2009 19:48:30 -0700 Message-ID: <9859143f0912211848n447183dfs7e0e4bd02e52c5ad@mail.gmail.com> From: Ben 
Schumacher To: freebsd-fs Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 02:48:31 -0000 On Mon, Dec 21, 2009 at 7:09 PM, Andrew Snow wrote: > Is there any provision to put the journal on a separate device? This can > solve the performance degradation issue. I'm not sure that would provide much more reliability, however. If you're adding a second device to use as the journal, you risk losing data if either device fails... and given how small the data is (I believe 32 bytes per entry), I'm not sure it's really worthwhile. Most of what I've read in the past about FreeBSD's FS interactions seems to imply that the lower-level (ATA layer) is not terribly optimized for the larger drives and faster interfaces available these days. I'm sure I could go Google up some references, but I think it's been well covered in the past. 
Cheers, Ben From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 03:28:45 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BB77D1065679 for ; Tue, 22 Dec 2009 03:28:45 +0000 (UTC) (envelope-from davidn04@gmail.com) Received: from mail-qy0-f176.google.com (mail-qy0-f176.google.com [209.85.221.176]) by mx1.freebsd.org (Postfix) with ESMTP id 749D28FC13 for ; Tue, 22 Dec 2009 03:28:45 +0000 (UTC) Received: by qyk6 with SMTP id 6so2547099qyk.3 for ; Mon, 21 Dec 2009 19:28:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=UPsOctBFiaz4qafYZiT/Kt8is7HVlOd4aFUCrw69xOA=; b=Ejv6g4B1PVN3xUukYbIGrAJoEiirg1q/SPBPBtOt20s9i69ghVR0WR3kwSYXSrVdjB LXqYkdyrk3Gn8CQIR3MmZSNwQiDNNGfhqxxe9IfM6XigQY5sDMxDnqdWR/GRXMfTE2AG 997UiTiCftxZY+aQTR3MNLtSt9FfJVMAnD+Os= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=QtXdFxLTZUlvzechwXX2dJPg5sU7HOINlvza9lY9kXmzpvH+dMJtVh6GI7HiwdAFtk e7EgI2IGAzPdnybmHqAez+f1IpIOyr/L0XPJFIwqnFLcAbO4vnhvLdw//G+QUMRWssjz JGF6xeznXAooJ+VZ85ns3U9m5fYMZ5hjrv9HY= MIME-Version: 1.0 Received: by 10.229.43.79 with SMTP id v15mr3741231qce.40.1261452524352; Mon, 21 Dec 2009 19:28:44 -0800 (PST) In-Reply-To: <240049.46806.qm@web113517.mail.gq1.yahoo.com> References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> Date: Tue, 22 Dec 2009 14:28:44 +1100 Message-ID: <4d7dd86f0912211928y78ce91ect8b86dc1ef4a8d0d1@mail.gmail.com> From: David N To: "Pedro F. 
Giffuni" Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 03:28:45 -0000 2009/12/22 Pedro F. Giffuni : > ----- Original Message ---- > >> >> On Dec 20, 2009, at 8:37 PM, Pedro F. Giffuni wrote: >> >> > Just wondering... >> > >> > What's wrong with gjournal(8) ? >> > > ... >> >> gjournal(8) journals everything, that is all data and metadata are journaled. >> Which can help with random writes, but essentially cuts linear write throughput >> in half. >> >> Regards, >> Niki Denev > > > I recall ext3fs also journals everything by default and still is very popular. > > I am asking because I've been playing a bit with Aditya's ext2fs (mostly UFS1) > and one of the ideas there is adding gjournal support instead of starting from > scratch. > > Pedro.. > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > Hi, Ext3 by default only journals the meta-data http://en.wikipedia.org/wiki/Ext3 From http://wiki.archlinux.org/index.php/Ext3_Filesystem_Tips "By default, ext3 partitions mount with the 'ordered' data mode. In this mode, all data is written to the main filesystem and its metadata is committed to the journal, whose blocks are logically grouped into transactions to decrease disk I/O" You have to enable full journalling by setting the option in fstab to journal everything. 
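[Editorial note: the ext3 data mode is selected per mount; a hedged fstab sketch, with the device and mountpoint purely illustrative:]

```
# /etc/fstab -- override ext3's default data=ordered with full data journalling
/dev/sda2  /home  ext3  rw,data=journal  0  2
```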
Regards David N From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 03:52:39 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E150D1065676 for ; Tue, 22 Dec 2009 03:52:39 +0000 (UTC) (envelope-from giffunip@tutopia.com) Received: from web113501.mail.gq1.yahoo.com (web113501.mail.gq1.yahoo.com [98.136.167.41]) by mx1.freebsd.org (Postfix) with SMTP id 9A5408FC18 for ; Tue, 22 Dec 2009 03:52:39 +0000 (UTC) Received: (qmail 3964 invoked by uid 60001); 22 Dec 2009 03:52:38 -0000 Message-ID: <865465.3781.qm@web113501.mail.gq1.yahoo.com> X-YMail-OSG: pveCf3YVM1kh0_Z6QkdYEFQQ5VP34K8LIcDzTHC06i_uVyCi6wcVw16iWOTXwT4IL._pyUhOA1qxqt8.yY_tk1kqvaXq1v6.Hse_oWhLh0p0R2wbeqlpYIDLz5GWWy4LngNrnLTujVaSETokl_1w4fl2BWcdxcaTRtm.GAXKP0.diNvE7rCU.qMhYK48ScB.JElaZ_qy7Y2NvuRV4v28tqzgNYjkjHgjxWq84sCakiI2Vi24JYK4RSy0hg6bdUMu061kzqDqawG4_4tFh5Ms4ap3OK0h39nCnEfHusUir2crwgAlTjWhS5RhM9vim_ZIKb1p3G10WOgIhUDBLrswbCrAYnvVJUN46PTA_.Lagik18DrZQKSeegAM.b96duSPKrOdEtwhnpz3_0OEReeVnuHXi2PbDUklRqxHqPgRQYnorH14WCeGYGmv9SbUTyuq7qmt3sHYxwnLU4bBZtWT9iNM5eCn5hYWpA8nrQaEuS5VMEMKSD13_7P64mmmTkvgWWhBon1YcIPCI95AP3deEs8uhs9BkqvYTOKqEyw_ninalQLJWhgJFXXySlvrb0OFPRvs346SD8HsnJbnIrQmM.2nIH0j0cgpt1fDwJsz7DQ._mTVt9.7eEdUJzcWC7Jj Received: from [190.157.123.47] by web113501.mail.gq1.yahoo.com via HTTP; Mon, 21 Dec 2009 19:52:38 PST X-RocketYMMF: giffunip X-Mailer: YahooMailRC/240.3 YahooMailWebService/0.8.100.260964 References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> <4d7dd86f0912211928y78ce91ect8b86dc1ef4a8d0d1@mail.gmail.com> Date: Mon, 21 Dec 2009 19:52:38 -0800 (PST) From: "Pedro F. 
Giffuni" To: David N In-Reply-To: <4d7dd86f0912211928y78ce91ect8b86dc1ef4a8d0d1@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 03:52:40 -0000 ----- Original Message ---- ... > > Hi, > > Ext3 by default only journals the meta-data > http://en.wikipedia.org/wiki/Ext3 > > From http://wiki.archlinux.org/index.php/Ext3_Filesystem_Tips > "By default, ext3 partitions mount with the 'ordered' data mode. In > this mode, all data is written to the main filesystem and its metadata > is committed to the journal, whose blocks are logically grouped into > transactions to decrease disk I/O" > > You have to enable full journalling by setting the option in fstab to > journal everything. > > Regards > David N I stand corrected. Apparently the data=journal mode is not bad at all though: http://www.ibm.com/developerworks/linux/library/l-fs8.html cheers, Pedro. 
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 17:01:32 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D97F8106568B; Tue, 22 Dec 2009 17:01:32 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id B07658FC19; Tue, 22 Dec 2009 17:01:32 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nBMH1Wp0097918; Tue, 22 Dec 2009 17:01:32 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nBMH1WPh097914; Tue, 22 Dec 2009 17:01:32 GMT (envelope-from linimon) Date: Tue, 22 Dec 2009 17:01:32 GMT Message-Id: <200912221701.nBMH1WPh097914@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141897: [msdosfs] [panic] Kernel panic. msdofs: file name length 266 to large. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 17:01:32 -0000 Old Synopsis: Kernel panic. msdofs: file name length 266 to large. New Synopsis: [msdosfs] [panic] Kernel panic. msdofs: file name length 266 to large. Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue Dec 22 17:01:08 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=141897 From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 19:28:28 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6FA18106568B for ; Tue, 22 Dec 2009 19:28:28 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 37BA28FC1E for ; Tue, 22 Dec 2009 19:28:27 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.13.8+Sun/8.13.8) with ESMTP id nBMJSQNo029332; Tue, 22 Dec 2009 13:28:27 -0600 (CST) Date: Tue, 22 Dec 2009 13:28:26 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Rick Macklem In-Reply-To: Message-ID: References: User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Tue, 22 Dec 2009 13:28:27 -0600 (CST) Cc: freebsd-fs@freebsd.org Subject: Re: am-utils/NFS mount lockups in 8.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 19:28:28 -0000 On Sun, 20 Dec 2009, Rick Macklem wrote: >> > You could try the patch at: > http://people.freebsd.org/~rmacklem/patches/freebsd8-clntvc.patch > (Fixes an issue w.r.t. client side TCP reconnects and didn't quite make > it into 8.0.) > > I am trying to keep a list of FreeBSD8.0 NFS fixes at: > http://people.freebsd.org/~rmacklem It seems that this patch has eliminated the issues I was seeing. No problems for over a day now, and previously the problem became evident in just a few minutes. 
Thank you very much for your assistance. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 22:45:56 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 23B8D106566C for ; Tue, 22 Dec 2009 22:45:56 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-yw0-f172.google.com (mail-yw0-f172.google.com [209.85.211.172]) by mx1.freebsd.org (Postfix) with ESMTP id D13DD8FC19 for ; Tue, 22 Dec 2009 22:45:55 +0000 (UTC) Received: by ywh2 with SMTP id 2so6924489ywh.27 for ; Tue, 22 Dec 2009 14:45:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:from:content-type :content-transfer-encoding:subject:date:message-id:to:mime-version :x-mailer; bh=r7sTZSER3Lo07PMSPAZRMFhEKAn0nF/qG7dpYt0YuZY=; b=HrwgwYxtFt0j/M89v44fG2Bgl+jjrOybDj8uNiU7vL+9YaqNITsQjdYJqC5KjYcp2b LnwdkWf720A/aN42vlub2gugjDoMEiQxqDxa6h60Ej3bcpMl2pSa+7JWimg4SKA7PZqF omxA+GnVJ/A0i5CGwNTzBcCHoVcnRe6Z7uozI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:content-type:content-transfer-encoding:subject:date:message-id :to:mime-version:x-mailer; b=RBnqATI9cBTqmVeen78oRIMru7PwHbz/NblyHqKAJOm4kd1RWuaEZhz1zEUc4OepSv kzNyC861EY0M9khJR1gD+ImcxO3MHOltzmVAG4ZHXGMHLBO+V9LLZ7BCa5Jx0flDW2kZ CheG2UtLxXL/Vl47uNCcRwKAsPWYIqJymNwM0= Received: by 10.150.106.15 with SMTP id e15mr2886735ybc.300.1261520334281; Tue, 22 Dec 2009 14:18:54 -0800 (PST) Received: from ?192.168.42.92? 
(70-36-134-162.dsl.dynamic.sonic.net [70.36.134.162]) by mx.google.com with ESMTPS id 4sm2998966yxd.34.2009.12.22.14.18.52 (version=TLSv1/SSLv3 cipher=RC4-MD5); Tue, 22 Dec 2009 14:18:53 -0800 (PST) From: Steven Schlansker Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Date: Tue, 22 Dec 2009 14:18:20 -0800 Message-Id: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Apple Message framework v1077) X-Mailer: Apple Mail (2.1077) Subject: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 22:45:56 -0000 Hello fellow FreeBSDers, I've got a nice shiny ZFS raidz2 array set up, but it's gotten stuck in a DEGRADED state and I can't figure out how to recover! Here's the array as it stands now: [steven@universe:~]% sudo zpool status pool: universe state: DEGRADED scrub: scrub in progress for 0h9m, 2.19% done, 6h41m to go config: NAME STATE READ WRITE CKSUM universe DEGRADED 0 0 0 raidz2 DEGRADED 0 0 0 ad16 ONLINE 0 0 0 replacing UNAVAIL 0 5.93K 0 = insufficient replicas 3961920099899285277 UNAVAIL 0 7.11K 0 was = /dev/concat/back0/old 6170688083648327969 UNAVAIL 0 7.11K 0 was = /dev/ad12 ad8 ONLINE 0 0 0 concat/back2 ONLINE 0 0 0 ad10 ONLINE 0 0 0 concat/ad4ex ONLINE 0 0 0 ad24 ONLINE 0 0 0 concat/ad6ex ONLINE 0 0 0 errors: No known data errors One of my drives failed. I replaced it, and in the process of replacing accidentally pulled the other drive. 
Now, I can't seem to fix it in any way - [steven@universe:~]% sudo zpool replace universe 3961920099899285277 ad26 cannot replace 3961920099899285277 with ad26: cannot replace a replacing device [steven@universe:~]% sudo zpool replace universe 6170688083648327969 ad26 cannot replace 6170688083648327969 with ad26: cannot replace a replacing device [steven@universe:~]% sudo zpool detach universe 6170688083648327969 cannot detach 6170688083648327969: no valid replicas [steven@universe:~]% sudo zpool detach universe 3961920099899285277 cannot detach 3961920099899285277: no valid replicas Any thoughts? As a corollary, you may notice some funky concat business going on. This is because I have drives which are very slightly different in size (< 1MB) and whenever one of them goes down and I bring the pool up, it helpfully (?) expands the pool by a whole megabyte then won't let the drive back in. This is extremely frustrating... is there any way to fix that? I'm eventually going to keep expanding each of my drives one megabyte at a time using gconcat and space on another drive! Very frustrating... 
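[Editorial note, not from the original thread: one hedged workaround for the size-creep half of the problem (not for the stuck replace itself) is to give ZFS a fixed-size, labelled partition slightly smaller than the smallest disk instead of the raw device, so any replacement drive always fits. Device names, sizes, and labels below are illustrative.]

```sh
# Sketch (assumed names/sizes): pin the vdev below the smallest drive's capacity.
gpart create -s gpt ad26
gpart add -t freebsd-zfs -s 930G -l disk5 ad26
zpool replace universe 3961920099899285277 gpt/disk5  # label survives renumbering
```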
Thank you for any suggestions, Steven= From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 23:15:49 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D64C41065692 for ; Tue, 22 Dec 2009 23:15:49 +0000 (UTC) (envelope-from 000.fbsd@quip.cz) Received: from elsa.codelab.cz (elsa.codelab.cz [94.124.105.4]) by mx1.freebsd.org (Postfix) with ESMTP id 948CA8FC27 for ; Tue, 22 Dec 2009 23:15:49 +0000 (UTC) Received: from localhost (localhost.codelab.cz [127.0.0.1]) by elsa.codelab.cz (Postfix) with ESMTP id 7F58619E046; Wed, 23 Dec 2009 00:15:47 +0100 (CET) Received: from [192.168.1.2] (r5bb235.net.upc.cz [86.49.61.235]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by elsa.codelab.cz (Postfix) with ESMTPSA id 2716919E045; Wed, 23 Dec 2009 00:15:45 +0100 (CET) Message-ID: <4B315320.5050504@quip.cz> Date: Wed, 23 Dec 2009 00:15:44 +0100 From: Miroslav Lachman <000.fbsd@quip.cz> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.6) Gecko/20091206 SeaMonkey/2.0.1 MIME-Version: 1.0 To: Steven Schlansker References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> In-Reply-To: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 23:15:49 -0000 Steven Schlansker wrote: > As a corollary, you may notice some funky concat business going on. > This is because I have drives which are very slightly different in size (< 1MB) > and whenever one of them goes down and I bring the pool up, it helpfully (?) 
> expands the pool by a whole megabyte then won't let the drive back in. > This is extremely frustrating... is there any way to fix that? I'm > eventually going to keep expanding each of my drives one megabyte at a time > using gconcat and space on another drive! Very frustrating... You can avoid it by partitioning the drives to the well-known 'minimal' size (the size of the smallest disk) and using the partition instead of the raw disk. For example ad12s1 instead of ad12 (if you create slices with fdisk), or ad12p1 (if you create partitions with gpart). You can also use labels instead of device names. Miroslav Lachman From owner-freebsd-fs@FreeBSD.ORG Tue Dec 22 23:34:44 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 024A41065692; Tue, 22 Dec 2009 23:34:44 +0000 (UTC) (envelope-from jroberson@jroberson.net) Received: from mail-gx0-f218.google.com (mail-gx0-f218.google.com [209.85.217.218]) by mx1.freebsd.org (Postfix) with ESMTP id A5CA98FC08; Tue, 22 Dec 2009 23:34:43 +0000 (UTC) Received: by gxk10 with SMTP id 10so6474047gxk.3 for ; Tue, 22 Dec 2009 15:34:43 -0800 (PST) Received: by 10.101.144.2 with SMTP id w2mr14628588ann.158.1261524882959; Tue, 22 Dec 2009 15:34:42 -0800 (PST) Received: from ?10.0.1.198? 
(udp022762uds.hawaiiantel.net [72.234.79.107]) by mx.google.com with ESMTPS id 22sm2574528ywh.30.2009.12.22.15.34.40 (version=SSLv3 cipher=RC4-MD5); Tue, 22 Dec 2009 15:34:42 -0800 (PST) Date: Tue, 22 Dec 2009 13:35:52 -1000 (HST) From: Jeff Roberson X-X-Sender: jroberson@desktop To: Andrew Snow In-Reply-To: <4B302A6D.3000408@modulus.org> Message-ID: References: <712903.15604.qm@web113517.mail.gq1.yahoo.com> <3612709F-15CA-4A59-86B1-2674BAA2936D@gmail.com> <240049.46806.qm@web113517.mail.gq1.yahoo.com> <4B302A6D.3000408@modulus.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs , jeff@FreeBSD.org Subject: Re: Plans for Logged/Journaled UFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Dec 2009 23:34:44 -0000 On Tue, 22 Dec 2009, Andrew Snow wrote: > > Is there any provision to put the journal on a seperate device? This can > solve the performance degradation issue. Currently there is no plan for it. The amount of journal data is so small compared to typical journaled filesystems I don't think we'll see a significant slowdown due to the journal writes. Consider the operation of creating a new file. We must write a cyl group to allocate a bitmap, an inode block to initialize the new inode, a directory block to add the entry, and potentially the directory inode to update timestamps, directory size, etc. So 4 block size writes if there is no directory allocation. The journal adds one 32byte entry to a block size write that can commonly hold 512 entries if you can aggregate them. Another way to look at it is that you can create 512 files (ignoring directory block allocation) before you need to write the journal. 
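[Editorial aside: Jeff's arithmetic can be checked directly — a 16k filesystem block size is assumed here, matching the figure quoted in this thread.]

```shell
# 32-byte SU+J journal records vs rewriting whole metadata blocks.
record=32
block=$((16 * 1024))
echo $((block / record))   # 512: file creates that can share one
                           # block-sized journal write
echo $((4 * block))        # 65536: bytes (64k) of metadata blocks touched
                           # per create with full block journaling,
                           # versus a single 32-byte SU+J record
```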
With full metadata block journaling you're writing the full size of each of those 4 blocks every time a change is made. On a 16k block filesystem that's 64k vs our 32 bytes. We can accomplish this because the recovery operation is smart enough to parse all the metadata and see how far along the operation is. Softupdates orders writes well enough that we only have to handle a small number of failure cases and can make big assumptions about the state of the filesystem. If any of these assumptions are not true we still have a full fsck to fall back on. Thanks, Jeff > > > - Andrew > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 01:41:54 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3744C106566C for ; Wed, 23 Dec 2009 01:41:54 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f218.google.com (mail-fx0-f218.google.com [209.85.220.218]) by mx1.freebsd.org (Postfix) with ESMTP id C151B8FC18 for ; Wed, 23 Dec 2009 01:41:53 +0000 (UTC) Received: by fxm10 with SMTP id 10so3713978fxm.14 for ; Tue, 22 Dec 2009 17:41:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=ufcpjyeOSa+CNdHRrrK9wwIbv+Byq3e5F8m4Q5ieeQg=; b=nRo38Uaood4xHWJevPY6gNMzJ61XQH09Lk+6P6c4YU1LFfZcfaNvsDZHS60sgklC3r WJmyr7c4iKS8MZgIdj5UzP5GoDSXXtEtu6SdKr7bYFaWX/uTDw6U9ZebLdM0rnqymTm8 fMpD8Si7LLxLbEDyLcS3AjLDw4E1u2aEHtT7M= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to 
:cc:content-type:content-transfer-encoding; b=wGdPpBYIhELRKOudg3NGqsDNeU5053Y+czAVUSW5UH1CZFNrQDtbN3yQAj68nNmeHr PNQU54/RTNjc0jLxqq/6tCRoVHtRBUsXZLtQ8lq81dNyBnM64hCYavbq0ul+T5iE3/n0 I+CONNKHfXvRoaH8iNKYpG6LaIMj4vmhbu164= MIME-Version: 1.0 Received: by 10.239.168.138 with SMTP id k10mr1179575hbe.100.1261532512687; Tue, 22 Dec 2009 17:41:52 -0800 (PST) In-Reply-To: <4B315320.5050504@quip.cz> References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> Date: Tue, 22 Dec 2009 20:41:52 -0500 Message-ID: <5da0588e0912221741r48395defnd11e34728d2b7b97@mail.gmail.com> From: Rich To: Miroslav Lachman <000.fbsd@quip.cz> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 01:41:54 -0000 http://kerneltrap.org/mailarchive/freebsd-fs/2009/9/30/6457763 may be useful to you - it's what we did when we got stuck in a resilver loop. I recall being in the same state you're in right now at one point, and getting out of it from there. I think if you apply that patch, you'll be able to cancel the resilver, and then resilver again with the device you'd like to resilver with. - Rich On Tue, Dec 22, 2009 at 6:15 PM, Miroslav Lachman <000.fbsd@quip.cz> wrote: > Steven Schlansker wrote: >> >> As a corollary, you may notice some funky concat business going on. >> This is because I have drives which are very slightly different in size (< 1MB) >> and whenever one of them goes down and I bring the pool up, it helpfully >> (?) >> expands the pool by a whole megabyte then won't let the drive back in. >> This is extremely frustrating... is there any way to fix that?
I'm >> eventually going to keep expanding each of my drives one megabyte at a >> time >> using gconcat and space on another drive! Very frustrating... > > You can avoid it by partitioning the drives to the well known 'minimal' size > (size of smallest disk) and use the partition instead of raw disk. > For example ad12s1 instead of ad12 (if you create slices by fdisk) > or ad12p1 (if you create partitions by gpart) > > You can also use labels instead of device names. > > Miroslav Lachman > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > -- If you are over 80 years old and accompanied by your parents, we will cash your check. From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 09:12:48 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3C6D6106568B for ; Wed, 23 Dec 2009 09:12:48 +0000 (UTC) (envelope-from patpro@patpro.net) Received: from rack.patpro.net (rack.patpro.net [193.30.227.216]) by mx1.freebsd.org (Postfix) with ESMTP id BFC6C8FC08 for ; Wed, 23 Dec 2009 09:12:47 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by rack.patpro.net (Postfix) with ESMTP id 1E6DF14 for ; Wed, 23 Dec 2009 10:12:47 +0100 (CET) X-Virus-Scanned: amavisd-new at patpro.net Received: from amavis-at-patpro.net ([127.0.0.1]) by localhost (rack.patpro.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id V9KIKJsr6R+S for ; Wed, 23 Dec 2009 10:12:46 +0100 (CET) Received: from [IPv6:::1] (localhost [127.0.0.1]) by rack.patpro.net (Postfix) with ESMTP for ; Wed, 23 Dec 2009 10:12:46 +0100 (CET) Message-Id: <32CA2B73-3412-49DD-9401-4773CC73BED0@patpro.net> From: Patrick Proniewski To: freebsd-fs@freebsd.org Content-Type: multipart/signed; boundary=Apple-Mail-1--215476577; micalg=sha1;
protocol="application/pkcs7-signature" Mime-Version: 1.0 (Apple Message framework v936) Date: Wed, 23 Dec 2009 10:12:44 +0100 X-Mailer: Apple Mail (2.936) X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: snapshot implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 09:12:48 -0000 --Apple-Mail-1--215476577 Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Hello, I'm playing a little bit with FreeBSD snapshots (on UFS, FreeBSD 6.4), and I can't find the answer to one of my questions. If I understand correctly: - at time=0 a snapshot contains nothing but a bit/block map, and every entry in the block map is either a pointer to "not used" or "not copied". The pointer "not used" is used when the corresponding block on the live file system is empty. The pointer "not copied" is used when the corresponding block on the live FS has not changed since time=0. - at time>0 every non-empty block on the live FS that is about to be modified is first copied into the snapshot, and then the pending modification is committed. The pointer in the snapshot's block map changes from "not copied" to the address of the copied block in the snapshot. But what about empty blocks? I can't find any information about them. It seems logical to me that empty blocks receiving new data on the live FS will stay as pointers to "not used" in the snapshot, instead of pointing to an empty block that would be copied into the snapshot and grow its size. I've not found any piece of documentation that clarifies this. By the way, I'm also interested in ZFS: is the snapshot technology available in ZFS the same as the one available in UFS?
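For what it's worth, the copy-on-write bookkeeping described above can be mocked up in a few lines (a toy model only, not the actual UFS implementation; all names here are made up for illustration):

```python
# Toy model of a snapshot block map with "not used" / "not copied" markers.
NOT_USED, NOT_COPIED = "not used", "not copied"

live = {0: b"data0", 1: None, 2: b"data2"}          # None = empty block
snap = {blk: (NOT_USED if data is None else NOT_COPIED)
        for blk, data in live.items()}              # state at time=0

def write_block(blk, new_data):
    """Copy-on-write: preserve the old contents in the snapshot first."""
    if snap[blk] == NOT_COPIED:      # block held data at time=0
        snap[blk] = live[blk]        # copy old data into the snapshot
    # blocks marked NOT_USED were empty at time=0: nothing to preserve
    live[blk] = new_data

write_block(0, b"new0")   # old b"data0" is copied into the snapshot
write_block(1, b"new1")   # was empty: snapshot still says "not used"

print(snap[0])   # b'data0'
print(snap[1])   # not used
```

In this model the snapshot only ever grows by blocks that actually held data at snapshot time, which matches the intuition in the question: empty blocks stay "not used" forever.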
thanks, patpro --Apple-Mail-1--215476577-- From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 16:41:36 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 14C9A106568F for ; Wed, 23 Dec 2009 16:41:36 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id D25F88FC1E for ; Wed, 23 Dec 2009 16:41:35 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.13.8+Sun/8.13.8) with ESMTP id nBNGfYKA006843; Wed, 23 Dec 2009 10:41:34 -0600 (CST) Date: Wed, 23 Dec 2009 10:41:34 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Patrick Proniewski In-Reply-To: <32CA2B73-3412-49DD-9401-4773CC73BED0@patpro.net> Message-ID: References: <32CA2B73-3412-49DD-9401-4773CC73BED0@patpro.net> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Wed, 23 Dec 2009 10:41:35 -0600 (CST) Cc: freebsd-fs@freebsd.org Subject: Re: snapshot implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 16:41:36 -0000 On Wed, 23 Dec 2009, Patrick Proniewski wrote: > By the way, I'm also interested in ZFS: is the snapshot technology > available in ZFS the same as the one available in UFS? I don't know anything about snapshots in UFS, but snapshots in ZFS are certainly remarkably different. ZFS uses copy-on-write (COW) whenever a data block is updated and snapshot creation simply adds a new reference to existing blocks. 
The snapshot is made available as a (usually) hidden directory (/filesystem/.zfs/snapshot/snapname) which contains the complete filesystem content at the time the snapshot was taken. In my experience, ZFS snapshots usually take less than a second to complete. They are so efficient that some systems have snapshots scheduled to be taken every five minutes as a defense against user/application error. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 17:16:01 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 55E571065692 for ; Wed, 23 Dec 2009 17:16:01 +0000 (UTC) (envelope-from chreo@chreo.net) Received: from kontorsmtp2.one.com (kontorsmtp2.one.com [195.47.247.17]) by mx1.freebsd.org (Postfix) with ESMTP id 1E5498FC14 for ; Wed, 23 Dec 2009 17:16:00 +0000 (UTC) Received: from [10.0.0.12] (h-250-220.A218.priv.bahnhof.se [85.24.250.220]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by kontorsmtp2.one.com (Postfix) with ESMTP id A215026C0043B for ; Wed, 23 Dec 2009 17:00:06 +0000 (UTC) Message-ID: <4B324C95.5060601@chreo.net> Date: Wed, 23 Dec 2009 18:00:05 +0100 From: Chreo User-Agent: Thunderbird 2.0.0.23 (X11/20090817) MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Zpool on GELI halts during import X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 17:16:01 -0000 Hi there, I've a raidz zpool on 6 1,5TB vdevs running on GELI GEOMs. 
It had been performing fine until a few days ago, when all of a sudden it started to fail to import. This is on 8-STABLE. zpool import reports:

zpool import
  pool: Ocean
    id: 18338821095971722517
 state: ONLINE
status: One or more devices contains corrupted data.
action: The pool can be imported using its name or numeric identifier.
   see: http://www.sun.com/msg/ZFS-8000-4J
config:

        Ocean                 ONLINE
          raidz1              ONLINE
            label/Disk_1.eli  ONLINE
            label/Disk_2.eli  ONLINE
            label/Disk_3.eli  ONLINE
            label/Disk_4.eli  UNAVAIL  corrupted data
            label/Disk_5.eli  UNAVAIL  corrupted data
            label/Disk_6.eli  ONLINE

but doing the actual import simply causes the process to halt its progress (at least a ktrace does not reveal any activity). I've also tried doing the import using "zpool import -f -o failmode=continue Ocean", with the same result. Right, so my first thought was that this had been caused by the notorious uberblock issue. Unfortunately, doing "zdb -uuv -e Ocean" also stops responding (no disk activity noted after 1s of running the command). "zdb -l /dev/label/Disk_X.eli" reports identical data for all 6 drives (they only differ in the GUID, as expected). Now running on 7-stable instead (the system was upgraded a month ago) does not change the behaviour fundamentally: import still stops responding; however, "zdb -e Ocean" now causes a segfault, so it's not much of an "improvement". Any ideas as to how I could get this into an importable state, or find out what went backwards?
Cheers Christian Elmerot From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 20:26:38 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3B03F106566C for ; Wed, 23 Dec 2009 20:26:38 +0000 (UTC) (envelope-from solon@pyro.de) Received: from srv23.fsb.echelon.bnd.org (mail.pyro.de [83.137.99.96]) by mx1.freebsd.org (Postfix) with ESMTP id BA47C8FC1A for ; Wed, 23 Dec 2009 20:26:37 +0000 (UTC) Received: from port-87-193-183-44.static.qsc.de ([87.193.183.44] helo=MORDOR) by srv23.fsb.echelon.bnd.org with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1NNXmy-0003h3-Jq for freebsd-fs@freebsd.org; Wed, 23 Dec 2009 21:26:36 +0100 Date: Wed, 23 Dec 2009 21:26:12 +0100 From: Solon Lutz X-Mailer: The Bat! (v3.99.25) Professional Organization: pyro.labs berlin X-Priority: 3 (Normal) Message-ID: <1696529130.20091223212612@pyro.de> CC: freebsd-fs@freebsd.org In-Reply-To: <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-15 Content-Transfer-Encoding: quoted-printable X-Spam-Score: 0.1 (/) X-Spam-Report: Spam detection software, running on the system "srv23.fsb.echelon.bnd.org", has identified this incoming email as possible spam. The original message has been attached to this so you can view it (if it isn't spam) or label similar future email. If you have any questions, see The administrator of that system for details. Content preview: Hi, I opted for two 12-disc raidz2. Reasons were: Space is more important than performance. But performance is very poor - have a look at the iostats - sometimes nothing really seems to happen for up to ten seconds, or very little data gets written. 
Might this be a problem of the amd64 system having only 4GB of RAM? Any tuneable sysctls? Enabling prefetch didn't help...: [...] Content analysis details: (0.1 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -1.4 ALL_TRUSTED Passed through trusted hosts only via SMTP 1.6 MISSING_HEADERS Missing To: header X-Spam-Flag: NO Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 20:26:38 -0000 Hi, I opted for two 12-disc raidz2. Reasons were: Space is more important than performance. But performance is very poor - have a look at the iostats - sometimes nothing really seems to happen for up to ten seconds, or very little data gets written. Might this be a problem of the amd64 system having only 4GB of RAM? Any tuneable sysctls?
Enabling = prefetch didn't help...: capacity operations bandwidth capac= ity operations bandwidth pool used avail read write read write pool used a= vail read write read write ---------- ----- ----- ----- ----- ----- ----- ---------- ----- -= ---- ----- ----- ----- ----- backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 1.23K 0 153M backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 510 backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 510 backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 835 0 104M backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 645 0 68.7M backup-7 1.23T 130G 0 0 0 0=09store-1 616G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 474 0 59.3M 0=09store-1 616G 1= 5.6T 0 4 0 5.49K backup-7 1.23T 130G 494 0 61.8M 0=09store-1 616G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 53 0 6.73M 0=09store-1 617G 1= 5.6T 0 19 0 39.4K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 30 0 140K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 33 0 72.8K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 65 0 162K backup-7 1.23T 130G 71 0 8.92M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 636 0 79.4M 0=09store-1 617G 1= 5.6T 0 4 0 10.5K backup-7 1.23T 130G 485 0 60.6M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 85 0 10.7M 0=09store-1 617G 1= 5.6T 0 15 0 39.9K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 17 0 40.4K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 31 0 73.3K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 62 0 156K backup-7 1.23T 130G 17 0 
2.24M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 189 0 23.6M 0=09store-1 617G 1= 5.6T 0 9 0 14.0K backup-7 1.23T 130G 2 74 133K 247K=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 12 0 17.5K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 1 0 255K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 100 0 12.6M backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 1012 0 126M backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 33 0 137K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 19 0 293K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 153 0 19.2M backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 109 0 13.7M backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 4 0 15.5K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 1.54K 0 188M backup-7 1.23T 130G 6 0 888K 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 556 0 69.5M 0=09store-1 617G 1= 5.6T 0 8 0 11.0K backup-7 1.23T 130G 500 0 62.5M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 5 0 766K 0=09store-1 617G 1= 5.6T 0 16 0 30.4K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 29 0 136K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 38 0 84.7K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 57 0 145K backup-7 1.23T 130G 92 0 11.6M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 497 0 62.1M 0=09store-1 617G 1= 5.6T 0 8 0 9.48K backup-7 1.23T 130G 534 0 66.7M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 153 0 19.2M 0=09store-1 617G 1= 5.6T 0 12 0 12.0K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 
0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 26 0 101K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 34 0 76.3K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 62 0 155K backup-7 1.23T 130G 75 0 9.48M 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 690 0 86.3M 0=09store-1 617G 1= 5.6T 0 6 0 12.0K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 10 0 14.0K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 1 0 255K backup-7 1.23T 130G 0 0 0 0=09store-1 617G 1= 5.6T 0 0 0 0 From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 20:52:06 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D6947106568B for ; Wed, 23 Dec 2009 20:52:06 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f227.google.com (mail-fx0-f227.google.com [209.85.220.227]) by mx1.freebsd.org (Postfix) with ESMTP id 679768FC22 for ; Wed, 23 Dec 2009 20:52:06 +0000 (UTC) Received: by fxm27 with SMTP id 27so7891171fxm.3 for ; Wed, 23 Dec 2009 12:52:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=+7TgeAFrw5DFDQbg3G9m2XPXgWIiLEgDe/2vs79450Q=; b=RD1BCdzM/0YOx/OgOEhu+UGODGMlqGIizEwdr15p+ZBDXRVhK9I0Rb/SDsN2QpHtJ5 FYZL0gA/EAy+HPnJpKeGjnlSJSqiFPkNfc57TL5pu3ZC6xBM/d6s+qQaN3Y31HiC6jzv V0LUACn9azfSZae8l4FMKb4XFPvb/ogJ22FME= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=WOUoS6BbduNLTutdMvRabq2vlPHGQAtxrRz0xPlsGuLQ3Eme7ridx+EvijwG+7xDRI FdO6ikMZRKPIjtKX7dWBNOQStPLNEo39TZxZvccrMVDkSnCEpGSVpjZ2+et65uiFXwHc /W+XnyIznuxMKpK3o4riDJPES61mZveVtXviw= MIME-Version: 1.0 
Received: by 10.239.139.154 with SMTP id t26mr1228151hbt.74.1261601524119; Wed, 23 Dec 2009 12:52:04 -0800 (PST) In-Reply-To: <1696529130.20091223212612@pyro.de> References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> Date: Wed, 23 Dec 2009 15:52:03 -0500 Message-ID: <5da0588e0912231252m36d9942bj6d288e17387c2a24@mail.gmail.com> From: Rich To: Solon Lutz Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 20:52:06 -0000 I've presently seen some traffic flying around about ZFS performance being sub-disk-saturating on FBSD, and a hack solution of twiddling the ZFS thread priority. You might want to look into that. 
- Rich From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 20:56:22 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BB1E11065696 for ; Wed, 23 Dec 2009 20:56:22 +0000 (UTC) (envelope-from bp@barryp.org) Received: from itasca.hexavalent.net (itasca.hexavalent.net [67.207.138.180]) by mx1.freebsd.org (Postfix) with ESMTP id 8276E8FC0A for ; Wed, 23 Dec 2009 20:56:22 +0000 (UTC) Received: from barryp.org (host-145-114-107-208.midco.net [208.107.114.145]) by itasca.hexavalent.net (Postfix) with ESMTPS id EAD4123C5FA for ; Wed, 23 Dec 2009 14:56:21 -0600 (CST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=barryp.org; s=itasca; t=1261601782; bh=S8JU1Zu0fDKjSN2CUBRzLOrr8L+6GAxCSBFs2d+sgvc=; h=Message-ID:Date:From:MIME-Version:To:CC:Subject:References: In-Reply-To:Content-Type:Content-Transfer-Encoding; b=Uy5YzZvXhG4b RZJh2KlBPxBKTrTWROV36GMxh/pkEUMFrmvJLKUJanE7/6FiWtk5JHBqF+B09flFcaC 37uFhUNdvvR1ZTWeGdMo5eWFvkhcWdDM6CB6CjTbYUQ9Gtf65N8Zi65vGYmLnu4GPEi kVa4I63RmbXhSZ2ezj+Aha0lg= Received: from octane.med.und.nodak.edu ([134.129.166.23]) by barryp.org with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.67 (FreeBSD)) (envelope-from ) id 1NNYFn-000G70-BX; Wed, 23 Dec 2009 14:56:19 -0600 Message-ID: <4B3283F2.7060804@barryp.org> Date: Wed, 23 Dec 2009 14:56:18 -0600 From: Barry Pederson User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.5) Gecko/20091204 Thunderbird/3.0 MIME-Version: 1.0 To: Bob Friesenhahn References: <32CA2B73-3412-49DD-9401-4773CC73BED0@patpro.net> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Patrick Proniewski Subject: Re: snapshot implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
X-List-Received-Date: Wed, 23 Dec 2009 20:56:22 -0000 On 12/23/09 10:41 AM, Bob Friesenhahn wrote: > On Wed, 23 Dec 2009, Patrick Proniewski wrote: > >> By the way, I'm also interested in ZFS: is the snapshot technology >> available in ZFS the same as the one available in UFS? > > I don't know anything about snapshots in UFS, but snapshots in ZFS are > certainly remarkably different. ZFS uses copy-on-write (COW) whenever a > data block is updated and snapshot creation simply adds a new reference > to existing blocks. The snapshot is made available as a (usually) hidden > directory (/filesystem/.zfs/snapshot/snapname) which contains the > complete filesystem content at the time the snapshot was taken. In my > experience, ZFS snapshots usually take less than a second to complete. > They are so efficient that some systems have snapshots scheduled to be > taken every five minutes as a defense against user/application error. I always liked this quote from this writeup on ZFS: http://www.sun.com/bigadmin/features/articles/zfs_part2_ease.jsp "...there's virtually no overhead at all due to the copy-on-write architecture. In fact, sometimes it is faster to take a snapshot rather than free the blocks containing the old data!" That's certainly not the case with UFS snapshots, which can take a long time to complete (we're talking freezing your machine's disk activity for many minutes), and are limited to 20 total. 
Barry From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 21:04:00 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C4F371065692 for ; Wed, 23 Dec 2009 21:04:00 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 79D788FC27 for ; Wed, 23 Dec 2009 21:04:00 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.13.8+Sun/8.13.8) with ESMTP id nBNL3xSN008615; Wed, 23 Dec 2009 15:03:59 -0600 (CST) Date: Wed, 23 Dec 2009 15:03:59 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Solon Lutz In-Reply-To: <1696529130.20091223212612@pyro.de> Message-ID: References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Wed, 23 Dec 2009 15:03:59 -0600 (CST) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 21:04:00 -0000 On Wed, 23 Dec 2009, Solon Lutz wrote: > I opted for two 12-disc raidz2. > Reasons were: Space is more important than performance. > > But performance is very poor - have a look at the iostats - sometimes nothing really seems > to happen for up to ten seconds, or very little data gets written. 
Might this be a problem > of the amd64 system having only 4GB of RAM? Any tuneable sysctls? Enabling prefetch didn't help...: The most likely cause is that several of your disks are not performing properly. Perhaps a disk is performing error recovery, or there is a bad cable, or maybe even too much chassis vibration. It only takes one pokey disk in a vdev to slow down the whole vdev. Using 'iostat -x 10' while your pool is under continuous I/O load may help find the pokey disks. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Dec 23 21:30:03 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 98D7D10656A5 for ; Wed, 23 Dec 2009 21:30:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 7C0F98FC1F for ; Wed, 23 Dec 2009 21:30:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nBNLU3mZ089266 for ; Wed, 23 Dec 2009 21:30:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nBNLU3tk089261; Wed, 23 Dec 2009 21:30:03 GMT (envelope-from gnats) Date: Wed, 23 Dec 2009 21:30:03 GMT Message-Id: <200912232130.nBNLU3tk089261@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Rusty Nejdl Cc: Subject: Re: kern/132960: [ufs] [panic] panic:ffs_blkfree: freeing free frag X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Rusty Nejdl List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Dec 2009 21:30:03 -0000 The following reply was
made to PR kern/132960; it has been noted by GNATS. From: Rusty Nejdl To: bug-followup@FreeBSD.org, kevinxlinuz@163.com Cc: Subject: Re: kern/132960: [ufs] [panic] panic:ffs_blkfree: freeing free frag Date: Wed, 23 Dec 2009 14:59:11 -0600 --00163649a335302443047b6b989d Content-Type: text/plain; charset=ISO-8859-1 I just saw this on AMD64 on FreeBSD 8.0. I'm kind of surprised to see this still lingering around. Should I continue this problem report or submit a new one? Rusty Nejdl
--00163649a335302443047b6b989d-- From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 00:29:09 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CAEF9106566B for ; Thu, 24 Dec 2009 00:29:09 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-yx0-f171.google.com (mail-yx0-f171.google.com [209.85.210.171]) by mx1.freebsd.org (Postfix) with ESMTP id 80E0A8FC1C for ; Thu, 24 Dec 2009 00:29:09 +0000 (UTC) Received: by yxe1 with SMTP id 1so7348230yxe.3 for ; Wed, 23 Dec 2009 16:29:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:content-type:mime-version :subject:from:in-reply-to:date:content-transfer-encoding:message-id :references:to:x-mailer; bh=qxiz8piwU4AXsSUz9nkSn188i9WfXTnqjEpdFK9vVxo=; b=k3qNxNdSJgZhwncfGvVTs/jhdC+r0DjOGOZFpGmvSxTvwJjg+840TPqXl1tLGJ8EqY zb8Iv2+k7Ku+NaC/2+B2QLrKPShpkFa8nJvTaUN+TjPD1e6xmO/0Ccb5slucmZ4uKbg9 ihyOX1CPr+Amdz/tmy+/7iru2G1UDLET0S7qQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=content-type:mime-version:subject:from:in-reply-to:date :content-transfer-encoding:message-id:references:to:x-mailer; b=oQCwbbp6HSbsUz4uxc4ZuXLccSL41tv9llvDV6o1HOxn3QXJvYLKCRl2txdFmu1etN EU4NQabHULzJy6BWrp82JCgioDGr4NFhrvhMWKXDXQRTTUP/7sexkszCw9X5/GOOTo5g UJlK42E02YhdfQUiop2oSrIja6OKiaHNV5Ao8= Received: by 10.101.6.13 with SMTP id j13mr16688396ani.128.1261614548894; Wed, 23 Dec 2009 16:29:08 -0800 (PST) Received: from 68-29-245-15.pools.spcsdns.net (68-29-245-15.pools.spcsdns.net [68.29.245.15]) by mx.google.com with ESMTPS id 20sm3286119yxe.2.2009.12.23.16.29.06 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 23 Dec 2009 16:29:07 -0800 (PST) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Apple Message framework v1077) From: Steven Schlansker In-Reply-To: <4B315320.5050504@quip.cz> Date: Wed, 23 Dec 2009 16:29:02 -0800 
Content-Transfer-Encoding: quoted-printable Message-Id: References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1077) Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 00:29:09 -0000 On Dec 22, 2009, at 3:15 PM, Miroslav Lachman wrote: > Steven Schlansker wrote: >> As a corollary, you may notice some funky concat business going on. >> This is because I have drives which are very slightly different in size (< 1MB) >> and whenever one of them goes down and I bring the pool up, it helpfully (?) >> expands the pool by a whole megabyte then won't let the drive back in. >> This is extremely frustrating... is there any way to fix that? I'm >> eventually going to keep expanding each of my drives one megabyte at a time >> using gconcat and space on another drive! Very frustrating... > > You can avoid it by partitioning the drives to the well known 'minimal' size (size of smallest disk) and use the partition instead of raw disk. > For example ad12s1 instead of ad12 (if you create slices by fdisk) > or ad12p1 (if you create partitions by gpart) Yes, this makes sense. Unfortunately, I didn't do this when I first made the array, as the documentation says you should use whole disks so that it can enable the write cache, which I took to mean you shouldn't use a partition table.
And = now there's no way to fix it after the fact, as you can't shrink a zpool even by a = single MB :( From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 00:32:55 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EEA2D106568F for ; Thu, 24 Dec 2009 00:32:54 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f227.google.com (mail-fx0-f227.google.com [209.85.220.227]) by mx1.freebsd.org (Postfix) with ESMTP id 817418FC15 for ; Thu, 24 Dec 2009 00:32:54 +0000 (UTC) Received: by fxm27 with SMTP id 27so8000145fxm.3 for ; Wed, 23 Dec 2009 16:32:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=EdBCzSXbFo0zb5Xlv+fu8rVnmM7O/pnvNb+sFHbXy44=; b=G6AfsYY15XFp3TJmAHQ7/DcS5gOYlDtiWDP5lGS8YnZn6vG6NK4iCQbaraYFjSWSZw Iu1u19YVALuoHzVPKF0KJowQ80dakfvvQbDO8NEU735K+zi4Xe8ue8n1K/wlmA6Q2pVV ZLtoRczM6eqvxQqeBrcU72e8GoQs62Kj1+Luw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=Cw6Xp/clWuChpRNiY+Bt08zQqK7cio0Y775fnU4OMjist35noCHSprvHgvX29Dz1TO l/yGDwEtalhI2pMyhso6mk+xpCivmcJTAd0UmrQjWo8XWgsZkTxF67QVNkdRbNxI6uRx CzkykXCMhDoXvLJL3ZRsYkmfkaIamiD75n3to= MIME-Version: 1.0 Received: by 10.239.183.23 with SMTP id s23mr1187491hbg.56.1261614773216; Wed, 23 Dec 2009 16:32:53 -0800 (PST) In-Reply-To: References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> Date: Wed, 23 Dec 2009 19:32:53 -0500 Message-ID: <5da0588e0912231632v14b5dfcdrc913a9deeac9e38a@mail.gmail.com> From: Rich To: Steven Schlansker Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org 
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)

That's fascinating - I'd swear it used to be the case (in Solaris-land, at
least) that resilvering with a smaller vdev resulted in it shrinking the
available space on the other vdevs, as though they were all as large as the
smallest vdev available. In particular, I'd swear I've done this with some
disk arrays I have lying around with 7x removable SCA drives, which I have
in 2, 4.5, 9, and 18 GB varieties... But maybe I'm just hallucinating, or
this went away a long time ago. (This was circa b70 in Solaris.)

I know you can't do this in FreeBSD; I've also run into the "insufficient
space" problem when trying to replace with a smaller vdev.

- Rich

On Wed, Dec 23, 2009 at 7:29 PM, Steven Schlansker wrote:
> [earlier discussion of partitioning drives to a fixed 'minimal' size snipped]
>
> Yes, this makes sense. Unfortunately, I didn't do this when I first made
> the array, as the documentation says you should use whole disks so that it
> can enable the write cache, which I took to mean you shouldn't use a
> partition table. And now there's no way to fix it after the fact, as you
> can't shrink a zpool even by a single MB :(

--
[We] use bad software and bad machines for the wrong things. -- R. W. Hamming

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 00:36:06 2009
From: Steven Schlansker <stevenschlansker@gmail.com>
Date: Wed, 23 Dec 2009 16:36:00 -0800
Message-Id: <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)

On Dec 22, 2009, at 5:41 PM, Rich wrote:

> http://kerneltrap.org/mailarchive/freebsd-fs/2009/9/30/6457763 may be
> useful to you - it's what we did when we got stuck in a resilver loop.
> I recall being in the same state you're in right now at one point, and
> getting out of it from there.
>
> I think if you apply that patch, you'll be able to cancel the
> resilver, and then resilver again with the device you'd like to
> resilver with.

Thanks for the suggestion, but the problem isn't that it's stuck in a
resilver loop (which is what the patch seems to try to avoid) but that I
can't detach a drive.
Now I got clever and fudged a label onto the new drive (copied the first
50MB of one of the dying drives), ran a scrub, and have this layout -

  pool: universe
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 20h58m with 0 errors on Wed Dec 23 11:36:43 2009
config:

        NAME                       STATE     READ WRITE CKSUM
        universe                   DEGRADED     0     0     0
          raidz2                   DEGRADED     0     0     0
            ad16                   ONLINE       0     0     0
            replacing              DEGRADED     0     0 40.7M
              ad26                 ONLINE       0     0     0  506G repaired
              6170688083648327969  UNAVAIL      0 88.7M     0  was /dev/ad12
            ad8                    ONLINE       0     0     0
            concat/back2           ONLINE       0     0     0
            ad10                   ONLINE       0     0     0
            concat/ad4ex           ONLINE       0     0     0
            ad24                   ONLINE       0     0     0
            concat/ad6ex           ONLINE      48     0     0  28.5K repaired

Why has the replacing vdev not gone away? I still can't detach -

[steven@universe:~]% sudo zpool detach universe 6170688083648327969
cannot detach 6170688083648327969: no valid replicas

even though now there actually is a valid replica (ad26).

Additionally, running zpool clear hangs permanently and in fact freezes all
IO to the pool. Since I've mounted /usr from the pool, this is effectively
death to the system. Any other zfs commands seem to work okay (zpool scrub,
zfs mount, etc.). Just clear is insta-death. I can't help but suspect that
this is caused by the now nonsensical vdev configuration (replacing with
one good drive and one nonexistent one)...

Any further thoughts? Thanks,
Steven

> - Rich
>
> On Tue, Dec 22, 2009 at 6:15 PM, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>> [quoted advice about partitioning to the smallest disk's size snipped]
>>
>> You can also use labels instead of device name.
>>
>> Miroslav Lachman
>
> --
> If you are over 80 years old and accompanied by your parents, we will
> cash your check.
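Miroslav's partitioning advice, spelled out as a command sketch. This is a
hedged illustration, not from the thread: the device name, GPT scheme, label,
and the fixed partition size are examples. The idea is simply to give every
disk a partition of the same fixed size, slightly below the capacity of the
smallest drive, so a replacement disk that comes up a megabyte short still
fits:

```sh
# Sketch only: ad12, "disk12", and 931G are illustrative values.
# Create a partition table and a fixed-size ZFS partition on the new disk.
gpart create -s gpt ad12
gpart add -t freebsd-zfs -s 931G -l disk12 ad12

# Then replace into the pool using the partition (or its GPT label)
# instead of the raw disk:
zpool replace universe 6170688083648327969 ad12p1
```

Pools built this way give up ZFS's whole-disk write-cache handling, but every
member has an identical size, so the "pool grew by 1MB" trap can't occur.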
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 00:44:09 2009
From: Rich <rincebrain@gmail.com>
Date: Wed, 23 Dec 2009 19:44:07 -0500
Message-ID: <5da0588e0912231644w2a7afb9dg41ceffbafc8c2df6@mail.gmail.com>
To: Steven Schlansker
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)

Export then import, perhaps?

I don't honestly know what to suggest - there are horrid workarounds you
could do involving manually diddling the metadata state, but I feel like
the correct solution is to open up a bug report and get a fix put in.

- Rich

On Wed, Dec 23, 2009 at 7:36 PM, Steven Schlansker wrote:
> [previous message, including the degraded 'zpool status' output and the
> failing 'zpool detach', quoted in full; snipped]

--
Forest fires cause Smokey Bears.
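Rich's export/import suggestion, as a command sketch. The pool name comes
from the thread; whether a fresh import actually drops the stuck
"replacing" vdev is exactly what is in question here, so treat this as a
guess rather than a fix:

```sh
# Force ZFS to re-read the pool configuration from the disks' labels.
zpool export universe
zpool import universe

# If device nodes have been renumbered since the export, point the
# import scan at the device directory explicitly:
zpool import -d /dev universe
```

Export/import is the usual first resort for stale vdev state because it is
non-destructive; the on-disk data is untouched either way.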
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 00:57:22 2009
From: Steven Schlansker <stevenschlansker@gmail.com>
Date: Wed, 23 Dec 2009 16:57:12 -0800
Message-Id: <409922ED-42D5-4892-B74D-D2E696846AFB@gmail.com>
To: Rich
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)

On Dec 23, 2009, at 4:44 PM, Rich wrote:

> Export then import, perhaps?

Sadly, exporting seems to write the state of this funky "replacing" device
out along with everything else, so importing just brings it back. Granted,
I haven't tried this in the new state with the newly resilvered drive, but
I don't hold out much hope. I'll try it next time I'm physically with my
server, as I now have to reboot it anyway (sadness!)

> I don't honestly know what to suggest - there are horrid workarounds
> you could do involving manually diddling the metadata state, but I
> feel like the correct solution is to open up a bug report and get a
> fix put in.

I'd be more willing to try hackish workarounds if I either didn't care
about the data or had proper backups... but I don't, so I'm rather worried
about trashing the pool. It's so hard to back up 6TB of data on a college
kid's budget! :(

I'd file a PR, but the last three times I've filed things on the FreeBSD
bug tracker, they've gone largely ignored. One from two years ago is still
open, and one from last summer hasn't even been replied to... so I've
rather given up on it.
> - Rich
>
> On Wed, Dec 23, 2009 at 7:36 PM, Steven Schlansker wrote:
>> [previous message, including the degraded 'zpool status' output and the
>> failing 'zpool detach', quoted in full; snipped]

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 01:02:33 2009
From: Rich <rincebrain@gmail.com>
Date: Wed, 23 Dec 2009 20:02:31 -0500
Message-ID: <5da0588e0912231702v3e72e121j75ac077831723a15@mail.gmail.com>
To: Steven Schlansker
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)

I mean, the report in that link I sent you was A) mine and B) on 13TB of
data I couldn't lose, so I can sympathize with the problem...

A dirty solution might be to see if Solaris does the right thing with
ejecting the bad drive.
- Rich

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 01:02:37 2009
From: Steven Schlansker <stevenschlansker@gmail.com>
Date: Wed, 23 Dec 2009 17:02:31 -0800
Message-Id: <36133DA6-C26B-4B1B-B3E1-DBB714232F59@gmail.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)

On Dec 23, 2009, at 4:32 PM, Rich wrote:

> That's fascinating - I'd swear it used to be the case (in
> Solaris-land, at least) that resilvering with a smaller vdev resulted
> in it shrinking the available space on other vdevs as though they were
> all as large as the smallest vdev available.

Pretty sure that this doesn't exist for raidz. I haven't tried, though,
and Sun's bug database's search blows chunks. I remember seeing a bug
filed on it before, but I can't for the life of me find it.

> In particular, I'd swear I've done this with some disk arrays I have
> lying around with 7x removable SCA drives, which I have in 2, 4.5, 9,
> and 18 GB varieties...
>
> But maybe I'm just hallucinating, or this went away a long time ago.
> (This was circa b70 in Solaris.)

Shrinking of mirrored drives seems like it might work. Again, Sun's bug
database isn't clear at all about what can and can't be shrunk - maybe I
should get a Solaris boot disk and see if I can shrink it from there...

> I know you can't do this in FreeBSD; I've also run into the
> "insufficient space" problem when trying to replace with a smaller
> vdev.
>
> - Rich
>
> On Wed, Dec 23, 2009 at 7:29 PM, Steven Schlansker
> wrote:
>>
>> On Dec 22, 2009, at 3:15 PM, Miroslav Lachman wrote:
>>
>>> Steven Schlansker wrote:
>>>> As a corollary, you may notice some funky concat business going on.
>>>> This is because I have drives which are very slightly different in size (< 1MB)
>>>> and whenever one of them goes down and I bring the pool up, it helpfully (?)
>>>> expands the pool by a whole megabyte then won't let the drive back in.
>>>> This is extremely frustrating... is there any way to fix that? I'm
>>>> eventually going to keep expanding each of my drives one megabyte at a time
>>>> using gconcat and space on another drive! Very frustrating...
>>>
>>> You can avoid it by partitioning the drives to the well known 'minimal' size (size of the smallest disk) and using the partition instead of the raw disk.
>>> For example ad12s1 instead of ad12 (if you create slices by fdisk)
>>> or ad12p1 (if you create partitions by gpart)
>>
>>
>> Yes, this makes sense. Unfortunately, I didn't do this when I first made the array,
>> as the documentation says you should use whole disks so that it can enable the write
>> cache, which I took to mean you shouldn't use a partition table. And now there's no
>> way to fix it after the fact, as you can't shrink a zpool even by a single
>> MB :(
>>
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>
>
>
> --
>
> [We] use bad software and bad machines for the wrong things. -- R. W.
= Hamming From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 01:04:41 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 354351065679 for ; Thu, 24 Dec 2009 01:04:41 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-gx0-f218.google.com (mail-gx0-f218.google.com [209.85.217.218]) by mx1.freebsd.org (Postfix) with ESMTP id DB99C8FC0C for ; Thu, 24 Dec 2009 01:04:40 +0000 (UTC) Received: by gxk10 with SMTP id 10so7472213gxk.3 for ; Wed, 23 Dec 2009 17:04:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:subject:mime-version :content-type:from:in-reply-to:date:cc:content-transfer-encoding :message-id:references:to:x-mailer; bh=8lGDnMN305hUZn9bNUYCBpqWyhBduWwa5YeeQJNly9Y=; b=T9mkO8k6IRr1cE8FzuYlfCCm2YY39MDmBMwMGdHor5kk1tPigoZq8C5PjIZbH0SaEF eklNBn/kHlMRkmxE4sUIPmr1NRK6WjXwC8vEFNfn9Rs5RLQwWpM8IKOiz/zYMwJbFjDF YYNKa+ye0sf/SWMpPvbQZpQ+9HQYmhMNysLcc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; b=YSQeFaerXqi9kCgHSAzhmx+of+ZXwhiIUT/1uE6+HpT1Xcg5g79ypm5R1ekGR5g9Tp sVxAniE60oRfE0bJ2Gsw4r5V2BDl5Qf5lLcT8vIADzJnny+iKd4NM5O8rOdFG6fvN7sR zLfTSKee8j9+BOumPHxMFLXpYTJuR1GIuaUEU= Received: by 10.101.7.35 with SMTP id k35mr16877098ani.179.1261616674274; Wed, 23 Dec 2009 17:04:34 -0800 (PST) Received: from 68-29-245-15.pools.spcsdns.net (68-29-245-15.pools.spcsdns.net [68.29.245.15]) by mx.google.com with ESMTPS id 7sm3456555yxd.44.2009.12.23.17.04.32 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 23 Dec 2009 17:04:33 -0800 (PST) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=us-ascii From: Steven Schlansker In-Reply-To: <5da0588e0912231702v3e72e121j75ac077831723a15@mail.gmail.com> Date: Wed, 23 Dec 
2009 17:04:30 -0800 Content-Transfer-Encoding: 7bit Message-Id: References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> <5da0588e0912221741r48395defnd11e34728d2b7b97@mail.gmail.com> <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com> <5da0588e0912231644w2a7afb9dg41ceffbafc8c2df6@mail.gmail.com> <409922ED-42D5-4892-B74D-D2E696846AFB@gmail.com> <5da0588e0912231702v3e72e121j75ac077831723a15@mail.gmail.com> To: Rich X-Mailer: Apple Mail (2.1077) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 01:04:41 -0000 On Dec 23, 2009, at 5:02 PM, Rich wrote: > I mean, the report in that link I sent you was A) mine and B) on 13TB > of data I couldn't lose, so I can sympathize with the problem... > > A dirty solution might be to see if Solaris does the right thing with > ejecting the bad drive. That's not a bad plan. I've got a Solaris USB stick to boot off of, but it'll be a few days at least before I am within USB range of the server... :-p I do wish commodity hardware had remote reset functionality! Having a server crash remotely is so frustrating... 
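[Editor's note] The "partition to a well-known minimal size, then build the pool from partitions" workaround Miroslav describes earlier in this thread can be sketched as below. This is a minimal dry-run sketch, not taken from any message here: the device names (ad12..ad18), the pool name `tank`, and the 930G size are assumptions, and the script only prints the commands it would run rather than executing them.

```shell
# Print (dry run) the gpart commands that would carve one fixed-size
# freebsd-zfs partition per disk. Any future replacement disk at least
# PART_SIZE large is then guaranteed to fit, even if its raw capacity
# differs by a megabyte or two. Names and sizes below are illustrative.
print_partition_plan() {
  size="$1"; shift
  for disk in "$@"; do
    echo "gpart create -s gpt ${disk}"
    echo "gpart add -t freebsd-zfs -s ${size} ${disk}"
  done
}

# Pick a size safely below the smallest disk you ever expect to use,
# e.g. 930G for nominal 1 TB drives.
print_partition_plan 930G ad12 ad14 ad16 ad18
echo "zpool create tank raidz2 ad12p1 ad14p1 ad16p1 ad18p1"
```

When the printed plan looks right, the commands can be run by hand (or the `echo`s dropped). On releases without GPT support, fdisk slices (ad12s1) serve the same purpose, as Miroslav notes.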
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 01:06:14 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B87A71065670 for ; Thu, 24 Dec 2009 01:06:14 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f227.google.com (mail-fx0-f227.google.com [209.85.220.227]) by mx1.freebsd.org (Postfix) with ESMTP id 494028FC0C for ; Thu, 24 Dec 2009 01:06:13 +0000 (UTC) Received: by fxm27 with SMTP id 27so8011565fxm.3 for ; Wed, 23 Dec 2009 17:06:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=EZEH3OgA41AFPUqbFMPzaxDZxA8mEHUDZMjQFd5/bGQ=; b=RYUOW+Q3j4/WeyO8Lhjw9sNfYbLBq3taBEThlwRTiw286TaXkGo5O3yPtTn+NVIeYs K+dOuIrjUc5yO6LZw5XhQKxSiRc83BrvevbTqtkMxsgzKVCCl1kLL7srZzcipgATBZ1Z 5KLsdpHwyTlDO4HHCiHfrKo4ruUKtgSu1FdDU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=G+PVR8WWWuhiY50n2SoPOpmWFUeUb/X+PM08NGQMHUsNoL/gQfbY3RHGyqE3Ji5DIp +2Tz82TVMD+hzdemrJf70Sj0RZP8371qLFf/ebpRChGwY9VYgfjcmu9wlkgyl8qRCx3Q Y1AZSP218ZB0pVSmuSSFSoOWjOtrmFDD3tMW0= MIME-Version: 1.0 Received: by 10.239.197.142 with SMTP id z14mr1167581hbi.213.1261616773080; Wed, 23 Dec 2009 17:06:13 -0800 (PST) In-Reply-To: References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> <5da0588e0912221741r48395defnd11e34728d2b7b97@mail.gmail.com> <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com> <5da0588e0912231644w2a7afb9dg41ceffbafc8c2df6@mail.gmail.com> <409922ED-42D5-4892-B74D-D2E696846AFB@gmail.com> <5da0588e0912231702v3e72e121j75ac077831723a15@mail.gmail.com> Date: Wed, 23 Dec 2009 20:06:13 -0500 Message-ID: <5da0588e0912231706l1e188ebdpca83d907fb694773@mail.gmail.com> From: Rich To: Steven 
Schlansker Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 01:06:14 -0000 IPMI is a great thing, let me tell you. :) Also, Linux's magic sysrq has saved me several times. - Rich From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 01:26:05 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 76E08106566B for ; Thu, 24 Dec 2009 01:26:05 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-yw0-f172.google.com (mail-yw0-f172.google.com [209.85.211.172]) by mx1.freebsd.org (Postfix) with ESMTP id 267D48FC0C for ; Thu, 24 Dec 2009 01:26:04 +0000 (UTC) Received: by ywh2 with SMTP id 2so8036425ywh.27 for ; Wed, 23 Dec 2009 17:26:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:subject:mime-version :content-type:from:in-reply-to:date:cc:content-transfer-encoding :message-id:references:to:x-mailer; bh=iEvgVpVoSUTGj4L5DXYrSZORVmfWoTnL1AND1sVlumo=; b=dxYu1YibyuM7Yig0wPsNLcmb38L6WEipWmDSCtszyq8EHDFiBVASYA/lUkRecZZe3l u1RxkSDl/Npdl+/3g418m4EXUfc9p57TJ5p8NJ2smNXlt2NuLwO2nstvKHTqltfYdBoU R7k4APmOpDvHnhZI9OlnZaWRbhdBoNXdc+gEM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; b=iNFA4Wfw9CLW70jyRrwS06k5Ssd8Urb4XhDf6sc71F1rv7jt4CZfi68DJqASRSK9Vg qHWU60PrIge3Gie+sf4fpivcVMxVw2Yy0xvoJTVPlGORynFM5oLC+yup4vI8d+26TmRG RpoGMBjjNegdss//eqWhM6EHoSF//pKdNNHlY= Received: by 10.150.107.28 with SMTP id f28mr16796083ybc.57.1261617964374; Wed, 23 
Dec 2009 17:26:04 -0800 (PST) Received: from 68-29-245-15.pools.spcsdns.net (68-29-245-15.pools.spcsdns.net [68.29.245.15]) by mx.google.com with ESMTPS id 9sm3470731yxf.59.2009.12.23.17.25.58 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 23 Dec 2009 17:26:03 -0800 (PST) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=us-ascii From: Steven Schlansker In-Reply-To: <5da0588e0912231706l1e188ebdpca83d907fb694773@mail.gmail.com> Date: Wed, 23 Dec 2009 17:25:06 -0800 Content-Transfer-Encoding: 7bit Message-Id: <226A9671-E948-402A-9722-F035BF466B33@gmail.com> References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> <5da0588e0912221741r48395defnd11e34728d2b7b97@mail.gmail.com> <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com> <5da0588e0912231644w2a7afb9dg41ceffbafc8c2df6@mail.gmail.com> <409922ED-42D5-4892-B74D-D2E696846AFB@gmail.com> <5da0588e0912231702v3e72e121j75ac077831723a15@mail.gmail.com> <5da0588e0912231706l1e188ebdpca83d907fb694773@mail.gmail.com> To: Rich X-Mailer: Apple Mail (2.1077) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 01:26:05 -0000 On Dec 23, 2009, at 5:06 PM, Rich wrote: > IPMI is a great thing, let me tell you. :) Can one just buy a random card and plug it in to a generic motherboard and have it work? I was under the impression that this was generally only available on server-class hardware... > > Also, Linux's magic sysrq has saved me several times. Me too! I wish FreeBSD had something like this... maybe I'll write a kernel patch to support just the hard-reset bit of it... 
;-) From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 01:30:15 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6A9BA106568B for ; Thu, 24 Dec 2009 01:30:15 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f227.google.com (mail-fx0-f227.google.com [209.85.220.227]) by mx1.freebsd.org (Postfix) with ESMTP id EF1088FC12 for ; Thu, 24 Dec 2009 01:30:14 +0000 (UTC) Received: by fxm27 with SMTP id 27so8019811fxm.3 for ; Wed, 23 Dec 2009 17:30:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=NYVuMjag6r8RFI7f07sGtuZ4ldMSrRmSOya0Hbx+6vA=; b=Y1F4TwUoGDNqBN5DbJ/XJgYhdrvZNZOHTatHOIjHGtKrCgmUx1ikgmB3g8K2+GahlX d6ymih7CNpyYc5tg+gXnNGDP1vWpJ+J5xWYYEw2pF/Y96mhBiCi8bDtgzZyZKixoMUQ+ H9L5KdIGoXBUvIoZoMwKf05xdhkqUcrwuZ6X8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=HiVvH4nghARpwzc/GNoR73crN19SkcBsgx/FYqgD+XgQNLqz6pkCx3ibMc6XPRw1fJ fC2cytU3KEfx06mopeWmJD3sNtfLcn1kt2d0GExXY3aZDxiiDVyT1XWUcsuLirxMGBoG wXWUD2juXCjlxv3InK4dIbZhxuuN9ZsRjhFv8= MIME-Version: 1.0 Received: by 10.239.158.66 with SMTP id t2mr1283334hbc.200.1261618213715; Wed, 23 Dec 2009 17:30:13 -0800 (PST) In-Reply-To: <226A9671-E948-402A-9722-F035BF466B33@gmail.com> References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> <5da0588e0912221741r48395defnd11e34728d2b7b97@mail.gmail.com> <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com> <5da0588e0912231644w2a7afb9dg41ceffbafc8c2df6@mail.gmail.com> <409922ED-42D5-4892-B74D-D2E696846AFB@gmail.com> <5da0588e0912231702v3e72e121j75ac077831723a15@mail.gmail.com> <5da0588e0912231706l1e188ebdpca83d907fb694773@mail.gmail.com> 
<226A9671-E948-402A-9722-F035BF466B33@gmail.com> Date: Wed, 23 Dec 2009 20:30:13 -0500 Message-ID: <5da0588e0912231730i7f255ea6ycb7318ef08b46d9d@mail.gmail.com> From: Rich To: Steven Schlansker Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 01:30:15 -0000 IPMI cards require a slot to go in, but I believe you can buy PCI cards that are just an IPMI slot. It can't be "that hard"...you just flush disk state and then triple-fault... - Rich From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 03:15:50 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 276521065692 for ; Thu, 24 Dec 2009 03:15:50 +0000 (UTC) (envelope-from matt@corp.spry.com) Received: from mail-yx0-f171.google.com (mail-yx0-f171.google.com [209.85.210.171]) by mx1.freebsd.org (Postfix) with ESMTP id DC3878FC1B for ; Thu, 24 Dec 2009 03:15:49 +0000 (UTC) Received: by yxe1 with SMTP id 1so7434088yxe.3 for ; Wed, 23 Dec 2009 19:15:49 -0800 (PST) Received: by 10.101.136.3 with SMTP id o3mr16984425ann.173.1261624548065; Wed, 23 Dec 2009 19:15:48 -0800 (PST) Received: from ?10.0.1.193? 
(c-24-19-45-95.hsd1.wa.comcast.net [24.19.45.95]) by mx.google.com with ESMTPS id 36sm3499733yxh.67.2009.12.23.19.15.46 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 23 Dec 2009 19:15:47 -0800 (PST) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=us-ascii From: Matt Simerson X-Priority: 3 (Normal) In-Reply-To: <1696529130.20091223212612@pyro.de> Date: Wed, 23 Dec 2009 19:18:41 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: <1555A977-DDDB-47A5-83AF-4096D610C73C@spry.com> References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1077) Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 03:15:50 -0000

On Dec 23, 2009, at 12:26 PM, Solon Lutz wrote:

> Hi,
>
> I opted for two 12-disc raidz2.
> Reasons were: Space is more important than performance.
>
> But performance is very poor - have a look at the iostats - sometimes nothing really seems
> to happen for up to ten seconds, or very little data gets written. Might this be a problem
> of the amd64 system having only 4GB of RAM? Any tuneable sysctls? Enabling prefetch didn't help...:

When testing on an i386 system with only 4GB of RAM, disabling prefetch did help. These are the settings I used, way back when...

$ cat /boot/loader.conf
#vm.kmem_size="1024M"
#vm.kmem_size_max="1024M"
#vfs.zfs.prefetch_disable=1
#vfs.zfs.arc_min="16M"
#vfs.zfs.arc_max="64M"

I ultimately worked around that problem by no longer using raidz2.
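[Editor's note] The stalls Solon describes (long runs of zero write activity in the iostat capture quoted below) can be quantified rather than eyeballed. A small sketch, assuming the standard one-pool-per-line `zpool iostat` column order (pool, used, avail, read ops, write ops, read bandwidth, write bandwidth); the helper name is made up:

```shell
# Count sampling intervals in `zpool iostat <pool> 1` output during
# which the named pool performed no writes at all (zero write ops and
# zero write bandwidth). Reads iostat-style lines on stdin.
count_idle_intervals() {
  awk -v pool="$1" '
    # "$5 + 0" coerces suffixed values like 1.23K to a nonzero number,
    # so only genuine zeros are counted as idle.
    $1 == pool && $5 + 0 == 0 && $7 + 0 == 0 { n++ }
    END { print n + 0 }'
}
```

For example, `zpool iostat store-1 1 | count_idle_intervals store-1` run for a few minutes reports how many one-second intervals the pool wrote nothing.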
Matt

>                capacity     operations    bandwidth                  capacity     operations    bandwidth
> pool          used  avail   read  write   read  write   pool        used  avail   read  write   read  write
> ----------   -----  -----  -----  -----  -----  -----   ---------- -----  -----  -----  -----  -----  -----
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0  1.23K      0   153M
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0    510
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0    510
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0    835      0   104M
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0    645      0  68.7M
> backup-7     1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7     1.23T   130G    474      0  59.3M      0   store-1     616G  15.6T      0      4      0  5.49K
> backup-7     1.23T   130G    494      0  61.8M      0   store-1     616G  15.6T      0      0      0      0
> backup-7     1.23T   130G     53      0  6.73M      0   store-1     617G  15.6T      0     19      0  39.4K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     30      0   140K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     33      0  72.8K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     65      0   162K
> backup-7     1.23T   130G     71      0  8.92M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G    636      0  79.4M      0   store-1     617G  15.6T      0      4      0  10.5K
> backup-7     1.23T   130G    485      0  60.6M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G     85      0  10.7M      0   store-1     617G  15.6T      0     15      0  39.9K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     17      0  40.4K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     31      0  73.3K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     62      0   156K
> backup-7     1.23T   130G     17      0  2.24M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G    189      0  23.6M      0   store-1     617G  15.6T      0      9      0  14.0K
> backup-7     1.23T   130G      2     74   133K   247K   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     12      0  17.5K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      1      0   255K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0    100      0  12.6M
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0   1012      0   126M
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     33      0   137K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     19      0   293K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0    153      0  19.2M
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0    109      0  13.7M
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      4      0  15.5K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0  1.54K      0   188M
> backup-7     1.23T   130G      6      0   888K      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G    556      0  69.5M      0   store-1     617G  15.6T      0      8      0  11.0K
> backup-7     1.23T   130G    500      0  62.5M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      5      0   766K      0   store-1     617G  15.6T      0     16      0  30.4K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     29      0   136K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     38      0  84.7K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     57      0   145K
> backup-7     1.23T   130G     92      0  11.6M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G    497      0  62.1M      0   store-1     617G  15.6T      0      8      0  9.48K
> backup-7     1.23T   130G    534      0  66.7M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G    153      0  19.2M      0   store-1     617G  15.6T      0     12      0  12.0K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     26      0   101K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     34      0  76.3K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     62      0   155K
> backup-7     1.23T   130G     75      0  9.48M      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G    690      0  86.3M      0   store-1     617G  15.6T      0      6      0  12.0K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     10      0  14.0K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      1      0   255K
> backup-7     1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
>
>
>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 03:20:35 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1789C1065695 for ; Thu, 24 Dec 2009 03:20:35 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-fx0-f227.google.com (mail-fx0-f227.google.com [209.85.220.227]) by mx1.freebsd.org (Postfix) with ESMTP id 7417D8FC17 for ; Thu, 24 Dec 2009 03:20:34 +0000 (UTC) Received: by fxm27 with SMTP id 27so8052578fxm.3 for ; Wed, 23 Dec 2009 19:20:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=yemCudeYBp7IIjeQy/R54Pwu4/gyjQ8wThBZBHeg1Fc=; b=t3EJ/mxfN745dwGR7m8hjutYRrTMcKMsjGCm0kPIrPoBgb+xs7lDTig8+pmfPBHQJc cEFk1bVTCudl/unY2vVRMDqeJjeSWvhhcdE8c/Y51aPRGYXY4eKlffIaNPmu9WeTQHZx Zx0WA2c+GqXsw6Z3hvs98zUg3xo4fKncl9aJI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma;
h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=kkCjxVWjT3JscBEjEgvH30U4NSGjdw7ZmiomG0eq36EzVdCvDbOkJYPi0Jy8PLx7E3 nAKO/e7Rlft4DsjeYCc43Nfk29Su4qg+3kOkDiqkW4yJmdPGRMAFjV6/hS4kRF180PwV JOqx9yADgiV/17fZn+G6xUiQ+/AuYU7hxY84o= MIME-Version: 1.0 Received: by 10.239.145.149 with SMTP id s21mr1054254hba.141.1261624833379; Wed, 23 Dec 2009 19:20:33 -0800 (PST) In-Reply-To: <1555A977-DDDB-47A5-83AF-4096D610C73C@spry.com> References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1555A977-DDDB-47A5-83AF-4096D610C73C@spry.com> Date: Wed, 23 Dec 2009 22:20:33 -0500 Message-ID: <5da0588e0912231920q49f78546i1c87cb2cfc05fc32@mail.gmail.com> From: Rich To: Matt Simerson Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: base64 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 03:20:35 -0000

FreeBSD 7 or 8? Because 8 purports to need far less tuning...

- Rich

--

I would rather lose the war and win the peace. -- Bob Marley

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 05:05:48 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0308C106568D for ; Thu, 24 Dec 2009 05:05:48 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f226.google.com (mail-ew0-f226.google.com [209.85.219.226]) by mx1.freebsd.org (Postfix) with ESMTP id 883BF8FC13 for ; Thu, 24 Dec 2009 05:05:47 +0000 (UTC) Received: by ewy26 with SMTP id 26so4473510ewy.3 for ; Wed, 23 Dec 2009 21:05:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma;
h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=EcaJ3aYYvS7ymM000phOVJWOSLmoYtm1WfadHuEhiwk=; b=pHAGyuCyksBNRDfKp/lYSOvaPSV2f4PyWKuxKQ3oY5sR9/28uxvoeGwyrmRWuaRYZu klcABAYpMkjVrl3hjQkJbRg9M1+cy8DDxe46ets7etdCLbKP1SYURrlBLL+jFBBd4g89 kZrpkNKqq9FAA470wvKSIpsIix4NsIlnA4GrA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=TJAf0HJPil5uZu4OU931ImC11giBRDjzEU4YiZ3lhcYOXmq+3WamezCM3fwOFoYPtg RIVfIHWn+dbZL/uUoK3a+jyXkYxnsjeZojKyHl14ZwwjLjT5VYvG9qAq2cmxCl0waAlU 15KpDcKzu9OjlIeigwdXm1sQB+vOH84iHecGk= MIME-Version: 1.0 Received: by 10.216.91.73 with SMTP id g51mr3919993wef.68.1261631146564; Wed, 23 Dec 2009 21:05:46 -0800 (PST) In-Reply-To: <5da0588e0912231920q49f78546i1c87cb2cfc05fc32@mail.gmail.com> References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1555A977-DDDB-47A5-83AF-4096D610C73C@spry.com> <5da0588e0912231920q49f78546i1c87cb2cfc05fc32@mail.gmail.com> Date: Thu, 24 Dec 2009 00:05:46 -0500 Message-ID: From: Thomas Burgess To: Rich Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 05:05:48 -0000 On Wed, Dec 23, 2009 at 10:20 PM, Rich wrote: > FreeBSD 7 or 8? Because 8 purports to need far less tuning... > > - Rich > > That only applies to amd64. i386 still needs tuning. 
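The i386 tuning alluded to here was typically done through loader tunables. A sketch of the era's usual /boot/loader.conf knobs follows; the values are illustrative placeholders, not recommendations from this thread, and depend on the machine:

```shell
# /boot/loader.conf -- illustrative ZFS tuning for i386, FreeBSD 7/8 era.
vm.kmem_size="512M"            # enlarge the kernel memory map (small by default on i386)
vm.kmem_size_max="512M"
vfs.zfs.arc_max="160M"         # cap the ARC so it fits inside kmem
vfs.zfs.prefetch_disable="1"   # prefetch is often disabled on low-memory boxes
```

(Going beyond these limits on i386 also required rebuilding the kernel with a larger KVA_PAGES option.)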
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 13:17:30 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 25960106566C for ; Thu, 24 Dec 2009 13:17:30 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id 714488FC15 for ; Thu, 24 Dec 2009 13:17:29 +0000 (UTC) Received: from volatile.chemikals.org (adsl-67-252-59.shv.bellsouth.net [98.67.252.59]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id C1422A1CAF8F; Thu, 24 Dec 2009 07:17:27 -0600 (CST) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.3/8.14.3) with ESMTP id nBODHNjb042895; Thu, 24 Dec 2009 07:17:24 -0600 (CST) (envelope-from morganw@chemikals.org) Date: Thu, 24 Dec 2009 07:17:23 -0600 (CST) From: Wes Morgan X-X-Sender: morganw@volatile To: Steven Schlansker In-Reply-To: <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com> Message-ID: References: <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> <5da0588e0912221741r48395defnd11e34728d2b7b97@mail.gmail.com> <9CEE3EE5-2CF7-440E-B5F4-D2BD796EA55C@gmail.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: clamav-milter 0.95.2 at warped X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 13:17:30 -0000 On Wed, 23 Dec 2009, Steven Schlansker wrote: > > On Dec 22, 2009, at 5:41 PM, Rich wrote: > >> 
http://kerneltrap.org/mailarchive/freebsd-fs/2009/9/30/6457763 may be
>> useful to you - it's what we did when we got stuck in a resilver loop.
>> I recall being in the same state you're in right now at one point, and
>> getting out of it from there.
>>
>> I think if you apply that patch, you'll be able to cancel the
>> resilver, and then resilver again with the device you'd like to
>> resilver with.
>>
>
> Thanks for the suggestion, but the problem isn't that it's stuck
> in a resilver loop (which is what the patch seems to try to avoid)
> but that I can't detach a drive.
>
> Now I got clever and fudged a label onto the new drive (copied the first
> 50MB of one of the dying drives), ran a scrub, and have this layout -
>
>   pool: universe
>  state: DEGRADED
> status: One or more devices has experienced an unrecoverable error. An
>         attempt was made to correct the error. Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>         using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: scrub completed after 20h58m with 0 errors on Wed Dec 23 11:36:43 2009
> config:
>
>         NAME                       STATE     READ WRITE CKSUM
>         universe                   DEGRADED     0     0     0
>           raidz2                   DEGRADED     0     0     0
>             ad16                   ONLINE       0     0     0
>             replacing              DEGRADED     0     0 40.7M
>               ad26                 ONLINE       0     0     0  506G repaired
>               6170688083648327969  UNAVAIL      0 88.7M     0  was /dev/ad12
>             ad8                    ONLINE       0     0     0
>             concat/back2           ONLINE       0     0     0
>             ad10                   ONLINE       0     0     0
>             concat/ad4ex           ONLINE       0     0     0
>             ad24                   ONLINE       0     0     0
>             concat/ad6ex           ONLINE      48     0     0  28.5K repaired
>
> Why has the replacing vdev not gone away? I still can't detach -
> [steven@universe:~]% sudo zpool detach universe 6170688083648327969
> cannot detach 6170688083648327969: no valid replicas
> even though now there actually is a valid replica (ad26)

Try detaching ad26. If it lets you do that it will abort the replacement and
then you just do another replacement with the real device.
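The sequence Wes describes could look like the following sketch, using the device names from the zpool status output above. This is only an illustration of the suggested approach, to be verified against the live pool before running anything:

```shell
# Detach the healthy half (ad26) of the stuck "replacing" vdev; if ZFS
# allows it, this aborts the half-finished replacement.
zpool detach universe ad26

# Then start a clean replacement of the missing member (the numeric
# GUID shown as UNAVAIL) with the real device.
zpool replace universe 6170688083648327969 ad26

# Watch the resilver progress.
zpool status universe
```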
If it won't let you do that, you may be stuck having to do some metadata tricks.

> Additionally, running zpool clear hangs permanently and in fact freezes all IO
> to the pool. Since I've mounted /usr from the pool, this is effectively
> death to the system. Any other zfs commands seem to work okay
> (zpool scrub, zfs mount, etc.). Just clear is insta-death. I can't
> help but suspect that this is caused by the now non-sensical vdev configuration
> (replacing with one good drive and one nonexistent one)...
>
> Any further thoughts? Thanks,
> Steven
>
>
>> - Rich
>>
>> On Tue, Dec 22, 2009 at 6:15 PM, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>>> Steven Schlansker wrote:
>>>>
>>>> As a corollary, you may notice some funky concat business going on.
>>>> This is because I have drives which are very slightly different in size (< 1MB)
>>>> and whenever one of them goes down and I bring the pool up, it helpfully (?)
>>>> expands the pool by a whole megabyte then won't let the drive back in.
>>>> This is extremely frustrating... is there any way to fix that? I'm
>>>> eventually going to keep expanding each of my drives one megabyte at a time
>>>> using gconcat and space on another drive! Very frustrating...
>>>
>>> You can avoid it by partitioning the drives to the well-known 'minimal' size
>>> (the size of the smallest disk) and using the partition instead of the raw disk.
>>> For example ad12s1 instead of ad12 (if you create slices with fdisk)
>>> or ad12p1 (if you create partitions with gpart).
>>>
>>> You can also use labels instead of device names.
>>>
>>> Miroslav Lachman
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>>
>>
>> --
>>
>> If you are over 80 years old and accompanied by your parents, we will
>> cash your check.
> > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 15:17:55 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DB040106566B; Thu, 24 Dec 2009 15:17:55 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id B3EDE8FC0A; Thu, 24 Dec 2009 15:17:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nBOFHt22008387; Thu, 24 Dec 2009 15:17:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nBOFHtRi008383; Thu, 24 Dec 2009 15:17:55 GMT (envelope-from linimon) Date: Thu, 24 Dec 2009 15:17:55 GMT Message-Id: <200912241517.nBOFHtRi008383@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141950: [unionfs] [lor] ufs/unionfs(/ufs) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 15:17:55 -0000 Old Synopsis: [lor] ufs/unionfs(/ufs) New Synopsis: [unionfs] [lor] ufs/unionfs(/ufs) Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Dec 24 15:17:40 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=141950 From owner-freebsd-fs@FreeBSD.ORG Thu Dec 24 16:53:37 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4CD3F1065672 for ; Thu, 24 Dec 2009 16:53:37 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smtp-out3.tiscali.nl (smtp-out3.tiscali.nl [195.241.79.178]) by mx1.freebsd.org (Postfix) with ESMTP id E79C38FC1B for ; Thu, 24 Dec 2009 16:53:36 +0000 (UTC) Received: from [212.123.145.58] (helo=sjakie.klop.ws) by smtp-out3.tiscali.nl with esmtp (Exim) (envelope-from ) id 1NNqwR-0001Cm-Kj; Thu, 24 Dec 2009 17:53:35 +0100 Received: from 82-170-177-25.ip.telfort.nl (localhost [127.0.0.1]) by sjakie.klop.ws (Postfix) with ESMTP id D3F581444B; Thu, 24 Dec 2009 17:53:29 +0100 (CET) Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: "Solon Lutz" References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> Date: Thu, 24 Dec 2009 17:53:29 +0100 MIME-Version: 1.0 From: "Ronald Klop" Message-ID: In-Reply-To: <1696529130.20091223212612@pyro.de> User-Agent: Opera Mail/10.10 (FreeBSD) Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 24 Dec 2009 16:53:37 -0000 Isn't it write caching? My Solaris machine at work also flushes the data every 30 seconds. Ronald. On Wed, 23 Dec 2009 21:26:12 +0100, Solon Lutz wrote: > Hi, > > I opted for two 12-disc raidz2. > Reasons were: Space is more important than performance. 
>
> But performance is very poor - have a look at the iostats - sometimes
> nothing really seems to happen for up to ten seconds, or very little
> data gets written. Might this be a problem of the amd64 system having
> only 4GB of RAM? Any tuneable sysctls? Enabling prefetch didn't help...:
>
>                capacity     operations    bandwidth                 capacity     operations    bandwidth
> pool         used  avail   read  write   read  write   pool       used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----   ---------  -----  -----  -----  -----  -----  -----
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0  1.23K      0   153M
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0    510
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0    510
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0    835      0   104M
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0    645      0  68.7M
> backup-7    1.23T   130G      0      0      0      0   store-1     616G  15.6T      0      0      0      0
> backup-7    1.23T   130G    474      0  59.3M      0   store-1     616G  15.6T      0      4      0  5.49K
> backup-7    1.23T   130G    494      0  61.8M      0   store-1     616G  15.6T      0      0      0      0
> backup-7    1.23T   130G     53      0  6.73M      0   store-1     617G  15.6T      0     19      0  39.4K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     30      0   140K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     33      0  72.8K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     65      0   162K
> backup-7    1.23T   130G     71      0  8.92M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G    636      0  79.4M      0   store-1     617G  15.6T      0      4      0  10.5K
> backup-7    1.23T   130G    485      0  60.6M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G     85      0  10.7M      0   store-1     617G  15.6T      0     15      0  39.9K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     17      0  40.4K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     31      0  73.3K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     62      0   156K
> backup-7    1.23T   130G     17      0  2.24M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G    189      0  23.6M      0   store-1     617G  15.6T      0      9      0  14.0K
> backup-7    1.23T   130G      2     74   133K   247K   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     12      0  17.5K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      1      0   255K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0    100      0  12.6M
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0   1012      0   126M
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     33      0   137K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     19      0   293K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0    153      0  19.2M
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0    109      0  13.7M
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      4      0  15.5K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0  1.54K      0   188M
> backup-7    1.23T   130G      6      0   888K      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G    556      0  69.5M      0   store-1     617G  15.6T      0      8      0  11.0K
> backup-7    1.23T   130G    500      0  62.5M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      5      0   766K      0   store-1     617G  15.6T      0     16      0  30.4K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     29      0   136K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     38      0  84.7K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     57      0   145K
> backup-7    1.23T   130G     92      0  11.6M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G    497      0  62.1M      0   store-1     617G  15.6T      0      8      0  9.48K
> backup-7    1.23T   130G    534      0  66.7M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G    153      0  19.2M      0   store-1     617G  15.6T      0     12      0  12.0K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     26      0   101K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     34      0  76.3K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     62      0   155K
> backup-7    1.23T   130G     75      0  9.48M      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G    690      0  86.3M      0   store-1     617G  15.6T      0      6      0  12.0K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0     10      0  14.0K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      1      0   255K
> backup-7    1.23T   130G      0      0      0      0   store-1     617G  15.6T      0      0      0      0
>
>
>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 00:46:31 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BD7A1106568B for ; Fri, 25 Dec 2009 00:46:31 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f226.google.com (mail-ew0-f226.google.com [209.85.219.226]) by mx1.freebsd.org (Postfix)
with ESMTP id 5077E8FC0A for ; Fri, 25 Dec 2009 00:46:31 +0000 (UTC) Received: by ewy26 with SMTP id 26so5207014ewy.3 for ; Thu, 24 Dec 2009 16:46:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=npSItJ52yU1y8qTUA3PCCgXw3J0Co57aY2CYPf3NmcY=; b=KSDb0esHMa5HebDwqN1lRy3wmdy7SbOBU9c3rZbvR4IRpLqRnI9zueMbKjli+kqOAN t1oFZ+auxKcXXWUccC9dsU+5xec0cqy7B3EnETDjWS/n641YIvAyM4moS7QHYPtyR4rb hAeOxNXDdcOMv8Lh/qT4/8I2/UE90zTTWJsx8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=wGGxQlJUORsH7VE1x8ktfWAgTaJp6MUpxX+BU2dsbEm3oml2YYoQh6OXjOfB3lhz8U P6SzQjopcZYeFC7naF0apeEtTj1bo8ug+ypfj9ZZaVsZWAr93CJj+9cVDY0GuhV7yUgr 9cyMRC6QMCuMCv55AFKdSqQKTMQp+We44qLyI= MIME-Version: 1.0 Received: by 10.216.88.138 with SMTP id a10mr3185885wef.163.1261701990199; Thu, 24 Dec 2009 16:46:30 -0800 (PST) In-Reply-To: References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> Date: Thu, 24 Dec 2009 19:46:30 -0500 Message-ID: From: Thomas Burgess To: Ronald Klop Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 00:46:31 -0000 On Thu, Dec 24, 2009 at 11:53 AM, Ronald Klop wrote: > Isn't it write caching? > My Solaris machine at work also flushes the data every 30 seconds. > > Ronald. > > I think you are right. 
ZFS does work in bursts...it's very different from what most people expect. I know it was really weird to me when I first saw it. From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 02:00:01 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D724A1065670 for ; Fri, 25 Dec 2009 02:00:01 +0000 (UTC) (envelope-from root@ubh.homeip.net) Received: from smtp.bredband2.com (smtp.bredband2.com [83.219.192.166]) by mx1.freebsd.org (Postfix) with ESMTP id 8E3C88FC1A for ; Fri, 25 Dec 2009 02:00:01 +0000 (UTC) Received: from ubh.homeip.net (c-83-233-35-119.cust.bredband2.com [83.233.35.119]) by smtp.bredband2.com (Postfix) with ESMTPA id 59C233421A for ; Fri, 25 Dec 2009 02:40:33 +0100 (CET) Received: by ubh.homeip.net (Postfix, from userid 0) id EEF592842D; Fri, 25 Dec 2009 02:40:30 +0100 (CET) To: freebsd-fs@freebsd.org Message-Id: <20091225014030.EEF592842D@ubh.homeip.net> Date: Fri, 25 Dec 2009 02:40:30 +0100 (CET) From: root@ubh.homeip.net (Charlie Root) X-Bredband2-MailScanner: Found to be clean X-Bredband2-MailScanner-From: root@ubh.homeip.net X-Spam-Status: No Subject: nullfs, delete files and behaves super slow X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 02:00:01 -0000 Moving a file to or from a nullfs-mounted part of a filesystem to another location within the same filesystem results in a complete file copy. More severe: if I move a file from one place to another where the two places are the same (one mounted with mount_nullfs), the file disappears.
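A minimal way to reproduce the reported behaviour might look like the sketch below. The paths are illustrative only, root privileges are needed for the mount, and the last mv is the step where the copy-instead-of-rename (and, per the report, the data loss) would show up on an affected system:

```shell
# Set up a nullfs view of a scratch directory (illustrative paths).
mkdir -p /tmp/lower /tmp/view
mount_nullfs /tmp/lower /tmp/view

# A move between the nullfs view and the underlying filesystem crosses
# a mount boundary, so mv(1) degrades to copy+unlink instead of rename().
touch /tmp/lower/testfile
mv /tmp/view/testfile /tmp/lower/testfile.moved

ls -l /tmp/lower    # check whether the file survived the move
umount /tmp/view
```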
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 03:00:55 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9F8921065672; Fri, 25 Dec 2009 03:00:55 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 76D178FC23; Fri, 25 Dec 2009 03:00:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id nBP30tWC053522; Fri, 25 Dec 2009 03:00:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id nBP30t7w053512; Fri, 25 Dec 2009 03:00:55 GMT (envelope-from linimon) Date: Fri, 25 Dec 2009 03:00:55 GMT Message-Id: <200912250300.nBP30t7w053512@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/141992: [ufs] fsck cannot repair file system in which it finds an error [regression] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 03:00:55 -0000 Old Synopsis: fsck cannot repair file system in which it finds an error New Synopsis: [ufs] fsck cannot repair file system in which it finds an error [regression] Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Dec 25 02:59:51 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=141992 From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 11:03:55 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E6FE91065698 for ; Fri, 25 Dec 2009 11:03:55 +0000 (UTC) (envelope-from solon@pyro.de) Received: from srv23.fsb.echelon.bnd.org (mail.pyro.de [83.137.99.96]) by mx1.freebsd.org (Postfix) with ESMTP id 963678FC12 for ; Fri, 25 Dec 2009 11:03:55 +0000 (UTC) Received: from port-87-193-183-44.static.qsc.de ([87.193.183.44] helo=MORDOR) by srv23.fsb.echelon.bnd.org with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1NO7xX-0006j4-0z; Fri, 25 Dec 2009 12:03:54 +0100 Date: Fri, 25 Dec 2009 12:03:30 +0100 From: Solon Lutz X-Mailer: The Bat! (v3.99.25) Professional Organization: pyro.labs berlin X-Priority: 3 (Normal) Message-ID: <1266543768.20091225120330@pyro.de> To: Thomas Burgess In-Reply-To: References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Spam-Score: -1.4 (-) X-Spam-Report: Spam detection software, running on the system "srv23.fsb.echelon.bnd.org", has identified this incoming email as possible spam. The original message has been attached to this so you can view it (if it isn't spam) or label similar future email. If you have any questions, see The administrator of that system for details. Content preview: Guten Tag Thomas Burgess, Dear Thomas Burgess, am Freitag, 25. Dezember 2009 um 01:46 schrieben Sie: on Freitag, 25. Dezember 2009 at 01:46 you wrote: > On Thu, Dec 24, 2009 at 11:53 AM, Ronald Klop wrote: > Isn't it write caching? > My Solaris machine at work also flushes the data every 30 seconds. [...] 
Content analysis details: (-1.4 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -1.4 ALL_TRUSTED Passed through trusted hosts only via SMTP X-Spam-Flag: NO Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 11:03:56 -0000 Dear Thomas Burgess, on Friday, 25 December 2009 at 01:46 you wrote: > On Thu, Dec 24, 2009 at 11:53 AM, Ronald Klop wrote: > Isn't it write caching? > My Solaris machine at work also flushes the data every 30 seconds. > Ronald. > I think you are right. ZFS does work in bursts...it's very different than what most people expect. I know it was really weird to me when i first saw it. But my case isn't anywhere near that kind of performance... According to iostat and gstat it reads and writes some data for 3-4 seconds and then sits there silently for another 10 and does absolutely nothing. Even when I'm doing single HD to single HD copy...
Best regards, Solon From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 11:11:25 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A51F5106566B for ; Fri, 25 Dec 2009 11:11:25 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f226.google.com (mail-ew0-f226.google.com [209.85.219.226]) by mx1.freebsd.org (Postfix) with ESMTP id 322A08FC0C for ; Fri, 25 Dec 2009 11:11:25 +0000 (UTC) Received: by ewy26 with SMTP id 26so5443089ewy.3 for ; Fri, 25 Dec 2009 03:11:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=lq7NWErW4atTlRXyiUBq3ZumcQaAw+G/C7ZV/LW4SAM=; b=PILHJWqtYo6JvJ0F0B8db8F4TQs13PBfWs0XZwSHOu6XgY+9IldMc7NOYaGVEBdn81 zxcBj8jn9mbeXxIV0I6Ia41ygtzkqXaF7dpJeauoOYIetcKny8J/QV8gzL3x/YtzojBe A5JVwYFD/o2Ix4C9koLnpBH/jJazfBG2Li65E= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=plU+UM6qtdlOj4vxcs63oEkvrv30T42xmwVf1Vqj/0iXAtpO8INR9O2N1+AT94cqEP kqtLcDPY3H0+mgtyPD4FSrl0iaYIdd/lRnspjBjECpJNnE//LseG9KiIgbGonfhpBJiW JwCjfh60aH8eJWAKVz/e3IQTvSzHV9i/PyltE= MIME-Version: 1.0 Received: by 10.216.86.203 with SMTP id w53mr557285wee.58.1261739484092; Fri, 25 Dec 2009 03:11:24 -0800 (PST) In-Reply-To: <1266543768.20091225120330@pyro.de> References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1266543768.20091225120330@pyro.de> Date: Fri, 25 Dec 2009 06:11:24 -0500 Message-ID: From: Thomas Burgess To: Solon Lutz Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org 
Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 11:11:25 -0000 Depending on tuning, you can make it flush to disk more often. It is also highly dependent on how much memory you have. What ZFS tries to do is this: It will collect stuff into memory then flush it to disk. What you describe is EXACTLY what ZFS does. Sometimes it may not write for up to 30 seconds. I know on my personal system, i see this happen a lot but it doesn't seem to have a hugely negative impact on performance for what i use my machine for. Depending on your setup, you may want to try various sysctl settings. I found that disabling prefetch can have a huge impact on some systems. On Fri, Dec 25, 2009 at 6:03 AM, Solon Lutz wrote: > Guten Tag Thomas Burgess, > Dear Thomas Burgess, > > am Freitag, 25. Dezember 2009 um 01:46 schrieben Sie: > on Freitag, 25. Dezember 2009 at 01:46 you wrote: > > > > On Thu, Dec 24, 2009 at 11:53 AM, Ronald Klop < > ronald-freebsd8@klop.yi.org> wrote: > > Isn't it write caching? > > My Solaris machine at work also flushes the data every 30 seconds. > > > Ronald. > > > > > I think you are right. ZFS does work in bursts...it's very different > than what most people expect. I know it was really weird to me when i > first saw it. > > But my case isn't anywhere near of performance... According to iostat and > gstat it reads > and writes some data for 3-4 seconds and then sits there silently for > another 10 and does > absolutely nothing. > Even when I'm doing single HD to single HD copy... 
> > > Best regards, > > Solon > > > > From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 11:23:54 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F16F31065670 for ; Fri, 25 Dec 2009 11:23:54 +0000 (UTC) (envelope-from solon@pyro.de) Received: from srv23.fsb.echelon.bnd.org (mail.pyro.de [83.137.99.96]) by mx1.freebsd.org (Postfix) with ESMTP id A1E868FC12 for ; Fri, 25 Dec 2009 11:23:54 +0000 (UTC) Received: from port-87-193-183-44.static.qsc.de ([87.193.183.44] helo=MORDOR) by srv23.fsb.echelon.bnd.org with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1NO8Gs-0006lv-G6; Fri, 25 Dec 2009 12:23:53 +0100 Date: Fri, 25 Dec 2009 12:23:31 +0100 From: Solon Lutz X-Mailer: The Bat! (v3.99.25) Professional Organization: pyro.labs berlin X-Priority: 3 (Normal) Message-ID: <982740779.20091225122331@pyro.de> To: Thomas Burgess In-Reply-To: References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1266543768.20091225120330@pyro.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Spam-Score: -1.4 (-) X-Spam-Report: Spam detection software, running on the system "srv23.fsb.echelon.bnd.org", has identified this incoming email as possible spam. The original message has been attached to this so you can view it (if it isn't spam) or label similar future email. If you have any questions, see The administrator of that system for details. Content preview: > Depending on tuning, you can make it flush to disk more often. It is also highly dependent on how much memory you have. At the moment: 4GB. I'm about to try upgrading it to 6GB. Why can't it work like this all the time: [...] 
Content analysis details: (-1.4 points, 5.0 required)

 pts rule name              description
---- ---------------------- --------------------------------------------------
-1.4 ALL_TRUSTED            Passed through trusted hosts only via SMTP

X-Spam-Flag: NO
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS RaidZ2 with 24 drives?
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 25 Dec 2009 11:23:55 -0000

> Depending on tuning, you can make it flush to disk more often. It is also
> highly dependent on how much memory you have.

At the moment: 4GB. I'm about to try upgrading it to 6GB.

Why can't it work like this all the time:

device    r/s     w/s     kr/s     kw/s  wait  svc_t  %b
da0       0.0  1907.4      0.0  65494.8     0    0.6   6
ad10    680.7     0.0  87132.0      0.0    35   43.7  92

Effectively, it transfers 8-10MB/s! Took 24h for 1.2TB...

> I know on my personal system, i see this happen a lot but it doesn't seem
> to have a hugely negative impact on performance for what i use my machine
> for. Depending on your setup, you may want to try various sysctl settings.
> I found that disabling prefetch can have a huge impact on some systems.

Prefetch is not enabled because of RAM < 4GB... Can you name any tuneable
sysctls?
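For reference, the quoted copy rate can be sanity-checked with a one-liner (assuming decimal units, 1 TB = 10^12 bytes; binary units would give a slightly higher figure):

```shell
# Average rate for 1.2 TB moved in 24 hours, assuming decimal units.
awk 'BEGIN { printf "%.1f MB/s\n", 1.2e12 / (24 * 3600) / 1e6 }'
# -> 13.9 MB/s
```

That is in the same ballpark as the observed 8-10 MB/s, so the 24-hour figure is consistent with a sustained-throughput problem rather than a counting error.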
Best regards, Solon From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 14:35:23 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 35C421065694 for ; Fri, 25 Dec 2009 14:35:23 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id 756608FC0A for ; Fri, 25 Dec 2009 14:35:22 +0000 (UTC) Received: from volatile.chemikals.org (adsl-67-215-64.shv.bellsouth.net [98.67.215.64]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id 81BD9A1DDAA4; Fri, 25 Dec 2009 08:35:21 -0600 (CST) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.3/8.14.3) with ESMTP id nBPEZHl0094714; Fri, 25 Dec 2009 08:35:18 -0600 (CST) (envelope-from morganw@chemikals.org) Date: Fri, 25 Dec 2009 08:35:17 -0600 (CST) From: Wes Morgan X-X-Sender: morganw@volatile To: Solon Lutz In-Reply-To: <982740779.20091225122331@pyro.de> Message-ID: References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1266543768.20091225120330@pyro.de> <982740779.20091225122331@pyro.de> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: clamav-milter 0.95.2 at warped X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 14:35:23 -0000 On Fri, 25 Dec 2009, Solon Lutz wrote: >> Depending on tuning, you can make it flush to disk more often. It is also highly dependent on how much memory you have. > > At the moment: 4GB. I'm about to try upgrading it to 6GB. > > Why can't it work like this all the time: > > device r/s w/s kr/s kw/s wait svc_t %b > da0 0.0 1907.4 0.0 65494.8 0 0.6 6 > ad10 680.7 0.0 87132.0 0.0 35 43.7 92 > > > Effectively, it transfers 8-10MB/s! Took 24h for 1.2TB... > >> I know on my personal system, i see this happen a lot but it doesn't seem to have a hugely negative impact on >> performance for what i use my machine for. Depending on your setup, you may want to try various sysctl settings. I >> found that disabling prefetch can have a huge impact on some systems. > > Prefect is not enabled because of RAM < 4GB... I have my suspicions that this means your filesystem is heavily fragmented. I've had it happen to me on at least 3 pools, some of which were not even close to full, yet rebuilding the pool restored much of the performance. Hopefully with the block pointer rewrite support coming we will get some tools to address this. Right now I am not even aware of a tool that will check for fragmentation. 
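Regarding the "various sysctl settings" mentioned in the quoted text: on FreeBSD these are usually set as boot-time loader tunables. A minimal sketch for an 8.x-era system; the tunable names are real, but the values are illustrative only and should be checked against your own release:

```shell
# /boot/loader.conf -- illustrative ZFS tunables; verify the exact names on
# your release with `sysctl -a | grep vfs.zfs` before relying on them.
vfs.zfs.prefetch_disable=1     # turn off file-level prefetch
vfs.zfs.arc_max="3G"           # cap the ARC (example value for a 4 GB box)
```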
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 14:39:19 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 509931065692 for ; Fri, 25 Dec 2009 14:39:19 +0000 (UTC) (envelope-from wonslung@gmail.com) Received: from mail-ew0-f226.google.com (mail-ew0-f226.google.com [209.85.219.226]) by mx1.freebsd.org (Postfix) with ESMTP id D49C98FC1E for ; Fri, 25 Dec 2009 14:39:18 +0000 (UTC) Received: by ewy26 with SMTP id 26so5532809ewy.3 for ; Fri, 25 Dec 2009 06:39:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=5jg4xURstJxLN+ow2TO2Q4/MxmODdxDmkQDuDnkUji4=; b=kJxb5tWaMejIkoZMUe9QDpeRmTJxnzdHFTV8BtT8C9yqdiT/rnRKjt1Fz8nnX2a5Sd vYgqG/1Kjd7ozLJmY10z/2XTiOeqtw88I/9SKrvdD/q4MSQcBkD+B7b8t9YYmrwkspaO CFfZeNkZwjQB/MRXvSFoIuT0+ief+jqCuwwAk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=VljHgvFcVZ5m+rY3GKXPvNhF7GwMAocoWmO94wM7KcH7iTXsSvZ5M4SRyTt5Hz5GJk yxKhC532Tzflt3m3C5p/2bHxi59cpkSDOESMv0zPRpcd/9Xy2SByHJwqHFXVpUfd3NeC y3bZ2VlVIFhSbpdSXvcAKtO2hTorNurqdVu8M= MIME-Version: 1.0 Received: by 10.216.93.66 with SMTP id k44mr4212782wef.67.1261751957734; Fri, 25 Dec 2009 06:39:17 -0800 (PST) In-Reply-To: References: <568624531.20091215163420@pyro.de> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1266543768.20091225120330@pyro.de> <982740779.20091225122331@pyro.de> Date: Fri, 25 Dec 2009 09:39:17 -0500 Message-ID: From: Thomas Burgess To: Wes Morgan Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? 
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 25 Dec 2009 14:39:19 -0000

> I have my suspicions that this means your filesystem is heavily
> fragmented. I've had it happen to me on at least 3 pools, some of which
> were not even close to full, yet rebuilding the pool restored much of the
> performance. Hopefully with the block pointer rewrite support coming we
> will get some tools to address this. Right now I am not even aware of a
> tool that will check for fragmentation.

I'm pretty sure he is running on a brand new pool. Originally he asked about
creating a raidz2 vdev with 24 drives. Then, he said he "settled" for two
12-drive vdevs. Also, this being FreeBSD, it might be a while before we even
see the block pointer rewrite support even when it DOES hit.

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 14:39:52 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8C7F81065676 for ; Fri, 25 Dec 2009 14:39:52 +0000 (UTC) (envelope-from www@patpro.net)
Received: from rack.patpro.net (rack.patpro.net [193.30.227.216]) by mx1.freebsd.org (Postfix) with ESMTP id 578B08FC0A for ; Fri, 25 Dec 2009 14:39:52 +0000 (UTC)
Received: by rack.patpro.net (Postfix, from userid 80) id 4A12F88; Fri, 25 Dec 2009 15:23:41 +0100 (CET)
To: Bob Friesenhahn
MIME-Version: 1.0
Date: Fri, 25 Dec 2009 15:23:41 +0100
From: patpro
In-Reply-To:
References: <32CA2B73-3412-49DD-9401-4773CC73BED0@patpro.net>
Message-ID: <2d31d4474f4fdc31428a236df5a789db@localhost>
X-Sender: patpro@patpro.net
User-Agent: RoundCube Webmail/0.3.1
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Cc: freebsd-fs@freebsd.org
Subject: Re: snapshot implementation
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id:
Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 25 Dec 2009 14:39:52 -0000

On Wed, 23 Dec 2009 10:41:34 -0600 (CST), Bob Friesenhahn wrote:
>
> I don't know anything about snapshots in UFS, but snapshots in ZFS are
> certainly remarkably different. ZFS uses copy-on-write (COW) whenever
> a data block is updated

UFS snapshots also use a copy-on-write mechanism, so they're not so
different from ZFS snapshots.

patpro

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 14:39:52 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8CC12106568D for ; Fri, 25 Dec 2009 14:39:52 +0000 (UTC) (envelope-from www@patpro.net)
Received: from rack.patpro.net (rack.patpro.net [193.30.227.216]) by mx1.freebsd.org (Postfix) with ESMTP id 579268FC12 for ; Fri, 25 Dec 2009 14:39:52 +0000 (UTC)
Received: by rack.patpro.net (Postfix, from userid 80) id 6CAE1A7; Fri, 25 Dec 2009 15:29:53 +0100 (CET)
To: Barry Pederson
MIME-Version: 1.0
Date: Fri, 25 Dec 2009 15:29:53 +0100
From: patpro
In-Reply-To: <4B3283F2.7060804@barryp.org>
References: <32CA2B73-3412-49DD-9401-4773CC73BED0@patpro.net> <4B3283F2.7060804@barryp.org>
Message-ID: <3ea87f5f62bb8ba30d798d4605a64c83@localhost>
X-Sender: patpro@patpro.net
User-Agent: RoundCube Webmail/0.3.1
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Cc: freebsd-fs@freebsd.org
Subject: Re: snapshot implementation
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 25 Dec 2009 14:39:52 -0000

On Wed, 23 Dec 2009 14:56:18 -0600, Barry Pederson wrote:
> "...there's virtually no overhead at all due to the copy-on-write
> architecture.
> In fact, sometimes it is faster to take a snapshot rather
> than free the blocks containing the old data!"
>
> That's certainly not the case with UFS snapshots, which can take a long
> time to complete (we're talking freezing your machine's disk activity
> for many minutes), and are limited to 20 total.

UFS uses copy-on-write too. But you say many minutes to complete? Are you
perhaps talking about dump(1), which uses a snapshot as the basis for
dumping a live file system? I agree that UFS snapshot creation is not
lightning-fast, but many minutes seems a lot to me, and I have never
experienced such a long creation time.

patpro

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 17:08:45 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3D3EB1065670 for ; Fri, 25 Dec 2009 17:08:45 +0000 (UTC) (envelope-from solon@pyro.de)
Received: from srv23.fsb.echelon.bnd.org (mail.pyro.de [83.137.99.96]) by mx1.freebsd.org (Postfix) with ESMTP id DC9FA8FC14 for ; Fri, 25 Dec 2009 17:08:44 +0000 (UTC)
Received: from port-87-193-183-44.static.qsc.de ([87.193.183.44] helo=MORDOR) by srv23.fsb.echelon.bnd.org with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1NODea-0007Q9-OT; Fri, 25 Dec 2009 18:08:44 +0100
Date: Fri, 25 Dec 2009 18:08:21 +0100
From: Solon Lutz
X-Mailer: The Bat!
(v3.99.25) Professional Organization: pyro.labs berlin X-Priority: 3 (Normal) Message-ID: <168533615.20091225180821@pyro.de> To: Wes Morgan In-Reply-To: References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> <1266543768.20091225120330@pyro.de> <982740779.20091225122331@pyro.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Spam-Score: -1.4 (-) X-Spam-Report: Spam detection software, running on the system "srv23.fsb.echelon.bnd.org", has identified this incoming email as possible spam. The original message has been attached to this so you can view it (if it isn't spam) or label similar future email. If you have any questions, see The administrator of that system for details. Content preview: > I have my suspicions that this means your filesystem is heavily > fragmented. I've had it happen to me on at least 3 pools, some of which > were not even close to full, yet rebuilding the pool restored much of the > performance. Hopefully with the block pointer rewrite support coming we > will get some tools to address this. Right now I am not even aware of a > tool that will check for fragmentation. [...] Content analysis details: (-1.4 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -1.4 ALL_TRUSTED Passed through trusted hosts only via SMTP X-Spam-Flag: NO Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 17:08:45 -0000 > I have my suspicions that this means your filesystem is heavily > fragmented. 
I've had it happen to me on at least 3 pools, some of which
> were not even close to full, yet rebuilding the pool restored much of the
> performance. Hopefully with the block pointer rewrite support coming we
> will get some tools to address this. Right now I am not even aware of a
> tool that will check for fragmentation.

Fragmentation is not an issue, as the destination pool was freshly created
and the source was created in a continuous copy operation... =(

Best regards,

Solon Lutz

+-----------------------------------------------+
| Pyro.Labs Berlin - Creativity for tomorrow    |
| Wasgenstrasse 75/13 - 14129 Berlin, Germany   |
| www.pyro.de - phone + 49 - 30 - 48 48 58 58   |
| info@pyro.de - fax + 49 - 30 - 80 94 03 52    |
+-----------------------------------------------+

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 25 19:54:57 2009
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8F0B9106568D for ; Fri, 25 Dec 2009 19:54:57 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org)
Received: from mail.jrv.org (rrcs-24-73-246-106.sw.biz.rr.com [24.73.246.106]) by mx1.freebsd.org (Postfix) with ESMTP id 541A58FC24 for ; Fri, 25 Dec 2009 19:54:56 +0000 (UTC)
Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id nBPJssiP098939; Fri, 25 Dec 2009 13:54:55 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org)
Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org
DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=OJ0EeuZ1MrVd1KPWoEQ1SlaWACnYHvhm+i2CwapTzodm9jsPMlsAOfhvD5uHzPNJA waBL5MslXQvWJ/9HQXSokUNkLq1ynAWiiCYUgP1VmOLjQjJYc6UzIODY634SpQBExSV +CALWxcEzsGAQLJOfXTygvJxDTX4SRw1t+MWaNs=
Message-ID:
<4B35188E.7010602@jrv.org> Date: Fri, 25 Dec 2009 13:54:54 -0600 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812) MIME-Version: 1.0 To: Solon Lutz References: <568624531.20091215163420@pyro.de> <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com> <957649379.20091216005253@pyro.de> <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com> <1696529130.20091223212612@pyro.de> In-Reply-To: <1696529130.20091223212612@pyro.de> Content-Type: text/plain; charset=ISO-8859-15 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS RaidZ2 with 24 drives? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Dec 2009 19:54:57 -0000 Solon Lutz wrote: > I opted for two 12-disc raidz2. > Reasons were: Space is more important than performance. > > But performance is very poor - have a look at the iostats Try a RAIDZ with fewer drives - maybe four drives - and see if performance is better. 12 drives in RAIDZ2 may result in a stripe that is just too big for the writes your tests do. > sometimes nothing really seems > to happen for up to ten seconds, or very little data gets written. Might this be a problem > of the amd64 system having only 4GB of RAM? Any tuneable sysctls? Enabling prefetch didn't help...: Any copy-on-write filesystem is going to greatly benefit from very aggressive write-deferral and combining. The cache probably isn't flushed unless it needs to be, or until the next transaction group commit. "10 seconds" is roughly the usual interval for ZFS commits. This is likely not a problem.
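The point above about stripe width can be made concrete with a simplified model: ignoring parity, padding, and sector rounding, a raidz record is split across (ndisks - parity) data disks, so a wide vdev turns each record into many small per-disk I/Os:

```shell
# Per-disk data chunk for a 128 KiB ZFS record under a simplified raidz
# model (record size / number of data disks; real allocation also writes
# parity and rounds to sector boundaries).
awk 'BEGIN {
  printf "12-disk raidz2: %.1f KiB per disk\n", 128 / (12 - 2)
  printf "4-disk raidz1: %.1f KiB per disk\n", 128 / (4 - 1)
}'
# -> 12-disk raidz2: 12.8 KiB per disk
# -> 4-disk raidz1: 42.7 KiB per disk
```

Under this model the narrower vdev suggested above issues per-disk writes roughly three times larger, which is one plausible reason a four-drive RAIDZ could behave better for small-write workloads.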