From owner-freebsd-fs@FreeBSD.ORG Tue May 14 19:08:10 2013
From: Henner Heck <henner.heck@web.de>
Date: Tue, 14 May 2013 20:43:21 +0200
To: freebsd-fs@freebsd.org
Subject: Long delays during boot and zpool/zfs commands on 9.1-RELEASE (possibly because of unavailable pool?)
Message-ID: <519285C9.8000306@web.de>
In-Reply-To: <518F4307.3060908@hub.org>

Hello all,

I set up a PC with FreeBSD 9.1-RELEASE and two ZFS pools. One is a mirror
from which FreeBSD boots (tough enough without getting "error 2"); the
other is a raidz2 for data. The disks for the raidz2 are encrypted with
geli and attached manually.
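[Editor's note: for readers reproducing this setup, the manual attach step can be sketched as a loop over the ten providers named in the zpool output below. This is a hypothetical dry-run sketch, not the poster's actual script; it assumes passphrase-based geli on the GPT labels titan01..titan10, and prints the commands instead of running them.]

```shell
# Dry-run sketch: attach all ten geli providers before using the pool.
# Labels titan01..titan10 are taken from the zpool status output; drop
# the leading 'echo' to actually run geli (it will prompt per disk).
for n in $(seq -w 1 10); do
    echo geli attach "/dev/gpt/titan${n}"
done
```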
I noticed that "zpool status" or "zfs list", run before attaching the
encrypted disks, waits about one minute before showing any output. When
the output finally appears it is as expected: the raidz2 pool is shown as
UNAVAIL and its datasets are not listed. Once all the disks are attached
with geli, the output appears immediately.

During boot there are two delays: one of about one minute after the
output "Setting hostid: 0x........", and one of two minutes after
"Mounting local file systems:.". Neither of these lines shows up in
dmesg, which ends with "Trying to mount root from zfs:zroot/ROOT []..."
shortly before. I suspect the boot delays are also caused by the
encrypted pool.

A different machine running FreeBSD 8.3-RELEASE has a delay of only about
3 seconds on "zpool status" with an encrypted pool, and its boot shows no
such anomalies.

Any idea how to get rid of these really long delays?

Regards,
Henner Heck

Some more info:

uname -a:
FreeBSD titan 9.1-RELEASE-p3 FreeBSD 9.1-RELEASE-p3 #0: Mon Apr 29 18:27:25 UTC 2013     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64

rc.conf:
hostname="titan"
keymap="german.iso.kbd"
ifconfig_em0="inet 192.168.1.21 netmask 255.255.255.0"
defaultrouter="192.168.1.1"
sshd_enable="YES"
ntpd_enable="YES"
powerd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="NO"
zfs_enable="YES"
geli_autodetach="NO"
samba_enable="YES"

loader.conf:
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT"
geom_eli_load="YES"
aio_load="YES"

"zpool status" before attaching (1 min delay):

  pool: titanpool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        titanpool                 UNAVAIL      0     0     0
          raidz2-0                UNAVAIL      0     0     0
            17524249743005198207  UNAVAIL      0     0     0  was /dev/gpt/titan01.eli
            12400954011483674215  UNAVAIL      0     0     0  was /dev/gpt/titan02.eli
            4127776205786896399   UNAVAIL      0     0     0  was /dev/gpt/titan03.eli
            13439834871331336588  UNAVAIL      0     0     0  was /dev/gpt/titan04.eli
            7734910905079966692   UNAVAIL      0     0     0  was /dev/gpt/titan05.eli
            17344716162596142682  UNAVAIL      0     0     0  was /dev/gpt/titan06.eli
            11943961185890967830  UNAVAIL      0     0     0  was /dev/gpt/titan07.eli
            13738344899380447289  UNAVAIL      0     0     0  was /dev/gpt/titan08.eli
            18205195048240167252  UNAVAIL      0     0     0  was /dev/gpt/titan09.eli
            12338698010126903234  UNAVAIL      0     0     0  was /dev/gpt/titan10.eli

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 13 04:42:28 2013
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors

"zpool status" after attaching (no delay):

  pool: titanpool
 state: ONLINE
  scan: scrub repaired 0 in 1h35m with 0 errors on Mon May 13 06:18:22 2013
config:

        NAME                 STATE     READ WRITE CKSUM
        titanpool            ONLINE       0     0     0
          raidz2-0           ONLINE       0     0     0
            gpt/titan01.eli  ONLINE       0     0     0
            gpt/titan02.eli  ONLINE       0     0     0
            gpt/titan03.eli  ONLINE       0     0     0
            gpt/titan04.eli  ONLINE       0     0     0
            gpt/titan05.eli  ONLINE       0     0     0
            gpt/titan06.eli  ONLINE       0     0     0
            gpt/titan07.eli  ONLINE       0     0     0
            gpt/titan08.eli  ONLINE       0     0     0
            gpt/titan09.eli  ONLINE       0     0     0
            gpt/titan10.eli  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 13 04:42:28 2013
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors