From: Adam Jacob Muller
Date: Wed, 19 Sep 2007 18:13:15 -0400
To: "Wilkinson, Alex"
Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org
Subject: Re: ZFS pool not working on boot

On Sep 19, 2007, at 4:25 AM, Wilkinson, Alex wrote:

> On Wed, Sep 19, 2007 at 03:24:25AM -0400, Adam Jacob Muller wrote:
>
>> I have a server with two ZFS pools: one is an internal raid0 using
>> 2 drives connected via ahc, the other is an external storage array
>> with 11 drives, also on ahc, using raidz. (This is a Dell 1650 and
>> a PV220S.) On reboot, the pools do not come online on their own;
>> both pools consistently show as failed.
>
> Make sure your hostid doesn't change. If it does, ZFS will fail upon
> bootstrap.
>
> -aW

No, the hostid is not changing; I just rebooted and reproduced the
problem. Also, from reading the ZFS docs, it seems the symptom of a
changed hostid would simply be that the pool needs to be imported
again, wouldn't it?

After another reboot, I see this:

# zpool status
  pool: tank
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        UNAVAIL      0     0     0  insufficient replicas
          da1       ONLINE       0     0     0
          da2       UNAVAIL      0     0     0  cannot open

... more output showing the other array with 11 drives is fine ...

# zpool export tank
# zpool import tank
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          da1       ONLINE       0     0     0
          da2       ONLINE       0     0     0

errors: No known data errors

(The 11-drive raidz is of course still fine.)
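
For what it's worth, until the underlying attach-at-boot problem is
understood, a rough workaround sketch (untested; it assumes the pool is
named "tank" as above and that an export/import cycle is always enough
to recover it, as it was here) could be run late in boot, e.g. from
/etc/rc.local:

    #!/bin/sh
    # Hypothetical workaround: if the striped pool comes up UNAVAIL after
    # boot, export and re-import it. "tank" is the pool from this thread.
    POOL=tank

    # Log the current hostid so a change between boots can be ruled out.
    echo "kern.hostid at boot: $(sysctl -n kern.hostid)"

    # "health" is a zpool property; anything other than ONLINE gets the
    # same export/import treatment that worked by hand above.
    if [ "$(zpool list -H -o health "$POOL" 2>/dev/null)" != "ONLINE" ]; then
            zpool export "$POOL"
            zpool import "$POOL"
    fi

This only papers over whatever is keeping da2 from attaching at boot; if
the hostid really were changing, zpool import would normally warn that
the pool was last accessed by another system rather than leave it
UNAVAIL like this.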