Date: Thu, 27 Feb 2020 10:11:59 +0200
From: Andriy Gapon <avg@FreeBSD.org>
To: Willem Jan Withagen <wjw@digiware.nl>, FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: Re: ZFS pools in "trouble"
Message-ID: <91e1cd09-b6b8-f107-537f-ae2755aba087@FreeBSD.org>
In-Reply-To: <71e1f22a-1261-67d9-e41d-0f326bf81469@digiware.nl>
References: <71e1f22a-1261-67d9-e41d-0f326bf81469@digiware.nl>
On 26/02/2020 19:09, Willem Jan Withagen wrote:
> Hi,
>
> I'm using my pools in perhaps a rather awkward way, as underlying storage
> for my ceph cluster: 1 disk per pool, with log and cache on SSD.
>
> For one reason or another one of the servers has crashed and does not
> really want to read several of the pools:
> ----
>   pool: osd_2
>  state: UNAVAIL
> Assertion failed: (reason == ZPOOL_STATUS_OK), file
> /usr/src/cddl/contrib/opensolaris/cmd/zpool/zpool_main.c, line 5098.
> Abort (core dumped)
> ----
>
> The code there is like:
> ----
> 	default:
> 		/*
> 		 * The remaining errors can't actually be generated, yet.
> 		 */
> 		assert(reason == ZPOOL_STATUS_OK);
> ----
> And this already on 3 disks.
> Running:
> FreeBSD 12.1-STABLE (GENERIC) #0 r355208M: Fri Nov 29 10:43:47 CET 2019
>
> Now this is a test cluster, so no harm done in terms of data loss.
> And the ceph cluster can probably rebuild everything if I do not lose too
> many disks.
>
> But the problem also lies in the fact that not all disks are recognized by
> the kernel, and not all disks end up mounted. So I need to remove a pool
> first to get more disks online.
>
> Is there anything I can do to get them back online?
> Or is this a lost cause?

Depends on what 'reason' is.  I mean the value of the variable.

-- 
Andriy Gapon
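One way to surface that value without a debugger is a quick local hack
against the switch quoted above. This is an untested sketch, assuming
zpool_main.c already pulls in the stdio it needs for its normal output:
----
	default:
		/*
		 * The remaining errors can't actually be generated, yet.
		 * Print the unexpected status before the assert aborts,
		 * so the value survives in the program output.
		 */
		(void) fprintf(stderr, "unexpected pool status: %d\n",
		    (int)reason);
		assert(reason == ZPOOL_STATUS_OK);
----
The printed number can then be matched against the zpool_status_t enum in
libzfs.h. Since zpool dumps core, the same value should also be readable
from the core file with a debugger (print reason in the frame that hit the
assert).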