Date: Wed, 22 Nov 2017 09:40:18 -0500
From: Youzhong Yang <youzhong@gmail.com>
To: Andriy Gapon <avg@freebsd.org>
Cc: Shiva Bhanujan <Shiva.Bhanujan@quorum.com>, "cem@freebsd.org" <cem@freebsd.org>,
 "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject: Re: zio_done panic in 10.3
Message-ID: <CADpNCvbpSTFjSHHVGV_=-LK27XLSzfZi_gzSa0v-Z=h_msOQuw@mail.gmail.com>
In-Reply-To: <41e2465d-e1b5-33ce-57b5-49bea6087d9a@FreeBSD.org>
References: <3A5A10BE32AC9E45B4A22F89FC90EC0701C367D3D1@QLEXC01.Quorum.local>
 <5021a016-9193-b626-78cf-54ffa3929e22@FreeBSD.org>
 <3A5A10BE32AC9E45B4A22F89FC90EC0701C367D562@QLEXC01.Quorum.local>
 <CAG6CVpVT=4iid4xi0yw3AJe4kbBNEGj6zVCKfozq_-8CgGYfag@mail.gmail.com>
 <3A5A10BE32AC9E45B4A22F89FC90EC0701C367D636@QLEXC01.Quorum.local>
 <41e2465d-e1b5-33ce-57b5-49bea6087d9a@FreeBSD.org>
Hi Andriy,

This is nice! I am 100% sure it's exactly the same issue I experienced and
then reported to the illumos mailing list. In all the crash dumps
zio->io_done = l2arc_read_done, so I thought the crash must be related to
L2ARC. Once I set secondarycache=metadata, the frequency of crashes went
from one every two days down to one per week.

I've been puzzled by what could cause a zio to be destroyed while it still
had a child zio. Your explanation definitely makes sense!

By the way, is there a FreeBSD bug report or an illumos bug number tracking
this issue? I would be more than happy to create one if needed, and also to
test your potential fix here in our environment.

Thanks,

--Youzhong

On Tue, Nov 21, 2017 at 3:46 PM, Andriy Gapon <avg@freebsd.org> wrote:
>
> On 21/11/2017 21:30, Shiva Bhanujan wrote:
> > it did get compressed to 0.5G - still too big to send via email. I did
> > send some more debug information by running kgdb on the core file to
> > Andriy, and I'm waiting for any analysis that he might provide.
>
> Yes, kgdb-over-email turned out to be a far more efficient compression :-)
> I already have an analysis based on the information provided by Shiva and
> by another user who has the same problem and contacted me privately.
> I am discussing possible ways to fix the problem with George Wilson, who
> was very kind to double-check the analysis, complete it and suggest
> possible fixes.
>
> A short version is that the dbuf_prefetch and dbuf_prefetch_indirect_done
> functions chain new zio-s under the same parent zio (the completion of one
> child zio may create another child zio). They do it using arc_read, which
> can create either a logical zio in most cases or a vdev zio for a read
> from a cache device (l2arc). zio_done() has a check for the completion of
> a parent zio's children, but that check is not completely safe and can be
> broken by the pattern that dbuf_prefetch can create. So, under some
> specific circumstances the parent zio may complete and get destroyed while
> there is still a child zio.
>
> I believe this problem to be rather rare, but there could be
> configurations and workloads where it's triggered more often.
> The problem does not happen if there are no cache devices.
>
> > From: Conrad Meyer [cem@freebsd.org]
> > Sent: Tuesday, November 21, 2017 9:04 AM
> > To: Shiva Bhanujan
> > Cc: Andriy Gapon; freebsd-fs@freebsd.org
> > Subject: Re: zio_done panic in 10.3
> >
> > Have you tried compressing it with e.g. xz or zstd?
>
> --
> Andriy Gapon
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
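The shape of the problem Andriy describes can be sketched with a deliberately
simplified, single-threaded toy model. The toy_zio names below are
hypothetical and this is not the real ZFS zio code; it only illustrates the
ordering that makes the parent's "all children finished?" check unsafe: a
child's done callback attaches a new child to the same parent (the way
dbuf_prefetch_indirect_done chains another read under the same parent), but
the parent has already concluded it has no children left and frees itself.

/*
 * A minimal sketch, not the real ZFS code: hypothetical toy_zio structures
 * modelling a parent I/O that tracks outstanding children, and children
 * whose done callbacks may attach new children to the same parent.
 */
#include <stdio.h>

struct toy_zio {
    struct toy_zio *parent;     /* parent notified when this child is done */
    int             children;   /* outstanding children of this zio */
    int             destroyed;  /* set once the zio has been "freed" */
};

static void
toy_add_child(struct toy_zio *pio, struct toy_zio *cio)
{
    cio->parent = pio;
    pio->children++;
}

static void
toy_parent_check(struct toy_zio *pio)
{
    /* The unsafe check: treat the parent as done once the count hits 0. */
    if (pio->children == 0) {
        printf("parent: no children left, destroying\n");
        pio->destroyed = 1;
    }
}

static void
toy_child_done(struct toy_zio *cio, struct toy_zio *next_child)
{
    struct toy_zio *pio = cio->parent;

    pio->children--;

    /* The parent evaluates its completion right after this child drops. */
    toy_parent_check(pio);

    /*
     * Too late: the done callback now chains a new child under the same
     * parent, the way a prefetch of the next indirect level would.
     */
    if (next_child != NULL) {
        toy_add_child(pio, next_child);
        printf("child: attached a new child to a parent with destroyed=%d\n",
            pio->destroyed);
    }
}

int
main(void)
{
    struct toy_zio parent = { NULL, 0, 0 };
    struct toy_zio child1 = { NULL, 0, 0 };
    struct toy_zio child2 = { NULL, 0, 0 };

    toy_add_child(&parent, &child1);

    /* child1 completes and, from its done callback, spawns child2. */
    toy_child_done(&child1, &child2);

    if (parent.destroyed && parent.children > 0)
        printf("BUG: parent destroyed with %d outstanding child(ren)\n",
            parent.children);
    return (0);
}

In the real pipeline the interleaving involves the children check in
zio_done() and a vdev child zio created for a read from an L2ARC cache
device, and it takes specific timing to hit; the toy collapses that into an
explicit call order so the dangling child is easy to see.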