Date: Fri, 15 Dec 2017 23:44:45 +0000
From: Shiva Bhanujan <Shiva.Bhanujan@Quorum.com>
To: Youzhong Yang <youzhong@gmail.com>, Andriy Gapon <avg@freebsd.org>
Cc: "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>, Shiva Bhanujan <Shiva.Bhanujan@Quorum.com>
Subject: RE: zio_done panic in 10.3
Message-ID: <3A5A10BE32AC9E45B4A22F89FC90EC0701C3683E11@QLEXC01.Quorum.local>
In-Reply-To: <3A5A10BE32AC9E45B4A22F89FC90EC0701C3680A9C@QLEXC01.Quorum.local>
References: <3A5A10BE32AC9E45B4A22F89FC90EC0701C367D3D1@QLEXC01.Quorum.local>
 <5021a016-9193-b626-78cf-54ffa3929e22@FreeBSD.org>
 <3A5A10BE32AC9E45B4A22F89FC90EC0701C367D562@QLEXC01.Quorum.local>
 <CAG6CVpVT=4iid4xi0yw3AJe4kbBNEGj6zVCKfozq_-8CgGYfag@mail.gmail.com>
 <3A5A10BE32AC9E45B4A22F89FC90EC0701C367D636@QLEXC01.Quorum.local>
 <41e2465d-e1b5-33ce-57b5-49bea6087d9a@FreeBSD.org>
 <CADpNCvbpSTFjSHHVGV_=-LK27XLSzfZi_gzSa0v-Z=h_msOQuw@mail.gmail.com>
 <78d712d9-dda3-0411-262e-bb64f9ab46eb@FreeBSD.org>
 <CADpNCvawxn1wkaEjp_9TFTfMWtaLD20ei--gNTGfsTdA9ELqUg@mail.gmail.com>
 <3A5A10BE32AC9E45B4A22F89FC90EC0701C3680A9C@QLEXC01.Quorum.local>
I've updated both of the bug reports. I was hoping that setting
secondarycache=metadata on the destination ZFS dataset, where the snapshots
are being received, would confine the performance impact to the receive side
only. That isn't the case, and the crashes have started again. I was really
hoping that there could be a solution to this.

From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] on behalf
of Shiva Bhanujan [shiva.bhanujan@quorum.net]
Sent: Wednesday, November 29, 2017 5:32 AM
To: Youzhong Yang; Andriy Gapon
Cc: freebsd-fs@freebsd.org
Subject: RE: zio_done panic in 10.3

Hi Andriy,

Could you please let me know when a fix for this might be available?

Regards,
Shiva

From: Youzhong Yang [youzhong@gmail.com]
Sent: Wednesday, November 22, 2017 8:26 AM
To: Andriy Gapon
Cc: Shiva Bhanujan; cem@freebsd.org; freebsd-fs@freebsd.org
Subject: Re: zio_done panic in 10.3

Thanks Andriy. Two bug reports filed:

https://www.illumos.org/issues/8857
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=223803

On Wed, Nov 22, 2017 at 10:22 AM, Andriy Gapon <avg@freebsd.org> wrote:

On 22/11/2017 16:40, Youzhong Yang wrote:
> Hi Andriy,
>
> This is nice! I am 100% sure it's exactly the same issue I experienced and
> then reported to the illumos mailing list. In all the crash dumps
> zio->io_done = l2arc_read_done, so I thought the crash must be related to
> L2ARC. Once I set secondarycache=metadata, the frequency of crashes went
> from one per two days down to one per week. I've been puzzled by what could
> have caused a zio to be destroyed while there's still a child zio. Your
> explanation definitely makes sense!

Oh, I now recall seeing your report:
https://illumos.topicbox.com/groups/zfs/Tccd8b4463865899e
I remember that it raised my interest, but then I forgot about it and didn't
correlate it with the latest reports.

> By the way, is there a FreeBSD bug report or an illumos bug number tracking
> this issue? I would be more than happy to create one if needed, and also
> test your potential fix here in our environment.

I am not aware of any existing bug report. It would be great if you could
open one [or two :-)].
If you open an illumos issue, please also add George Wilson as a watcher.
I think that George is also interested in fixing this issue, and he knows the
relevant code better than I do.
Thank you!

> On Tue, Nov 21, 2017 at 3:46 PM, Andriy Gapon <avg@freebsd.org> wrote:
>
> On 21/11/2017 21:30, Shiva Bhanujan wrote:
> > It did get compressed to 0.5G, which is still too big to send via email.
> > I did send Andriy some more debug information from running kgdb on the
> > core file, and I'm waiting for any analysis that he might provide.
>
> Yes, kgdb-over-email turned out to be a far more efficient compression :-)
> I already have an analysis based on the information provided by Shiva and
> by another user who has the same problem and contacted me privately.
> I am discussing possible ways to fix the problem with George Wilson, who
> was very kind to double-check the analysis, complete it, and suggest
> possible fixes.
>
> A short version is that the dbuf_prefetch and dbuf_prefetch_indirect_done
> functions chain new zio-s under the same parent zio (the completion of one
> child zio may create another child zio). They do it using arc_read, which
> in most cases creates a logical zio, but creates a vdev zio for a read
> from a cache device (l2arc).
> zio_done() has a check for the completion of a parent zio's children, but
> that check is not completely safe and can be broken by the pattern that
> dbuf_prefetch creates. So, under some specific circumstances the parent
> zio may complete and get destroyed while it still has a child zio.
>
> I believe this problem to be rather rare, but there could be
> configurations and workloads where it's triggered more often.
> The problem does not happen if there are no cache devices.
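For reference, the check in question looks roughly like this (paraphrased
and simplified from zio.c; not the exact source):

    /*
     * zio_done(): "are all of my children done?" -- one check per
     * child type.  zio_wait_for_children() takes and drops
     * zio->io_lock around its own counter, so the sequence as a
     * whole is not atomic.
     */
    if (zio_wait_for_children(zio, ZIO_CHILD_VDEV, ZIO_WAIT_DONE) ||
        zio_wait_for_children(zio, ZIO_CHILD_GANG, ZIO_WAIT_DONE) ||
        zio_wait_for_children(zio, ZIO_CHILD_DDT, ZIO_WAIT_DONE) ||
        zio_wait_for_children(zio, ZIO_CHILD_LOGICAL, ZIO_WAIT_DONE))
        return (ZIO_PIPELINE_STOP);     /* a child is still pending */

    /*
     * The hole: while the later checks are running, the done callback
     * of the last outstanding child (l2arc_read_done ->
     * dbuf_prefetch_indirect_done -> arc_read) can attach a new child
     * of a type that was already checked, e.g. a vdev child for an
     * l2arc read.  All the counters inspected after that point read
     * zero, so zio_done() completes and destroys the parent while the
     * new child still references it.
     */

This would also explain why secondarycache=metadata (set with
"zfs set secondarycache=metadata <dataset>") only reduces the crash
frequency instead of eliminating it: the indirect blocks that dbuf_prefetch
reads are metadata, so they can still be served from the cache device.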
> > From: Conrad Meyer [cem@freebsd.org]
> > Sent: Tuesday, November 21, 2017 9:04 AM
> > To: Shiva Bhanujan
> > Cc: Andriy Gapon; freebsd-fs@freebsd.org
> > Subject: Re: zio_done panic in 10.3
> >
> > Have you tried compressing it with e.g. xz or zstd?

--
Andriy Gapon
_______________________________________________
freebsd-fs@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"