Date:      Thu, 7 Jul 2016 03:12:18 +0300
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        David Cross <dcrosstech@gmail.com>
Cc:        freebsd-stable@freebsd.org, freebsd-hackers@freebsd.org
Subject:   Re: Reproducible panic in FFS with softupdates and no journaling (10.3-RELEASE-pLATEST)
Message-ID:  <20160707001218.GI38613@kib.kiev.ua>
In-Reply-To: <CAM9edeOb0yUqaXbTMGBJVFqgJ++yaDr4tGV1TQ_UPOYmv4p2fw@mail.gmail.com>
References:  <CAM9edeOek_zqRPt-0vDMNMK9CH31yAeVPAirWVvcuUWy5xsm4A@mail.gmail.com> <CAM9edeN1Npc=cNth2gAk1XFLvar-jZqzxWX50pLQVxDusMrOVg@mail.gmail.com> <20160706151822.GC38613@kib.kiev.ua> <CAM9edeMDdjO6C2BRXBxDV-trUG5A0NEua+K0H_wERq7H4AR72g@mail.gmail.com> <CAM9edePfMxm26yYC=o10CGhRSDUHXTTNosFc_T89v4Pxt0JM0g@mail.gmail.com> <20160706173758.GF38613@kib.kiev.ua> <CAM9edeOb0yUqaXbTMGBJVFqgJ++yaDr4tGV1TQ_UPOYmv4p2fw@mail.gmail.com>

On Wed, Jul 06, 2016 at 02:21:20PM -0400, David Cross wrote:
> (kgdb) up 5
> #5  0xffffffff804aafa1 in brelse (bp=0xfffffe00f77457d0) at buf.h:428
> 428                     (*bioops.io_deallocate)(bp);
> Current language:  auto; currently minimal
> (kgdb) p/x *(struct buf *)0xfffffe00f77457d0
> $1 = {b_bufobj = 0xfffff80002e88480, b_bcount = 0x4000, b_caller1 = 0x0,
>   b_data = 0xfffffe00f857b000, b_error = 0x0, b_iocmd = 0x0, b_ioflags = 0x0,
>   b_iooffset = 0x0, b_resid = 0x0, b_iodone = 0x0, b_blkno = 0x115d6400,
>   b_offset = 0x0, b_bobufs = {tqe_next = 0x0, tqe_prev = 0xfffff80002e884d0},
>   b_vflags = 0x0, b_freelist = {tqe_next = 0xfffffe00f7745a28,
>     tqe_prev = 0xffffffff80c2afc0}, b_qindex = 0x0, b_flags = 0x20402800,
>   b_xflags = 0x2, b_lock = {lock_object = {lo_name = 0xffffffff8075030b,
>       lo_flags = 0x6730000, lo_data = 0x0, lo_witness = 0xfffffe0000602f00},
>     lk_lock = 0xfffff800022e8000, lk_exslpfail = 0x0, lk_timo = 0x0,
>     lk_pri = 0x60}, b_bufsize = 0x4000, b_runningbufspace = 0x0,
>   b_kvabase = 0xfffffe00f857b000, b_kvaalloc = 0x0, b_kvasize = 0x4000,
>   b_lblkno = 0x0, b_vp = 0xfffff80002e883b0, b_dirtyoff = 0x0,
>   b_dirtyend = 0x0, b_rcred = 0x0, b_wcred = 0x0, b_saveaddr = 0x0,
>   b_pager = {pg_reqpage = 0x0}, b_cluster = {cluster_head = {tqh_first = 0x0,
>       tqh_last = 0x0}, cluster_entry = {tqe_next = 0x0, tqe_prev = 0x0}},
>   b_pages = {0xfffff800b99b30b0, 0xfffff800b99b3118, 0xfffff800b99b3180,
>     0xfffff800b99b31e8, 0x0 <repeats 28 times>}, b_npages = 0x4,
>   b_dep = {lh_first = 0xfffff800023d8c00}, b_fsprivate1 = 0x0,
>   b_fsprivate2 = 0x0, b_fsprivate3 = 0x0, b_pin_count = 0x0}
> 
> 
> This is the freshly allocated buf that causes the panic; is this what is
> needed?  I "know" which vnode will cause the panic on vnlru cleanup, but I
> don't know how to walk the in-memory vnode list without a 'hook'.  That
> is, I can set up the kernel in a state that I know will panic when the
> vnode is cleaned up, I can force a panic 'early' (kill -9 1), and then I
> could get at that vnode, if only I could get the vnode list to walk.
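
You do not need a hook to walk the vnode list.  Every mount point keeps
its vnodes on mnt_nvnodelist, and the mounts themselves hang off the
global mountlist.  A sketch of a kgdb macro doing the walk, assuming
the stable/10 field names (check them against your sources):

define walk_vnodes
    set $mp = mountlist.tqh_first
    while ($mp != 0)
        set $vp = $mp->mnt_nvnodelist.tqh_first
        while ($vp != 0)
            # v_type and v_usecount should help you recognize your vnode
            printf "vnode %lx type %d use %d\n", (unsigned long)$vp, (int)$vp->v_type, $vp->v_usecount
            set $vp = $vp->v_nmntvnodes.tqe_next
        end
        set $mp = $mp->mnt_list.tqe_next
    end
end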

Was the state printed after the panic occurred?  What is strange is that
the buffer was not even tried for i/o, AFAIS.  Apart from the empty
b_error/b_iocmd, the b_lblkno is zero, which means that the buffer was
never allocated on the disk.
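
The non-empty b_dep list is presumably what sends brelse() into
(*bioops.io_deallocate)(bp), i.e. softdep_deallocate_dependencies(), in
your trace.  It would be useful to see which dependency type is still
attached; something like this, assuming the stable/10 struct worklist
layout:

(kgdb) p *(struct worklist *)0xfffff800023d8c00
(kgdb) p ((struct worklist *)0xfffff800023d8c00)->wk_type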

The b_blkno looks strangely high.  Can you print *(bp->b_vp)?  If it is a
UFS vnode, do p *(struct inode *)(<vnode>->v_data).  I am especially
interested in the vnode size.
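
In concrete kgdb syntax, from the brelse() frame where bp is in scope,
that should be something like:

(kgdb) p *bp->b_vp
(kgdb) p *(struct inode *)bp->b_vp->v_data
(kgdb) p ((struct inode *)bp->b_vp->v_data)->i_size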

Can you reproduce the problem on HEAD?


