Date: Mon, 02 Jun 2014 17:37:31 -0700
From: Mike Carlson <mike@bayphoto.com>
To: Steven Hartland <killing@multiplay.co.uk>, freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <538D18CB.5020906@bayphoto.com>
In-Reply-To: <F445995D86AA44FB8497296E0D41AC8F@multiplay.co.uk>
References: <5388D64D.4030400@bayphoto.com>
 <EC2EA442-56FC-46B4-A1E2-97523029B7B3@mail.turbofuzz.com>
 <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com>
 <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk>
 <538C9CF3.6070208@bayphoto.com> <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk>
 <538CB3EA.9010807@bayphoto.com> <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk>
 <538CC16A.6060207@bayphoto.com> <F959477921CD4552A94BF932A55961F4@multiplay.co.uk>
 <538CDB7F.2060408@bayphoto.com> <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk>
 <538CE2B3.8090008@bayphoto.com> <85184EB23AA84607A360E601D03E1741@multiplay.co.uk>
 <538D0174.6000906@bayphoto.com> <F445995D86AA44FB8497296E0D41AC8F@multiplay.co.uk>
On 6/2/2014 5:29 PM, Steven Hartland wrote:
>
> ----- Original Message -----
> From: "Mike Carlson" <mike@bayphoto.com>
> To: "Steven Hartland" <killing@multiplay.co.uk>; <freebsd-fs@freebsd.org>
> Sent: Monday, June 02, 2014 11:57 PM
> Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
>
>
>> On 6/2/2014 2:15 PM, Steven Hartland wrote:
>>> ----- Original Message -----
>>> From: "Mike Carlson" <mike@bayphoto.com>
>>>
>>>>> That's the line I gathered it was on, but now I need to know what
>>>>> the value of vd is, so what you need to do is:
>>>>> print vd
>>>>>
>>>>> If that's valid then:
>>>>> print *vd
>>>>>
>>>> It reports:
>>>>
>>>> (kgdb) print *vd
>>>> No symbol "vd" in current context.
>>>
>>> Damn optimiser :(
>>>
>>>> Should I rebuild the kernel with additional options?
>>>
>>> Likely won't help, as a kernel built with zero optimisations tends
>>> to fail to build in my experience :(
>>>
>>> Can you try applying the attached patch to your src e.g.
>>> cd /usr/src
>>> patch < zfs-dsize-dva-check.patch
>>>
>>> Then rebuild, install the kernel and reproduce the issue again.
>>>
>>> Hopefully it will provide some more information on the cause, but
>>> I suspect you might be seeing the effect of having some corruption.
>>
>> Well, after building the kernel with your patch, installing it and
>> booting off of it, the system does not panic.
>>
>> It reports this when I mount the filesystem:
>>
>> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>>
>> Here are the results, I can now mount the file system!
>>
>> root@working-1:~ # zfs set canmount=on zroot/data/working
>> root@working-1:~ # zfs mount zroot/data/working
>> root@working-1:~ # df
>> Filesystem                 1K-blocks       Used      Avail Capacity  Mounted on
>> zroot                     2677363378    1207060 2676156318     0%    /
>> devfs                              1          1          0   100%    /dev
>> /dev/mfid10p1              253911544    2827824  230770800     1%    /dump
>> zroot/home                2676156506        188 2676156318     0%    /home
>> zroot/data                2676156389         71 2676156318     0%    /mnt/data
>> zroot/usr/ports/distfiles 2676246609      90291 2676156318     0%    /mnt/usr/ports/distfiles
>> zroot/usr/ports/packages  2676158702       2384 2676156318     0%    /mnt/usr/ports/packages
>> zroot/tmp                 2676156812        493 2676156318     0%    /tmp
>> zroot/usr                 2679746045    3589727 2676156318     0%    /usr
>> zroot/usr/ports           2676986896     830578 2676156318     0%    /usr/ports
>> zroot/usr/src             2676643553     487234 2676156318     0%    /usr/src
>> zroot/var                 2676650671     494353 2676156318     0%    /var
>> zroot/var/crash           2676156388         69 2676156318     0%    /var/crash
>> zroot/var/db              2677521200    1364882 2676156318     0%    /var/db
>> zroot/var/db/pkg          2676198058      41740 2676156318     0%    /var/db/pkg
>> zroot/var/empty           2676156387         68 2676156318     0%    /var/empty
>> zroot/var/log             2676168522      12203 2676156318     0%    /var/log
>> zroot/var/mail            2676157043        725 2676156318     0%    /var/mail
>> zroot/var/run             2676156508        190 2676156318     0%    /var/run
>> zroot/var/tmp             2676156389         71 2676156318     0%    /var/tmp
>> zroot/data/working        7664687468 4988531149 2676156318    65%    /mnt/data/working
>> root@working-1:~ # ls /mnt/data/working/
>> DONE_ORDERS DP2_CMD NEW_MULTI_TESTING PROCESS
>> RECYCLER XML_NOTIFICATIONS XML_REPORTS
>
> That does indeed seem to indicate some on-disk corruption.
>
> There are a number of cases in the code which have a similar check, but
> I'm afraid I don't know the implications of the corruption you're
> seeing; others may.
>
> The attached updated patch will enforce the safe panic in this case
> unless the sysctl vfs.zfs.recover is set to 1 (which can also now be
> done on the fly).
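With the updated patch, that would make the recovery override look roughly like the following (a sketch; the sysctl name vfs.zfs.recover is from Steven's message, and the loader.conf line assumes you want it set from boot):

```shell
# Flip it on the fly on the running (patched) kernel:
sysctl vfs.zfs.recover=1

# Or set it at boot time via the loader tunable:
echo 'vfs.zfs.recover=1' >> /boot/loader.conf
```

Leaving it at 0 keeps the safe panic behaviour; setting it to 1 downgrades the bad-DVA case back to a warning.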
>
> I'd recommend backing up the data off the pool and restoring it
> elsewhere.
>
> It would be interesting to see the output of the following command
> on your pool:
> zdb -uuumdC <pool>
>
> Regards
> Steve
I'm applying that patch and rebuilding the kernel again.
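For anyone following along, the full apply-and-rebuild cycle on a 10.0-RELEASE source tree would look roughly like this (assuming the GENERIC kernel config; substitute your own KERNCONF if you build a custom kernel):

```shell
cd /usr/src
patch < zfs-dsize-dva-check.patch    # apply Steven's patch to the tree
make buildkernel KERNCONF=GENERIC    # rebuild the kernel
make installkernel KERNCONF=GENERIC  # install it to /boot/kernel
shutdown -r now                      # reboot onto the patched kernel
```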
Here is the output from zdb -uuumdC:
zroot:
    version: 28
    name: 'zroot'
    state: 0
    txg: 13
    pool_guid: 9132288035431788388
    hostname: 'amnesia.discdrive.bayphoto.com'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9132288035431788388
        children[0]:
            type: 'raidz'
            id: 0
            guid: 15520162542638044402
            nparity: 2
            metaslab_array: 31
            metaslab_shift: 36
            ashift: 9
            asize: 9894744555520
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 4289437176706222104
                path: '/dev/gpt/disk0'
                phys_path: '/dev/gpt/disk0'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 5369387862706621015
                path: '/dev/gpt/disk1'
                phys_path: '/dev/gpt/disk1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 456749962069636782
                path: '/dev/gpt/disk2'
                phys_path: '/dev/gpt/disk2'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 3809413300177228462
                path: '/dev/gpt/disk3'
                phys_path: '/dev/gpt/disk3'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 4978694931676882497
                path: '/dev/gpt/disk4'
                phys_path: '/dev/gpt/disk4'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 17831739822150458220
                path: '/dev/gpt/disk5'
                phys_path: '/dev/gpt/disk5'
                whole_disk: 1
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 1286918567594965543
                path: '/dev/gpt/disk6'
                phys_path: '/dev/gpt/disk6'
                whole_disk: 1
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 7958718879588658810
                path: '/dev/gpt/disk7'
                phys_path: '/dev/gpt/disk7'
                whole_disk: 1
                create_txg: 4
            children[8]:
                type: 'disk'
                id: 8
                guid: 18392960683862755998
                path: '/dev/gpt/disk8'
                phys_path: '/dev/gpt/disk8'
                whole_disk: 1
                create_txg: 4
            children[9]:
                type: 'disk'
                id: 9
                guid: 13046629036569375198
                path: '/dev/gpt/disk9'
                phys_path: '/dev/gpt/disk9'
                whole_disk: 1
                create_txg: 4
            children[10]:
                type: 'disk'
                id: 10
                guid: 10604061156531251346
                path: '/dev/gpt/disk11'
                phys_path: '/dev/gpt/disk11'
                whole_disk: 1
                create_txg: 4
I find it strange that it says version 28 when it was upgraded to
version 5000.
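One guess (an assumption on my part, not confirmed here): zdb -C prints the cached pool config, and a config showing txg: 13 looks like a stale zpool.cache from around pool creation rather than the live on-disk state. Comparing it against the pool property should show the real version:

```shell
zpool get version zroot   # the on-disk pool version property
zdb -C zroot              # the config zdb read from the cache file
```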