Date: Fri, 30 May 2014 12:04:45 -0700
From: Mike Carlson <mike@bayphoto.com>
To: freebsd-fs@FreeBSD.org
Subject: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <5388D64D.4030400@bayphoto.com>
Hey FS@

Over the weekend we upgraded one of our servers from 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from version 28 to 5000). On Tuesday afternoon the server suddenly rebooted (kernel panic), and as soon as it tried to remount all of its ZFS volumes it panicked again.

We have a ZFS-on-root install; the total raidz is around 9TB. The volume it is panicking on is zroot/data/working, which has (or had at this point) 4TB of data.

I can boot off of the 10.0-RELEASE USB image and mount:

    zroot
    zroot/usr
    zroot/var
    zroot/data

just not the large volume that had the 4TB of data. I've set the volume to readonly, and mounting it still causes a panic. I was able to snapshot the problematic volume and even do a send | receive, but that panics when the transfer is nearly complete (4.64TB out of 4.639TB).

Now, that data is not super critical; it's basically scratch storage where archives are extracted and shuffled around, then moved off to another location. We just like to keep about 60 days' worth in case we need to re-process something. The more important issue is: why did this happen, and what can we recover from situations like this? It looks pretty bad when any data is lost.
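Roughly, the sequence I tried from the rescue environment looked like the following (the "backup" destination pool name here is illustrative, not the real one; exact flags may have differed):

```shell
# Import the pool from the 10.0-RELEASE USB image without mounting anything,
# under an alternate root:
zpool import -o altroot=/mnt -N zroot

# Mount the datasets that still work:
zfs mount zroot
zfs mount zroot/usr
zfs mount zroot/var
zfs mount zroot/data

# Mark the problematic dataset read-only before trying to mount it
# (the mount still panicked):
zfs set readonly=on zroot/data/working
zfs mount zroot/data/working

# Snapshot it and try to copy the data off with send/receive
# (this panicked when the transfer was nearly complete):
zfs snapshot zroot/data/working@1
zfs send zroot/data/working@1 | zfs receive backup/working
```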
See URLs for the kernel panic message (used my phone to take a picture):

https://drive.google.com/file/d/0B0i2JyKe_ya2RnRYT3A1Qk5ldkk/edit?usp=sharing
https://drive.google.com/file/d/0B0i2JyKe_ya2YWNlbVl3MVFlTGc/edit?usp=sharing

Here is the zpool history for that storage:

History for 'zroot':
2013-11-19.11:31:37 zpool create -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache -f zroot raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9 /dev/gpt/disk11 spare /dev/gpt/disk12
2013-11-19.11:31:37 zpool export zroot
2013-11-19.11:31:38 zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot
2013-11-19.11:31:38 zpool set bootfs=zroot zroot
2013-11-19.11:31:43 zfs set checksum=fletcher4 zroot
2013-11-19.11:34:11 zfs create zroot/usr
2013-11-19.11:34:11 zfs create zroot/home
2013-11-19.11:34:11 zfs create zroot/var
2013-11-19.11:34:11 zfs create zroot/data
2013-11-19.11:34:11 zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp
2013-11-19.11:34:11 zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
2013-11-19.11:34:11 zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
2013-11-19.11:34:11 zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
2013-11-19.11:34:11 zfs create -o exec=off -o setuid=off zroot/var/db
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
2013-11-19.11:34:11 zfs create -o exec=off -o setuid=off zroot/var/empty
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
2013-11-19.11:34:11 zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
2013-11-19.11:34:11 zfs create -o exec=off -o setuid=off zroot/var/run
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
2013-11-19.11:34:11 zfs create -V 4G zroot/swap
2013-11-19.11:34:11 zfs set org.freebsd:swap=on zroot/swap
2013-11-19.11:34:11 zfs set checksum=off zroot/swap
2013-11-19.11:43:24 zfs set readonly=on zroot/var/empty
2013-11-19.11:43:40 zfs set mountpoint=legacy zroot
2013-11-19.11:43:50 zfs set mountpoint=/tmp zroot/tmp
2013-11-19.11:43:58 zfs set mountpoint=/usr zroot/usr
2013-11-19.11:44:05 zfs set mountpoint=/var zroot/var
2013-11-19.11:44:12 zfs set mountpoint=/home zroot/home
2013-11-19.11:44:18 zfs set mountpoint=/data zroot/data
2013-11-19.20:11:53 zfs create zroot/data/working
2013-11-19.20:17:59 zpool scrub zroot
2013-11-19.21:21:23 zfs set aclmode=passthrough zroot/data
2013-11-19.21:21:33 zfs set aclinherit=passthrough zroot/data
2013-11-21.00:58:57 zfs set compression=lzjb zroot/data/working
2014-05-24.14:24:40 zfs set readonly=off zroot/var/empty
2014-05-24.15:37:15 zpool upgrade zroot
2014-05-27.15:32:41 zfs set mountpoint=/mnt zroot
2014-05-27.15:33:55 zfs set mountpoint=/mnt/tmp zroot/tmp
2014-05-27.15:34:03 zfs set mountpoint=/mnt/var zroot/var
2014-05-27.15:34:13 zfs set mountpoint=/mnt/crash zroot/var/crash
2014-05-27.15:34:22 zfs set mountpoint=/mnt/db zroot/var/db
2014-05-27.15:34:35 zfs set mountpoint=/mnt/db/pkg zroot/var/db/pkg
2014-05-27.15:34:47 zfs set mountpoint=/mnt/db/empty zroot/var/empty
2014-05-27.15:35:22 zfs set mountpoint=/mnt/var/db zroot/var/db
2014-05-27.15:35:29 zfs set mountpoint=/mnt/var/db/pkg zroot/var/db/pkg
2014-05-27.15:35:38 zfs set mountpoint=/mnt/var/empty zroot/var/empty
2014-05-27.15:35:45 zfs set mountpoint=/mnt/var/log zroot/var/log
2014-05-27.15:35:54 zfs set mountpoint=/mnt/var/mail zroot/var/mail
2014-05-27.15:36:02 zfs set mountpoint=/mnt/var/run zroot/var/run
2014-05-27.15:36:09 zfs set mountpoint=/mnt/var/tmp zroot/var/tmp
2014-05-27.15:36:34 zfs set mountpoint=/mnt/usr zroot/usr
2014-05-27.15:36:40 zfs set mountpoint=/mnt/usr/ports zroot/usr/ports
2014-05-27.15:36:54 zfs set mountpoint=/mnt/usr/distfiles zroot/usr/ports/distfiles
2014-05-27.15:37:12 zfs set mountpoint=/mnt/usr/ports/packages zroot/usr/ports/packages
2014-05-27.15:37:20 zfs set mountpoint=/mnt/usr/ports/distfiles zroot/usr/ports/distfiles
2014-05-27.15:37:35 zfs set mountpoint=/mnt/usr/src zroot/usr/src
2014-05-27.15:37:53 zfs set mountpoint=/mnt/home zroot/home
2014-05-27.15:38:39 zfs set mountpoint=/mnt/data zroot/data
2014-05-27.15:38:47 zfs set mountpoint=/mnt/data/working zroot/data/working
2014-05-27.15:57:53 zpool scrub zroot
2014-05-28.09:34:16 zfs snapshot zroot/data/working@1
2014-05-28.18:55:12 zfs set readonly=on zroot/data/working

The full zfs attributes for that particular volume:

NAME                PROPERTY              VALUE                  SOURCE
zroot/data/working  type                  filesystem             -
zroot/data/working  creation              Tue Nov 19 20:11 2013  -
zroot/data/working  used                  4.64T                  -
zroot/data/working  available             2.49T                  -
zroot/data/working  referenced            4.64T                  -
zroot/data/working  compressratio         1.00x                  -
zroot/data/working  mounted               no                     -
zroot/data/working  quota                 none                   default
zroot/data/working  reservation           none                   default
zroot/data/working  recordsize            128K                   default
zroot/data/working  mountpoint            /mnt/data/working      local
zroot/data/working  sharenfs              off                    default
zroot/data/working  checksum              fletcher4              inherited from zroot
zroot/data/working  compression           lzjb                   local
zroot/data/working  atime                 on                     default
zroot/data/working  devices               on                     default
zroot/data/working  exec                  on                     default
zroot/data/working  setuid                on                     default
zroot/data/working  readonly              on                     local
zroot/data/working  jailed                off                    default
zroot/data/working  snapdir               hidden                 default
zroot/data/working  aclmode               passthrough            inherited from zroot/data
zroot/data/working  aclinherit            passthrough            inherited from zroot/data
zroot/data/working  canmount              on                     default
zroot/data/working  xattr                 on                     default
zroot/data/working  copies                1                      default
zroot/data/working  version               5                      -
zroot/data/working  utf8only              off                    -
zroot/data/working  normalization         none                   -
zroot/data/working  casesensitivity       sensitive              -
zroot/data/working  vscan                 off                    default
zroot/data/working  nbmand                off                    default
zroot/data/working  sharesmb              off                    default
zroot/data/working  refquota              none                   default
zroot/data/working  refreservation        none                   default
zroot/data/working  primarycache          all                    default
zroot/data/working  secondarycache        all                    default
zroot/data/working  usedbysnapshots       0                      -
zroot/data/working  usedbydataset         4.64T                  -
zroot/data/working  usedbychildren        0                      -
zroot/data/working  usedbyrefreservation  0                      -
zroot/data/working  logbias               latency                default
zroot/data/working  dedup                 off                    default
zroot/data/working  mlslabel                                     -
zroot/data/working  sync                  standard               default
zroot/data/working  refcompressratio      1.00x                  -
zroot/data/working  written               0                      -
zroot/data/working  logicalused           4.65T                  -
zroot/data/working  logicalreferenced     4.65T                  -

Was this a ZFS 10.0-RELEASE issue? Or did our Dell PERC H710 controller just happen to become a problem, with the timing being coincidental? Any pointers on either restoring the data or preventing this in the future would be great.

Mike C
