Date: Sat, 04 May 2013 16:33:13 -0700
From: Mike Carlson <mike@bayphoto.com>
To: freebsd-fs@freebsd.org
Subject: Re: zfs issue - disappearing data
Message-ID: <51859AB9.6040804@bayphoto.com>
In-Reply-To: <51845054.3020302@bayphoto.com>
References: <5183F739.2040908@bayphoto.com> <51843D0E.2020907@platinum.linux.pl> <51845054.3020302@bayphoto.com>

Just to update the list and close this thread: this issue, as it
turns out, was not ZFS related, but was caused by the application
that reads/writes to the CIFS share.
Good to know about the leaking extended attributes, though, and thanks
of course for the great responses from Adam and Jeremy.
Thanks again,
Mike C
On 5/3/2013 5:03 PM, Mike Carlson wrote:
> Interesting.
>
> Is that why zdb shows so many objects?
>
> Is this a configuration mistake, and would it lead to data loss?
>
> Can I provide any additional information?
>
> Mike C
>
> On 5/3/2013 3:41 PM, Adam Nowacki wrote:
>> Looks like we have a leak with extended attributes:
>>
>> # zfs create -o mountpoint=/test root/test
>> # touch /test/file1
>> # setextattr user test abc /test/file1
>> # zdb root/test
>> Object  lvl   iblk   dblk  dsize  lsize   %full  type
>>      8    1    16K    512      0    512    0.00  ZFS plain file
>>      9    1    16K    512     1K    512  100.00  ZFS directory
>>     10    1    16K    512    512    512  100.00  ZFS plain file
>>
>> object 8 - the file,
>> object 9 - extended attributes directory,
>> object 10 - value of the 'test' extended attribute
>>
>> # rm /test/file1
>> # zdb root/test
>>
>> Object  lvl   iblk   dblk  dsize  lsize   %full  type
>>     10    1    16K    512    512    512  100.00  ZFS plain file
>>
>> objects 8 and 9 are deleted, object 10 is still there (leaked).
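>>
>> To check whether the leak is specific to unlinking a file that still
>> carries attributes, one could remove the attribute first (a quick
>> sketch, not verified here):
>>
>> # touch /test/file2
>> # setextattr user test abc /test/file2
>> # rmextattr user test /test/file2
>> # rm /test/file2
>> # zdb root/test
>>
>> If no extra plain-file object lingers after this, the leak is
>> confined to deleting files with attributes still attached.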
>>
>> On 2013-05-03 19:43, Mike Carlson wrote:
>>> We had a critical issue last night with a ZFS server that exports
>>> shares via Samba (3.5).
>>>
>>> system info:
>>> uname -a
>>>
>>> FreeBSD zfs-1.discdrive.bayphoto.com 9.1-RELEASE FreeBSD
>>> 9.1-RELEASE
>>> #0 r243825: Tue Dec 4 09:23:10 UTC 2012
>>> root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
>>>
>>> zpool history:
>>>
>>> History for 'data':
>>> 2013-02-25.17:11:37 zpool create data raidz /dev/gpt/disk1.nop
>>> /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop
>>> 2013-02-25.17:11:41 zpool add data raidz /dev/gpt/disk5.nop
>>> /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop
>>> 2013-02-25.17:11:47 zpool add data raidz /dev/gpt/disk9.nop
>>> /dev/gpt/disk10.nop /dev/gpt/disk11.nop /dev/gpt/disk12.nop
>>> 2013-02-25.17:11:53 zpool add data raidz /dev/gpt/disk13.nop
>>> /dev/gpt/disk14.nop /dev/gpt/disk15.nop /dev/gpt/disk16.nop
>>> 2013-02-25.17:11:57 zpool add data raidz /dev/gpt/disk17.nop
>>> /dev/gpt/disk18.nop /dev/gpt/disk19.nop /dev/gpt/disk20.nop
>>> 2013-02-25.17:12:02 zpool add data raidz /dev/gpt/disk21.nop
>>> /dev/gpt/disk22.nop /dev/gpt/disk23.nop /dev/gpt/disk24.nop
>>> 2013-02-25.17:12:08 zpool add data spare /dev/gpt/disk25.nop
>>> /dev/gpt/disk26.nop
>>> 2013-02-25.17:12:15 zpool add data log /dev/gpt/log.nop
>>> 2013-02-25.17:12:19 zfs set checksum=fletcher4 data
>>> 2013-02-25.17:12:22 zfs set compression=lzjb data
>>> 2013-02-25.17:12:25 zfs set aclmode=passthrough data
>>> 2013-02-25.17:12:30 zfs set aclinherit=passthrough data
>>> 2013-02-25.17:13:25 zpool export data
>>> 2013-02-25.17:15:33 zpool import -d /dev/gpt data
>>> 2013-03-01.12:31:58 zpool add data cache /dev/gpt/cache.nop
>>> 2013-03-15.12:22:22 zfs create data/XML_WORKFLOW
>>> 2013-03-27.12:05:42 zfs create data/IMAGEQUIX
>>> 2013-03-27.13:32:54 zfs create data/ROES_ORDERS
>>> 2013-03-27.13:32:59 zfs create data/ROES_PRINTABLES
>>> 2013-03-27.13:33:21 zfs destroy data/ROES_PRINTABLES
>>> 2013-03-27.13:33:26 zfs create data/ROES_PRINTABLE
>>>
>>> We had a file structure drop off:
>>>
>>> /data/XML_WORKFLOW/XML_ORDERS/
>>>
>>> around 5/2/2013 @ 17:00
>>>
>>> In that directory, there were a few thousand directories (containing
>>> images and a couple of metadata text/xml files).
>>>
>>> What is odd is that running du in the parent XML_WORKFLOW directory
>>> only reports ~130MB:
>>>
>>> # find . -type f |wc -l
>>> 86
>>> # du -sh .
>>> 130M .
>>>
>>>
>>> however, df reports 1.5GB:
>>>
>>> # df -h .
>>> Filesystem           Size   Used  Avail  Capacity  Mounted on
>>> data/XML_WORKFLOW     28T   1.5G    28T        0%  /data/XML_WORKFLOW
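>>>
>>> One way to narrow down where that 1.5G is charged (the dataset
>>> itself, snapshots, or child datasets) is the space breakdown, e.g.:
>>>
>>> # zfs list -o space data/XML_WORKFLOW
>>>
>>> If USEDDS accounts for nearly all of it and there are no snapshots,
>>> the space is held by objects in the live dataset that du can no
>>> longer reach through the directory tree.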
>>>
>>> zdb -d shows:
>>>
>>> # zdb -d data/XML_WORKFLOW
>>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>>> 212812 objects
>>>
>>> Digging further with zdb, the path is missing for most of those
>>> objects:
>>>
>>> # zdb -ddddd data/XML_WORKFLOW 635248
>>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>>> 212812 objects, rootbp DVA[0]=<5:b274264000:2000>
>>> DVA[1]=<0:b4d81a8000:2000> [L0 DMU objset] fletcher4 lzjb LE
>>> contiguous unique double size=800L/200P birth=1202311L/1202311P
>>> fill=212812
>>> cksum=16d24fb5aa:6c2e0aff6bc:129af90fe2eff:2612f938c5292b
>>>
>>> Object  lvl   iblk   dblk  dsize  lsize   %full  type
>>> 635248    1    16K    512  6.00K    512  100.00  ZFS plain file
>>>                                     168   bonus  System attributes
>>> dnode flags: USED_BYTES USERUSED_ACCOUNTED
>>> dnode maxblkid: 0
>>> path ???<object#635248>
>>> uid 11258
>>> gid 10513
>>> atime Thu May 2 17:31:26 2013
>>> mtime Thu May 2 17:31:26 2013
>>> ctime Thu May 2 17:31:26 2013
>>> crtime Thu May 2 17:13:58 2013
>>> gen 1197180
>>> mode 100600
>>> size 52
>>> parent 635247
>>> links 1
>>> pflags 40800000005
>>> Indirect blocks:
>>> 0 L0 3:a9da05a000:2000 200L/200P F=1 B=1197391/1197391
>>>
>>> segment [0000000000000000, 0000000000000200) size 512
>>>
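>>> To estimate how many of the 212812 objects are orphaned like this
>>> one, a rough sweep (slow at this scale, and assuming zdb prints the
>>> same ??? marker for every unresolvable path) would be:
>>>
>>> # zdb -dddd data/XML_WORKFLOW | grep -c '???<object#'
>>>
>>> Each hit is an object that is still allocated but no longer
>>> reachable from the directory tree, which would line up with the
>>> find/df mismatch above.
>>>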
>>> The application that writes to this volume runs on a Windows client.
>>> So far, it has exhibited identical behavior across two ZFS servers,
>>> but not on a generic Windows Server 2003 network share.
>>>
>>> The question is: what is happening to the data? Is it a Samba issue?
>>> Is it ZFS? I've enabled the Samba full_audit module to track file
>>> deletions, so I should have more information on that side.
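>>>
>>> For reference, the audit hook is roughly along these lines in
>>> smb.conf (the share name and syslog facility here are illustrative,
>>> not taken from the actual config):
>>>
>>> [XML_WORKFLOW]
>>>     vfs objects = full_audit
>>>     full_audit:prefix = %u|%I|%S
>>>     full_audit:success = unlink rmdir rename
>>>     full_audit:failure = none
>>>     full_audit:facility = LOCAL5
>>>     full_audit:priority = NOTICE
>>>
>>> With that in place, every delete and rename on the share gets logged
>>> with the user and client address, which should show whether the
>>> client application is issuing the deletes.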
>>>
>>> If anyone has seen similar behavior, please let me know.
>>>
>>> Mike C
>>
>
>