Date:      Wed, 10 Nov 2010 12:03:00 +0100
From:      Ivan Voras <ivoras@freebsd.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: 8.1-RELEASE: ZFS data errors
Message-ID:  <ibdu54$fd1$1@dough.gmane.org>
In-Reply-To: <4CD98816.1020306@llnl.gov>
References:  <4CD84258.6090404@llnl.gov> <ibbauo$27m$1@dough.gmane.org>	<4CD986DC.1070401@llnl.gov> <4CD98816.1020306@llnl.gov>

On 11/09/10 18:42, Mike Carlson wrote:

>>      write# gstripe label -v -s 16384  data /dev/da2 /dev/da3 /dev/da4
>>      /dev/da5 /dev/da6 /dev/da7 /dev/da8

>>      write# df -h
>>      Filesystem            Size    Used   Avail Capacity  Mounted on
>>      /dev/da0s1a           1.7T     22G    1.6T     1%    /
>>      devfs                 1.0K    1.0K      0B   100%    /dev
>>      /dev/stripe/data      126T    4.0K    116T     0%    /mnt

>>      write# fsck /mnt
>>      fsck: Could not determine filesystem type
>>      write# fsck_ufs  /mnt
>>      ** /dev/stripe/data (NO WRITE)
>>      ** Last Mounted on /mnt
>>      ** Phase 1 - Check Blocks and Sizes
>>      Segmentation fault

>> So, the data appears to be okay. I wanted to run an fsck just to be
>> thorough, but that seg faulted. Otherwise, the data looks good.

Hmm, it probably tried to allocate a gazillion internal structures to
check a file system that size, and when malloc said no it didn't take
no for an answer.
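(If you want to confirm that guess, here is a rough sketch of one way
to check it, untested on a volume that big: run the check read-only
and let time(1) report the peak memory use, then compare that against
the per-process limits:

    # read-only check ("assume no"), with resource usage printed at exit
    /usr/bin/time -l fsck_ufs -n /dev/stripe/data
    # show the current per-process resource limits (datasize in particular)
    limits

If the maximum resident set size reported by time(1) is anywhere near
the datasize limit, that would fit the failed-allocation theory.)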

>> Question, why did you recommend using a smaller stripe size? Is that to
>> ensure a sample 1GB test file gets written across ALL disk members?

Yes, it's the surest way: MAXPHYS is 128 KiB, so a 16 KiB stripe size
(128 KiB / 8) means a single maximum-sized request spans 8 consecutive
stripe units and therefore touches every disk in the array.
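If you want to see it happen, something along these lines should do
(the file name is just an example):

    # write a 1 GB file in 128 KiB (MAXPHYS-sized) chunks on the UFS
    dd if=/dev/zero of=/mnt/stripetest bs=128k count=8192
    # in another terminal, watch per-disk activity
    gstat -f 'da[2-8]'

Every member da2 through da8 should show roughly the same amount of
I/O while the dd runs.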

Well, as far as I'm concerned this probably shows that there is
nothing wrong with the hardware or GEOM, though more testing, like
running a couple of bonnie++ rounds for a few hours on the UFS sitting
on the stripe volume, would make the case stronger; see the sketch
below.
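Something like this, assuming bonnie++ is installed from ports and
with the file size adjusted to at least twice your RAM (the 32 GB here
is only a placeholder):

    # bonnie++ on the UFS that sits on the stripe:
    # -d work directory, -s test file size in MB, -u user to run as
    bonnie++ -d /mnt -s 32768 -u root

and just repeat it in a loop for a few hours.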

Btw. what bandwidth do you get from this combination (gstripe + UFS)?
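Even a plain sequential dd to a file on it would give a useful
ballpark number (the file name and size are made up), since dd prints
bytes/sec when it finishes:

    # rough sequential write throughput through UFS on the stripe
    dd if=/dev/zero of=/mnt/ddtest bs=1m count=16384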

> Oh, I almost forgot, here is the ZFS version of that gstripe array:
> 
>    write# zpool create test01 /dev/stripe/data

>    write# zpool scrub
>    write# zpool status
>       pool: test01
>      state: ONLINE
>      scrub: scrub completed after 0h0m with 0 errors on Tue Nov  9
>    09:41:34 2010
>    config:
> 
>         NAME           STATE     READ WRITE CKSUM
>         test01         ONLINE       0     0     0
>           stripe/data  ONLINE       0     0     0

"scrub" verifies only written data, not the whole file system space
(that's why it finishes so fast), so it isn't really doing any load on
the array, but I agree that it looks more and more like there really is
an issue in ZFS.
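(To make a scrub actually exercise the disks you would first have to
put a decent amount of data on the pool, along these lines; the file
name and size are just examples, and zeroes are only good enough as
long as compression is off, which is the default:

    # fill the pool with some data, then scrub it and check the result
    dd if=/dev/zero of=/test01/scrubfood bs=1m count=10240
    zpool scrub test01
    zpool status -v test01

A pool created as "zpool create test01 ..." is mounted on /test01 by
default.)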




