Date:      Fri, 1 Mar 2013 15:27:56 +0400
From:      Lev Serebryakov <lev@FreeBSD.org>
To:        Ivan Voras <ivoras@freebsd.org>, Don Lewis <truckman@FreeBSD.org>
Cc:        freebsd-fs@freebsd.org, freebsd-geom@freebsd.org
Subject:   Re: Unexpected SU+J inconsistency AGAIN -- please, don't shift topic to ZFS!
Message-ID:  <612776324.20130301152756@serebryakov.spb.ru>
In-Reply-To: <kgo2hn$f1s$1@ger.gmane.org>
References:  <1796551389.20130228120630@serebryakov.spb.ru> <1238720635.20130228123325@serebryakov.spb.ru> <1158712592.20130228141323@serebryakov.spb.ru> <CAPJF9w=CZg_%2BK7NHTGUhRLaMJWWNOG7zMipGMJL6w6NoNZpSXA@mail.gmail.com> <583012022.20130228143129@serebryakov.spb.ru> <kgnp1n$9mc$1@ger.gmane.org> <1502041051.20130228185647@serebryakov.spb.ru> <kgo2hn$f1s$1@ger.gmane.org>

Hello, Ivan.
You wrote on 28 February 2013 at 21:01:46:

>>   At one time, Kirk said that delayed writes are OK for SU as long as
>>  the bottom layer doesn't lie about operation completeness. geom_raid5
>>  can delay writes (in the hope that later writes will combine nicely
>>  and avoid a read-calculate-write cycle), but it never marks a BIO
>>  complete until it is really completed (the layers below geom_raid5
>>  return completion). So every BIO in the wait queue is "in flight"
>>  from the GEOM/VFS point of view. Maybe it is fatal for the journal :(
IV> It shouldn't be - it could be a bug.
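The completion rule described in the quote above can be sketched as a toy model. This is not actual GEOM or geom_raid5 code; all names here (`DelayedWriteQueue`, `lower_write`, etc.) are invented for illustration. The point it demonstrates is only the invariant being discussed: a queued write may be delayed and combined, but its completion is never reported upward until the lower layer has really finished it.

```python
# Toy model of the delayed-write rule: writes sit in a cache so later
# writes can combine with them, but a write's completion callback fires
# only after the backing store has actually performed the I/O.
# All names are hypothetical; this is not the GEOM API.

class DelayedWriteQueue:
    def __init__(self, lower_write):
        self.lower_write = lower_write  # performs the real I/O
        self.pending = []               # queued (offset, data, done) tuples

    def write(self, offset, data, done):
        # Delay the write, hoping later writes combine into a full
        # stripe and avoid a read-calculate-write cycle. Note that
        # `done` is NOT called here: the request stays "in flight".
        self.pending.append((offset, data, done))

    def flush(self):
        # Push everything down; only after the lower layer returns
        # do we report completion upward.
        for offset, data, done in self.pending:
            self.lower_write(offset, data)  # the I/O really completes here
            done(offset)                    # only now is it marked done
        self.pending.clear()

# Usage: completions are observed only after flush(), never at write() time.
backing = {}
completed = []
q = DelayedWriteQueue(lambda off, data: backing.__setitem__(off, data))
q.write(0, b"aa", completed.append)
q.write(2, b"bb", completed.append)
assert completed == []  # both writes are delayed, still "in flight"
q.flush()
assert completed == [0, 2]  # reported complete only after the real I/O
```

Under this rule, nothing in the queue is ever claimed complete before it is durable below, which is why the delayed writes alone should not confuse SU -- as long as that invariant actually holds at crash time.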
   I understand that this proves nothing, but I've tried to reproduce
 the "previous crash corrupts the FS in a journal-undetectable way"
 theory by killing a virtual system during massive writes to a
 geom_raid5-based FS (on virtual drives, unfortunately). I've done 15
 tries (as it is manual testing, it took about 1-1.5 hours total),
 and every time the FS was OK after a double fsck (first with the
 journal and then without it). Of course, there was MASSIVE loss of
 data, as the timeout and cache size in geom_raid5 were set very high
 (sometimes the FS ended up empty after unpacking 50% of an SVN mirror
 seed, crash, and check), but the FS was consistent every time!


-- 
// Black Lion AKA Lev Serebryakov <lev@FreeBSD.org>



