Date: Mon, 22 Jan 2007 05:15:37 -0800 (PST)
From: "R. B. Riddick" <arne_woerner@yahoo.com>
To: freebsd-fs@freebsd.org, CyberLeo Kitsana <cyberleo@cyberleo.net>, FreeBSD Geom <freebsd-geom@freebsd.org>
Subject: Re: geom_raid5 livelock?
Message-ID: <916065.8298.qm@web30309.mail.mud.yahoo.com>
--- "R. B. Riddick" <arne_woerner@yahoo.com> wrote: > It looks like, always the same consumer returns false data again and again in > this strange situation, although at the same time a dd to the same consumer > at the same offset returns data, that fits to the parity block. > > Does somebody here have an idea, why GEOM does that? > Could it be, that graid5 ruined somehow memory management? > Could it be, that GEOM is disturbed by simultaneous request? > I think, not graid5 ruined memory management, but <tadah> UFS changes memory areas while a read request, that has to use the same memory area, is not completed.</tadah> Hints: 1. Since I use for graid5's SAFEOP mode just graid5-private-memory for the parity check, no parity errors show up. 2. It was always -when I checked it- the use-data memory chunk, that had bad data. 3. That happened in a quite simple special case, too (I used just 2 disks, so that graid5 was like gmirror with 2 disks and round-robin balance). Further details see: http://perforce.freebsd.org/chv.cgi?CH=113310 Anyone here, who can validate my theory (it feels so _wrong_!)? :-) -Arne ____________________________________________________________________________________ Never miss an email again! Yahoo! Toolbar alerts you the instant new Mail arrives. http://tools.search.yahoo.com/toolbar/features/mail/