Date: Mon, 7 Jun 2010 01:34:28 -0700
From: Jeremy Chadwick <freebsd@jdc.parodius.com>
To: Andriy Gapon <avg@icyb.net.ua>
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs i/o error, no driver error
Message-ID: <20100607083428.GA48419@icarus.home.lan>
In-Reply-To: <4C0CAABA.2010506@icyb.net.ua>
References: <4C0CAABA.2010506@icyb.net.ua>
On Mon, Jun 07, 2010 at 11:15:54AM +0300, Andriy Gapon wrote:
> During a recent zpool scrub one read error was detected and "128K repaired".
>
> In the system log I see the following message:
> ZFS: vdev I/O failure, zpool=tank
>   path=/dev/gptid/536c6f78-e4f3-11de-b9f8-001cc08221ff offset=284456910848
>   size=131072 error=5
>
> On the other hand, there are no other errors, nothing from geom, ahci, etc.
> Why would that happen? What kind of error could this be?

I believe this indicates silent data corruption[1], which ZFS can
auto-correct if the pool is a mirror or raidz (otherwise it can detect
the problem but not fix it).  This can happen for many reasons, but
tracking down the source is often difficult.  Usually it indicates that
the disk itself has some kind of problem (cache going bad, sector
remaps that did not happen or that failed, etc.).

What I'd need to determine the cause:

- Full "zpool status tank" output before the scrub
- Full "zpool status tank" output after the scrub
- Full "smartctl -a /dev/XXX" output for every disk that is a member of
  zpool "tank"

(A sketch of the exact commands I have in mind follows at the end of
this message.)

Also, out of curiosity, what made you decide to scrub the pool?  Was it
just on a whim?

[1]: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta
     http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data
     http://blogs.sun.com/bonwick/entry/raid_z

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
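P.S. Here is a rough sketch of how I'd gather all of the above in one
pass.  The ada0/ada1 names are only placeholders for whatever providers
actually back your gptid labels; "glabel status" will show the mapping
on your system.

    # Pool state; run once before and once again after the scrub
    zpool status -v tank

    # Map the gptid labels used by the pool back to physical providers
    glabel status

    # SMART data for each member disk (substitute your real device names)
    smartctl -a /dev/ada0
    smartctl -a /dev/ada1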