From: Rich <rincebrain@gmail.com>
Date: Fri, 15 Dec 2023 01:41:22 -0500
Subject: Re: unusual ZFS issue
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: Lexi Winter, freebsd-fs@freebsd.org
List-Archive: https://lists.freebsd.org/archives/freebsd-fs
Native encryption decryption errors won't show up as r/w/c errors, but
will show up as "things with errors" in the status output.

That wouldn't be triggered by scrub noticing them, though, since scrub
doesn't decrypt things.

It's just the only thing I know of offhand where it'll decide there are
errors but the counters will be zero...
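
If you want to rule that in or out, something along these lines (a
rough sketch, untested here; "data" is the pool name from the report
below) will show whether any datasets use native encryption, while the
status output lists the affected objects even though the per-vdev
counters stay at zero:

  # which datasets (if any) use native encryption, and are keys loaded?
  zfs get -r -H -o name,property,value encryption,keystatus data

  # decryption failures land in this error list without ever bumping
  # the READ/WRITE/CKSUM counters
  zpool status -v data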

- Rich

On Thu, Dec 14, 2023 at 7:05 PM Miroslav Lachman <000.fbsd@quip.cz> wrote:

> On 14/12/2023 22:17, Lexi Winter wrote:
> > hi list,
> >
> > i've just hit this ZFS error:
> >
> > # zfs list -rt snapshot data/vm/media/disk1
> > cannot iterate filesystems: I/O error
> > NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
> > data/vm/media/disk1@autosnap_2023-12-13_12:00:00_hourly     0B      -  6.42G  -
> > data/vm/media/disk1@autosnap_2023-12-14_10:16:00_hourly     0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_11:17:00_hourly     0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_12:04:00_monthly    0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_12:15:00_hourly     0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_13:14:00_hourly     0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_14:38:00_hourly     0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_15:11:00_hourly     0B      -  6.46G  -
> > data/vm/media/disk1@autosnap_2023-12-14_17:12:00_hourly   316K      -  6.47G  -
> > data/vm/media/disk1@autosnap_2023-12-14_17:29:00_daily    2.70M     -  6.47G  -
> >
> > the pool itself also reports an error:
> >
> > # zpool status -v
> >   pool: data
> >  state: ONLINE
> > status: One or more devices has experienced an error resulting in data
> >         corruption.  Applications may be affected.
> > action: Restore the file in question if possible.  Otherwise restore the
> >         entire pool from backup.
> >    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
> >   scan: scrub in progress since Thu Dec 14 18:58:21 2023
> >         11.5T / 18.8T scanned at 1.46G/s, 6.25T / 18.8T issued at 809M/s
> >         0B repaired, 33.29% done, 04:30:20 to go
> > config:
> >
> >         NAME          STATE     READ WRITE CKSUM
> >         data          ONLINE       0     0     0
> >           raidz2-0    ONLINE       0     0     0
> >             da4p1     ONLINE       0     0     0
> >             da6p1     ONLINE       0     0     0
> >             da5p1     ONLINE       0     0     0
> >             da7p1     ONLINE       0     0     0
> >             da1p1     ONLINE       0     0     0
> >             da0p1     ONLINE       0     0     0
> >             da3p1     ONLINE       0     0     0
> >             da2p1     ONLINE       0     0     0
> >         logs
> >           mirror-2    ONLINE       0     0     0
> >             ada0p4    ONLINE       0     0     0
> >             ada1p4    ONLINE       0     0     0
> >         cache
> >           ada1p5      ONLINE       0     0     0
> >           ada0p5      ONLINE       0     0     0
> >
> > errors: Permanent errors have been detected in the following files:
> >
> > (it doesn't list any files, the output ends there.)
> >
> > my assumption is that this indicates some sort of metadata corruption
> > issue, but i can't find anything that might have caused it.  none of
> > the disks report any errors, and while all the disks are on the same
> > SAS controller, i would have expected controller errors to be flagged
> > as CKSUM errors.
> >
> > my best guess is that this might be caused by a CPU or memory issue,
> > but the system has ECC memory and hasn't reported any issues.
> >
> > - has anyone else encountered anything like this?
>
> I've never seen "cannot iterate filesystems: I/O error". Could it be
> that the system has too many snapshots / not enough memory to list them?
>
> But I have seen a pool report an error in an unknown file without
> showing any READ / WRITE / CKSUM errors. This is from my notes taken
> 10 years ago:
>
> =============================
> # zpool status -v
>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>         entire pool from backup.
>    see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ad0     ONLINE       0     0     0
>             ad1     ONLINE       0     0     0
>             ad2     ONLINE       0     0     0
>             ad3     ONLINE       0     0     0
>
> errors: Permanent errors have been detected in the following files:
>
>         <0x2da>:<0x258ab13>
> =============================
>
> As you can see, there are no CKSUM errors. There is something that
> should be a path to a filename: <0x2da>:<0x258ab13>
> Maybe it was an error in a snapshot which was already deleted? Just my
> guess. I ran a scrub on that pool, it finished without any error, and
> then the status of the pool was OK.
> A similar error reappeared after a month, and then again after about
> 6 months. The machine had ECC RAM. After these 3 incidents, I never
> saw it again. I still have this machine in working condition, just the
> disk drives were replaced from 4x 1TB to 4x 4TB and then 4x 8TB :)
>
> Kind regards
> Miroslav Lachman
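
P.S. On the <0x2da>:<0x258ab13> notation above: that is
<dataset ID>:<object ID> for an entry the error log can no longer
resolve to a path, which fits the deleted-snapshot theory. If the pool
is importable, a sketch like this (the dataset name is a placeholder,
and zdb output varies between versions) can sometimes put a name back
on it:

  # convert both IDs to decimal: 0x2da = 730, 0x258ab13 = 39365395
  printf '%d %d\n' 0x2da 0x258ab13

  # find the dataset whose ID is 730, then dump that object's metadata
  zdb -d tank | grep 'ID 730'
  zdb -dddd tank/SOME/DATASET 39365395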