Date:      Tue, 8 Nov 2016 17:33:21 +0100
From:      Jean-Marc.LACROIX@unice.fr
To:        freebsd-fs@freebsd.org
Subject:   FreeBSD 11.0 + ZFSD
Message-ID:  <5521603a-65ef-7b79-4fa8-4315e1d9c7f8@unice.fr>

Hello,

     We are testing the ZFSD mechanism on the latest FreeBSD 11.0. To do
that, we created a VMware virtual machine with five disks (the pool
creation is sketched after the list):
- 1 disk for the OS
- 3 disks for the raidz1 pool
- 1 disk as a spare
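
For reference, the data pool was created roughly as follows (a minimal
sketch; the pool name and the da1-da4 device names are the ones visible
in the zpool status output further down, and may differ on another setup):

     # create a raidz1 pool over three disks, with one hot spare attached
     zpool create zpool raidz1 da1 da2 da3 spare da4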

We then modified /etc/rc.conf so that the daemon starts at boot, and rebooted.
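
The rc.conf change is the standard knob for zfsd(8):

     # /etc/rc.conf -- start the ZFS fault-management daemon at boot
     zfsd_enable="YES"

It can also be started without a reboot via "service zfsd start".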

Then, to simulate a disk failure, we removed one disk of the pool (in
the virtual machine parameters).
We could see that ZFSD replaced the UNAVAILABLE disk with the spare disk
and completed the resilver.
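
The replacement can be followed while it happens; zfsd reports its
actions through syslog, so (assuming the default syslog configuration)
something like the following shows the spare being activated and the
resilver progressing:

     # watch zfsd activate the spare, then check resilver progress
     grep zfsd /var/log/messages
     zpool status zpool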
Then we removed a second disk of the pool (again in the virtual machine
parameters)
=> the pool is marked UNAVAIL, and if we try, for example, to cd into a
filesystem of the pool, the command hangs completely; we have to kill
the terminal and reconnect to the server.

But if we issue a "zpool clear zpool" command, the pool changes state
from UNAVAIL to DEGRADED, as shown below:

root@pcmath228:~ # zpool status
   pool: zpool
  state: DEGRADED
status: One or more devices has experienced an error resulting in data
     corruption.  Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
     entire pool from backup.
    see: http://illumos.org/msg/ZFS-8000-8A
   scan: resilvered 328M in 0h0m with 0 errors on Tue Nov  8 16:24:50 2016
config:

     NAME                        STATE     READ WRITE CKSUM
     zpool                       DEGRADED     0     0     0
       raidz1-0                  DEGRADED     0     0     0
         spare-0                 DEGRADED     0     0     0
           16161479624068136764  REMOVED      0     0     0 was /dev/da1
           da4                   ONLINE       0     0     0
         7947336420112974466     REMOVED      0     0     0 was /dev/da2
         da3                     ONLINE       0     0     0
     spares
       16893112194374399469      INUSE     was /dev/da4

errors: 2 data errors, use '-v' for a list

   pool: zroot
  state: ONLINE
   scan: none requested
config:

     NAME        STATE     READ WRITE CKSUM
     zroot       ONLINE       0     0     0
       da0p3     ONLINE       0     0     0

errors: No known data errors

The status still says "One or more devices has experienced an error
resulting in data corruption.",
but a cd into a filesystem of the pool no longer crashes.
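
As an aside, the two data errors reported above can be listed by file
name with the -v option that the output itself mentions:

     # list the files affected by the reported data errors
     zpool status -v zpool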

So the questions:
- why do we have to issue a zpool clear in order to recover a "working"
pool?

- is it normal to have possible data corruption (as the message says),
and what does it mean exactly?
   As we understand it, once the spare finished resilvering, the pool
should have recovered enough redundancy to survive the loss of a second
disk, and therefore remain functional without data corruption, no?

Thanks for your help,
Best regards
Jean-Marc & Roland


-- 
LACROIX Jean-Marc                  office: W612
Systems and Network Administrator, LJAD
phone:  04.92.07.62.51             fax: 04.93.51.79.74
email:  jml@unice.fr
Address: Laboratoire J.A.Dieudonne - UMR CNRS 7351
          Universite de Nice Sophia-Antipolis
          Parc Valrose - 06108 Nice Cedex 2 - France



