Date: Tue, 08 Nov 2016 18:18:41 +0000
From: Jacques Fourie <jacques.fourie@gmail.com>
To: Jean-Marc.LACROIX@unice.fr, freebsd-fs@freebsd.org
Subject: Re: FreeBSD 11.0 + ZFSD
Message-ID: <CALX0vxC6a8ZyRsBLhs_SaNQp2m2MGxwP5KhYkWpxod4Po6k2qA@mail.gmail.com>
In-Reply-To: <5521603a-65ef-7b79-4fa8-4315e1d9c7f8@unice.fr>
References: <5521603a-65ef-7b79-4fa8-4315e1d9c7f8@unice.fr>

On Tue, Nov 8, 2016, 11:34 AM <Jean-Marc.LACROIX@unice.fr> wrote:

> Hello,
>
> We are testing the mechanism of ZFSD on the latest FreeBSD 11.0. In
> order to do that, we have created a VMware virtual machine with 5 disks:
> - 1 disk for the system OS
> - 3 disks for the raidz1 pool
> - 1 disk for the spare
>
> We modified /etc/rc.conf to have the daemon start at boot, and rebooted.
>
> Then (in the virtual machine parameters, to simulate a disk failure)
> we removed one disk of the pool.
> We can see that ZFSD proceeds to replace the UNAVAILABLE disk with the
> spare disk and completes the resilver.
> Then we removed (in the virtual machine parameters) a second disk of
> the pool.
> => the pool is marked as UNAVAIL; if we try, for example, to cd to a
> filesystem in the pool, it crashes completely, and we have to kill the
> terminal and reconnect to the server.
>
> But if we issue a "zpool clear zpool" command, the pool status changes
> from UNAVAIL to DEGRADED as shown below:
>
> root@pcmath228:~ # zpool status
>   pool: zpool
>  state: DEGRADED
> status: One or more devices has experienced an error resulting in data
>         corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
>         entire pool from backup.
>    see: http://illumos.org/msg/ZFS-8000-8A
>   scan: resilvered 328M in 0h0m with 0 errors on Tue Nov  8 16:24:50 2016
> config:
>
>         NAME                        STATE     READ WRITE CKSUM
>         zpool                       DEGRADED     0     0     0
>           raidz1-0                  DEGRADED     0     0     0
>             spare-0                 DEGRADED     0     0     0
>               16161479624068136764  REMOVED      0     0     0  was /dev/da1
>               da4                   ONLINE       0     0     0
>             7947336420112974466     REMOVED      0     0     0  was /dev/da2
>             da3                     ONLINE       0     0     0
>         spares
>           16893112194374399469      INUSE     was /dev/da4
>
> errors: 2 data errors, use '-v' for a list
>
>   pool: zroot
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zroot       ONLINE       0     0     0
>           da0p3     ONLINE       0     0     0
>
> errors: No known data errors
>
> Anyway, it says "One or more devices has experienced an error resulting
> in data corruption.", but a cd into a filesystem of the pool doesn't
> crash anymore.
>
> So the questions:
> - why do we have to issue a zpool clear in order to recover a "working"
> pool?
>
> - is it normal to have possible data corruption (as stated in the
> message), and what does it mean exactly?
> As we understood it, the pool should normally retain enough redundancy
> information to remain functional, and without possible data corruption,
> no?
>
> Thanks for your help,
> Best regards
> Jean-Marc & Roland
>
>
> --
> LACROIX Jean-Marc                         office: W612
> Administrateur Systèmes et Réseaux        LJAD
> phone: 04.92.07.62.51                     fax: 04.93.51.79.74
> email: jml@unice.fr
> Address: Laboratoire J.A.Dieudonne - UMR CNRS 7351
>          Universite de Nice Sophia-Antipolis
>          Parc Valrose - 06108 Nice Cedex 2 - France
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
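
[Editor's note: for reference, a minimal sketch, not from the original thread, of how the described test setup could be reproduced. The pool name "zpool", the disk names da1-da4, and the use of sysrc instead of a manual /etc/rc.conf edit are assumptions based on the status output quoted above.]

    root@pcmath228:~ # sysrc zfsd_enable="YES"        # assumption: equivalent to the rc.conf edit mentioned above
    root@pcmath228:~ # service zfsd start             # start the fault-management daemon now rather than rebooting
    root@pcmath228:~ # zpool create zpool raidz1 da1 da2 da3 spare da4   # 3-disk raidz1 plus one hot spare
    root@pcmath228:~ # zpool status zpool             # verify vdev layout before pulling disks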
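[Editor's note: likewise, a hedged sketch of the recovery steps being discussed, assuming a new disk appears as da5; that device name and the choice to replace by the old disk's GUID are illustrative only.]

    root@pcmath228:~ # zpool clear zpool                             # clear the fault state; UNAVAIL -> DEGRADED as reported
    root@pcmath228:~ # zpool status -v zpool                         # list the files affected by the "2 data errors"
    root@pcmath228:~ # zpool replace zpool 7947336420112974466 da5   # replace the second removed disk (da5 is hypothetical)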