Date: Sun, 14 Mar 2010 12:26:45 +0900
From: Randy Bush <randy@psg.com>
To: freebsd-fs <freebsd-fs@freebsd.org>
Subject: degraded zfs slowdown
Message-ID: <m2vdczr3be.wl%randy@psg.com>
i lost a drive on a remote server.  i had to use tw_cli from single user
at boot time to remove it from the controller as it was making the
controller unusable.  so now, while waiting for the replacement drive to
ship in, i have

    # df
    Filesystem     1024-blocks     Used     Avail Capacity  Mounted on
    /dev/twed0s1a       253678   198102     35282    85%    /
    /dev/twed0s1h        63254     2414     55780     4%    /root
    tank             154191872    16256 154175616     0%    /tank
    tank/usr         173331328 19155712 154175616    11%    /usr
    tank/usr/home    213014784 58839168 154175616    28%    /usr/home
    tank/var         157336192  3160576 154175616     2%    /var
    tank/var/spool   154475392   299776 154175616     0%    /var/spool
    /dev/md0            126702      156    116410     0%    /tmp
    devfs                    1        1         0   100%    /dev
    procfs                   4        4         0   100%    /proc

and

    # zpool status
      pool: tank
     state: DEGRADED
    status: One or more devices has experienced an unrecoverable error.
            An attempt was made to correct the error.  Applications are
            unaffected.
    action: Determine if the device needs to be replaced, and clear the
            errors using 'zpool clear' or replace the device with
            'zpool replace'.
       see: http://www.sun.com/msg/ZFS-8000-9P
     scrub: none requested
    config:

            NAME        STATE     READ WRITE CKSUM
            tank        DEGRADED     0     0     0
              mirror    DEGRADED     0     0     0
                twed1   REMOVED      0     2     0
                twed2   ONLINE       0     0     0

    errors: No known data errors

but the system is extremely soggy and hard to light.  do i need to do
some sort of remove at the zfs layer?

randy
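[editorial note, not part of the thread: a hedged sketch of the kind of ZFS-layer action being asked about, using standard zpool subcommands; it assumes twed1 is the REMOVED device shown in the status output above and that the replacement will appear under the same name, neither of which the original message confirms.]

```shell
# tell zfs to stop issuing i/o to the failed half of the mirror, so the
# pool no longer blocks retrying a device the controller has dropped:
zpool offline tank twed1

# once the replacement disk is installed and visible (assumed here to
# come back as twed1), start a resilver onto it:
zpool replace tank twed1

# monitor resilver progress and pool health:
zpool status tank
```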