From: Randy Bush <randy@psg.com>
To: freebsd-fs <freebsd-fs@freebsd.org>
Date: Sun, 14 Mar 2010 12:26:45 +0900
Subject: degraded zfs slowdown

i lost a drive on a remote server.  i had to use tw_cli from single
user at boot time to remove it from the controller, as it was making
the controller unusable.

so now, while waiting for the replacement drive to ship in, i have

# df
Filesystem     1024-blocks      Used     Avail Capacity  Mounted on
/dev/twed0s1a       253678    198102     35282    85%    /
/dev/twed0s1h        63254      2414     55780     4%    /root
tank             154191872     16256 154175616     0%    /tank
tank/usr         173331328  19155712 154175616    11%    /usr
tank/usr/home    213014784  58839168 154175616    28%    /usr/home
tank/var         157336192   3160576 154175616     2%    /var
tank/var/spool   154475392    299776 154175616     0%    /var/spool
/dev/md0            126702       156    116410     0%    /tmp
devfs                    1         1         0   100%    /dev
procfs                   4         4         0   100%    /proc

and

# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are
        unaffected.
action: Determine if the device needs to be replaced, and clear the
        errors using 'zpool clear' or replace the device with
        'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            twed1   REMOVED      0     2     0
            twed2   ONLINE       0     0     0

errors: No known data errors

but the system is extremely soggy and hard to light.  do i need to do
some sort of remove at the zfs layer?

randy
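
[editor's note: for reference, a sketch of the zfs-layer steps the
status output points at.  this assumes twed1 is the failed mirror
member and that the replacement disk shows up under the same device
name; the actual node depends on how the 3ware controller enumerates
the new drive.]

# tell zfs to stop issuing i/o to the failed member, so the pool does
# not stall retrying a dead device
zpool offline tank twed1

# once the replacement is installed and visible to the controller,
# resilver it into the mirror (assumes it appears as twed1 again)
zpool replace tank twed1

# watch resilver progress
zpool status tank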