Date: Fri, 8 Jul 2011 04:43:00 +0000
From: John <jwd@FreeBSD.org>
To: freebsd-fs@freebsd.org
Cc: Jason Hellenthal, Volodymyr Kostyrko
Subject: Re: everlasting log device
Message-ID: <20110708044300.GA2130@FreeBSD.org>
In-Reply-To: <20110707220019.GA79464@DataIX.net>

----- Jason Hellenthal's Original Message -----
>
>
> On Fri, Jul 08, 2011 at 12:22:01AM +0300, Volodymyr Kostyrko wrote:
> > Hi all.
> >
> > When I got my hands on an SSD device I tried to set up a log/cache
> > partition for my pools. Everything worked fine until one day I
> > realized that I had a better place for this SSD. I upgraded the
> > system from RELENG_8_2 to RELENG_8 and tried to remove the devices.
> > Of my two pools, one was successfully freed from its log/cache
> > devices, yet the other refuses to live without its log device:
> >
> > # zpool upgrade
> > This system is currently running ZFS pool version 28.
> >
> > All pools are formatted using this version.
> >
> > # zfs upgrade
> > This system is currently running ZFS filesystem version 5.
> >
> > All filesystems are formatted with the current version.
> >
> > # zpool status
> >   pool: utwig
> >  state: DEGRADED
> > status: One or more devices could not be opened.  Sufficient replicas
> >         exist for the pool to continue functioning in a degraded state.
> > action: Attach the missing device and online it using 'zpool online'.
> >    see: http://www.sun.com/msg/ZFS-8000-2Q
> >   scan: resilvered 0 in 0h21m with 0 errors on Sat Jul  2 15:07:35 2011
> > config:
> >
> >         NAME                                            STATE     READ WRITE CKSUM
> >         utwig                                           DEGRADED     0     0     0
> >           mirror-0                                      ONLINE       0     0     0
> >             gptid/ecb17af1-9119-11df-bb0b-00304f4e6d80  ONLINE       0     0     0
> >             gptid/03aed1f5-95a3-11df-bb0b-00304f4e6d80  ONLINE       0     0     0
> >         logs
> >           gptid/231b9002-a4a5-11e0-a114-3f386a87752c    UNAVAIL      0     0     0  cannot open
> >
> > errors: No known data errors
> >
> >   pool: utwig-sas
> >  state: ONLINE
> >   scan: none requested
> > config:
> >
> >         NAME          STATE     READ WRITE CKSUM
> >         utwig-sas     ONLINE       0     0     0
> >           mirror-0    ONLINE       0     0     0
> >             aacd1     ONLINE       0     0     0
> >             aacd2     ONLINE       0     0     0
> >
> > errors: No known data errors
> >
> > # zpool remove utwig gptid/231b9002-a4a5-11e0-a114-3f386a87752c && echo good
> > good
> >
> > And nothing changes - the system still needs that partition.
> >
> > One more weird thing.
> >
> > # zpool iostat -v utwig
> >                                                   capacity     operations    bandwidth
> > pool                                            alloc   free   read  write   read  write
> > ----------------------------------------------  -----  -----  -----  -----  -----  -----
> > utwig                                            284G   172G     41     70   272K   793K
> >   mirror                                         284G   172G     41     70   272K   793K
> >     gptid/ecb17af1-9119-11df-bb0b-00304f4e6d80      -      -      8     27   456K   794K
> >     gptid/03aed1f5-95a3-11df-bb0b-00304f4e6d80      -      -      8     27   459K   794K
> >   gptid/231b9002-a4a5-11e0-a114-3f386a87752c     148K  3,97G      0      0      0      0
> > ----------------------------------------------  -----  -----  -----  -----  -----  -----
> >
> > The system claims that this log device holds 148K of data. Is this
> > the size of unwritten data? The number stays the same when booting
> > into single-user mode and doesn't change at all.
> >
> > Can I remove this log device? Should I recreate the pool to get rid
> > of this behavior?
> >
>
> If you have the possibility to re-create the pool, then I'd definitely
> suggest it.
>
> If you remove this device (physically), your pool will not be operable.
> Unfortunately, something is still missing to allow SLOGs to be removed
> from a running pool; what that might be is beyond me at this time. You
> might try to export the pool, boot into single-user mode, reimport the
> pool, and try the removal procedure again, but I really don't think
> that will help you.
>
> Good luck.

I have the same issue. Easy to ignore most of the time. Really
annoying at others. Haven't figured out a way to fix/avoid it yet.

This is running a current system just a few days old. It's been
around for a while, though.

# zpool iostat -v
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
pool1        4.95G   971G      1      0  4.40K    803
  raidz1     4.95G   971G      1      0  4.40K    803
    da0          -      -      0      0  3.19K    287
    da1          -      -      0      0  3.17K    287
    da2          -      -      0      0  3.20K    279
    da3          -      -      0      0  3.17K    279
    da4          -      -      0      0  3.20K    287
    da5          -      -      0      0  3.20K    299
    da6          -      -      0      0  3.17K    289
  hast/md0        0   250M      0      0      0      0
  hast/md1       4K   250M      0      0      0      0

-John
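
For reference, the workaround Jason describes above, exporting the pool,
rebooting to single-user mode, reimporting, and retrying the removal,
would look roughly like this on Volodymyr's pool. This is only a sketch
of that suggested sequence, and, as noted in the thread, it may well
leave the log device in place:

# zpool export utwig
  (reboot into single-user mode)
# zpool import utwig
# zpool remove utwig gptid/231b9002-a4a5-11e0-a114-3f386a87752c
# zpool status utwig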
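
The other route raised in the thread, recreating the pool, could be
sketched with a recursive snapshot and zfs send/receive. The new pool
name and the <disk1>/<disk2> devices below are placeholders rather than
anything from the thread, and the mirror layout is only an assumption:

# zfs snapshot -r utwig@migrate
# zpool create utwig-new mirror <disk1> <disk2>
# zfs send -R utwig@migrate | zfs receive -F -d utwig-new

After verifying the copy, the old pool could be destroyed and the new
one taken over under the old name with zpool export followed by
zpool import utwig-new utwig.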