From owner-freebsd-fs@FreeBSD.ORG Sat Nov 10 12:35:33 2012
From: "Ronald Klop" <ronald-freebsd8@klop.yi.org>
To: freebsd-fs@freebsd.org
Subject: Re: zfs remove vdev functionality
Date: Sat, 10 Nov 2012 13:35:29 +0100

On Fri, 09 Nov 2012 19:55:37 +0100, Rich wrote:

> I have not looked at the code to confirm this, but wouldn't the snapshots

What keeps you from looking at the code? The answers are in there.
(For concreteness, two toy sketches - one of the proposed evacuation
scheme and one of the snapshot objection - follow below the quoted
thread.)

Ronald.

> be built on the logical block addresses (e.g. the layer above the
> RAID/pool abstraction that maps to physical blocks) rather than
> physical block addresses?
>
> So as long as the logical block mapping is correct, the snapshot
> layer shouldn't care - it's only if you were reshaping how the
> presented logical block layer works that this would become painful.
>
> Modulo the abstraction of translation that the DDT does, of course -
> I have no clue how that voodoo is implemented under the covers.
>
> - Rich
>
> On Fri, Nov 9, 2012 at 8:17 AM, Johannes Totz wrote:
>
>> On 09/11/2012 11:44, Daniel Kalchev wrote:
>> >
>> > On 08.11.12 21:39, Nikolay Denev wrote:
>> >> On Nov 8, 2012, at 7:49 PM, Daniel Kalchev wrote:
>> >>
>> >>> I was thinking about how to implement vdev removal in ZFS and
>> >>> came to this idea:
>> >>>
>> >>> If we can have a per-vdev flag that prohibits new allocations
>> >>> on the vdev, but permits reads and frees - then we could mark
>> >>> the vdev we intend to remove from the zpool and issue a
>> >>> scrub-like command that will rewrite all blocks allocated from
>> >>> that particular vdev. Since ZFS is COW, this will effectively
>> >>> move all blocks off that "no write" vdev and we can then
>> >>> detach it.
>> >>>
>> >>> All this could be implemented with the new ZFS feature flags,
>> >>> so no version bumps etc. are necessary.
>> >>>
>> >>> Is there something I didn't think of?
>> >> I don't think this will be that easy, because of snapshots,
>> >> clones etc.
>> >
>> > The snapshots etc. are above the block allocator level, at which
>> > this should be implemented.
>>
>> Are you sure about this? It is exactly because snapshots are built
>> on the actual block pointer addresses that the whole BP rewrite is
>> so difficult.
>>
>> > I believe the BP rewrite is the same concept, except it was
>> > probably intended to increment the zpool version etc. My idea is
>> > to make this feature independent of the on-disk format. The only
>> > more involved change would be the "detach vdev" part.
>> >
>> > A side effect of this might be the ability to re-balance the data
>> > over all vdevs, which is an issue currently, especially if you
>> > add new vdevs to a reasonably full zpool.
>> >
>> > Daniel
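P.S. For concreteness, here is a minimal toy model of the proposed
evacuation scheme: flag one vdev as "no new allocations", then run a
scrub-like pass that rewrites every block still living on it, so that
copy-on-write allocation moves the data onto the remaining vdevs.
This is plain C and deliberately uses no ZFS code or APIs; every
structure, field and function name (struct vdev, struct blkptr,
alloc_vdev(), cow_rewrite()) is a made-up stand-in for illustration
only.

/*
 * Toy model of the proposed evacuation scheme - NOT ZFS code.
 * Every structure and name here is hypothetical. The idea: mark one
 * vdev as "no new allocations", then walk every allocated block and
 * rewrite it; copy-on-write allocation naturally lands the new copy
 * on some other vdev, after which the flagged vdev holds no live
 * data and can be detached.
 */
#include <stdio.h>
#include <stdbool.h>

#define NVDEVS  3
#define NBLOCKS 16

struct vdev {
    bool no_alloc;  /* the proposed per-vdev flag: reads/frees only */
    int  used;      /* live blocks currently on this vdev */
};

struct blkptr {
    int vdev_id;    /* which vdev holds the block */
};

static struct vdev vdevs[NVDEVS];

/* Allocator: pick any vdev that still accepts writes (round-robin). */
static int alloc_vdev(void)
{
    static int next;

    for (int i = 0; i < NVDEVS; i++) {
        int v = (next + i) % NVDEVS;
        if (!vdevs[v].no_alloc) {
            next = v + 1;
            return v;
        }
    }
    return -1;  /* unreachable while at least one vdev accepts writes */
}

/* COW rewrite: free the old copy, allocate a new one elsewhere. */
static void cow_rewrite(struct blkptr *bp)
{
    vdevs[bp->vdev_id].used--;
    bp->vdev_id = alloc_vdev();
    vdevs[bp->vdev_id].used++;
}

int main(void)
{
    struct blkptr blocks[NBLOCKS];

    /* Fill the pool across all vdevs. */
    for (int i = 0; i < NBLOCKS; i++) {
        blocks[i].vdev_id = alloc_vdev();
        vdevs[blocks[i].vdev_id].used++;
    }

    /* Step 1: flag vdev 1 so the allocator skips it from now on. */
    vdevs[1].no_alloc = true;

    /* Step 2: scrub-like pass - rewrite every block still on vdev 1. */
    for (int i = 0; i < NBLOCKS; i++)
        if (blocks[i].vdev_id == 1)
            cow_rewrite(&blocks[i]);

    /* Step 3: vdev 1 holds no live data and could now be detached. */
    for (int v = 0; v < NVDEVS; v++)
        printf("vdev %d: %d live blocks%s\n", v, vdevs[v].used,
            v == 1 && vdevs[v].used == 0 ? " (detachable)" : "");
    return 0;
}

The point of the model is that nothing above the allocator has to
change: the scrub-like pass only re-issues writes, and the
allocator's skipping of the flagged vdev does the rest.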
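And a second sketch, this time of the snapshot objection raised in
the thread: a snapshot freezes copies of block pointers that name
physical locations, so a pass that moves a shared block also has to
rewrite pointers inside nominally immutable snapshot metadata - which
is essentially why BP rewrite is considered hard. Again plain C with
no ZFS code; "struct dva" here is only a stand-in for a ZFS
device/offset pair.

/*
 * Toy illustration of the snapshot objection - again NOT ZFS code.
 * A snapshot freezes copies of block pointers that name physical
 * locations. If the evacuation pass moves a block that a snapshot
 * also references, the snapshot's frozen pointer still names the old
 * location; fixing that means rewriting nominally immutable snapshot
 * metadata, which is essentially the BP-rewrite problem.
 */
#include <stdio.h>

struct dva {            /* stand-in for a ZFS device/offset pair */
    int vdev_id;
    int offset;
};

int main(void)
{
    /* The live dataset and a snapshot both reference one block. */
    struct dva live = { .vdev_id = 1, .offset = 42 };
    struct dva snap = live;     /* frozen copy taken at snapshot time */

    /* Evacuation moves the block off vdev 1 and updates the live
     * pointer via normal COW... */
    live.vdev_id = 0;
    live.offset = 7;

    /* ...but the snapshot still points at the old physical address,
     * so vdev 1 cannot be detached until every such frozen pointer
     * is found and rewritten as well. */
    printf("live block on vdev %d, snapshot still references vdev %d\n",
        live.vdev_id, snap.vdev_id);
    return 0;
}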