From: Steve O'Hara-Smith
To: freebsd-questions@freebsd.org
Date: Mon, 19 Jan 2015 16:00:56 +0000
Subject: ZFS and sparse file backed md devices

Hi,

I tried to follow the suggestions for converting a ZFS mirror (mine was a three-way mirror) to a RAIDZ (or in my case a RAIDZ2) when tight on discs, by creating a pool using sparse file backed md devices to stand in for the missing discs. Fortunately I experimented first with a dry run using nothing but sparse file backed md devices. I'm using FreeBSD 10.1-RELEASE-p3.
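For reference, the dry-run setup was along these lines (the pool name "test" is my own illustrative choice; the md unit numbers assume nothing else is attached):

```shell
# Four 2TB sparse backing files in /tmp -- no real space is
# allocated up front, only the apparent size is set.
for i in 0 1 2 3; do
    truncate -s 2T /tmp/disk$i
    mdconfig -a -t vnode -f /tmp/disk$i   # attaches as /dev/md$i
done

# This is the step that unexpectedly filled the backing files in:
zpool create test raidz2 md0 md1 md2 md3
```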
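A quick way to see whether a backing file has stayed sparse is to compare its apparent size with the blocks actually allocated (the path here is illustrative):

```shell
# Make a 100MB sparse file, then compare apparent vs allocated size.
f=$(mktemp)
truncate -s 100M "$f"
ls -l "$f"     # apparent size: 100MB
du -k "$f"     # allocated size: near zero while the file is sparse
rm -f "$f"
```

Once zpool create has run over such a file, du reports roughly the full apparent size.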
The first surprise came when I created four 2TB sparse file backed md devices using truncate and mdconfig and then tried to make a ZFS pool out of them. The sparse files became not sparse - or at least tried to, but of course there wasn't 8TB of space to use in /tmp, so it filled up and it took a reboot to kill the zpool create run.

The next experiment was more modest: four 128MB sparse files. Sure enough, once the zpool create finished they were four fully allocated 128MB files, no longer sparse. Creating a pool on real discs certainly doesn't write to every block - so why did my sparse files get filled in?

A little more experimenting revealed that I could offline the 128MB md devices one by one, destroy each device, truncate its backing file up to 2TB, recreate the device, wipe the ZFS metadata and replace the offlined device, all without filling in the sparse file. All was well until I did this to the fourth device and the pool tried to autoexpand - after a few seconds the box locked up and became completely unresponsive to everything except pings. Does anybody have any idea why?

At that point I decided the sparse file method was a non-starter and rebuilt my pool using four 1TB partitions on the two available drives, copied the data, and then replaced the partitions one by one with whole drives[1], eventually winding up where I wanted to be with my three drive mirror converted to a four drive RAIDZ2. Still, I am puzzled as to why the sparse file md device route no longer works.

[1] Well, single partitions covering most of each drive.

-- 
Steve O'Hara-Smith
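P.S. For completeness, the per-device replacement cycle I described was roughly the following (pool name "test" and paths are illustrative; I've used zpool labelclear here as one way to wipe the old metadata, though any method of clearing the labels would do):

```shell
# Swap one 128MB stand-in for a 2TB sparse file, one device at a time.
zpool offline test md3              # take the device out of the pool
mdconfig -d -u 3                    # destroy the md device
truncate -s 2T /tmp/disk3           # grow the backing file, still sparse
mdconfig -a -t vnode -f /tmp/disk3  # recreate the device
zpool labelclear -f /dev/md3        # wipe the old ZFS metadata
zpool replace test md3              # resilver onto the "new" device
```

It was on the fourth pass of this cycle, when the pool tried to autoexpand, that the box locked up.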