From: Willem Jan Withagen <wjw@withagen.nl>
Date: Tue, 30 Nov 2004 23:38:54 +0100 (CET)
To: Ceri Davies
Cc: stable@freebsd.org
Subject: Re: 4.10 -> 5.3 migration; what happens to vinum volumes?
List-Id: Production branch of FreeBSD source code (freebsd-stable@freebsd.org)

Ceri Davies wrote:
> On Thu, Nov 25, 2004 at 08:58:36PM +0000, Ceri Davies wrote:
>
>> I have a 4.10-STABLE machine that I want to migrate to 5.3-STABLE. Most
>> of the bases are covered, but I'm not sure what to expect for my vinum
>> volumes. I don't have anything esoteric (see attached config), but can
>> I just expect "sed -i.bak -e 's/vinum/gvinum/' /etc/fstab" to leave me
>> with working volumes?
>
> Should I take it that nobody knows, or that nobody wants to say?
>> # Vinum configuration of shrike.private.submonkey.net, saved at Thu Nov 25 20:54:01 2004
>> drive vinumdrive2 device /dev/ad0s1d
>> drive vinumdrive0 device /dev/ad0s1h
>> drive vinumdrive3 device /dev/ad1s1d
>> drive vinumdrive1 device /dev/ad1s1h
>> volume userhome
>> volume werehaus
>> plex name userhome.p0 org concat vol userhome
>> plex name userhome.p1 org concat vol userhome
>> plex name werehaus.p0 org striped 512s vol werehaus
>> sd name userhome.p0.s0 drive vinumdrive0 plex userhome.p0 len 52428535s driveoffset 265s plexoffset 0s
>> sd name userhome.p1.s0 drive vinumdrive1 plex userhome.p1 len 52428535s driveoffset 265s plexoffset 0s
>> sd name werehaus.p0.s0 drive vinumdrive2 plex werehaus.p0 len 67166720s driveoffset 265s plexoffset 0s
>> sd name werehaus.p0.s1 drive vinumdrive3 plex werehaus.p0 len 67166720s driveoffset 265s plexoffset 512s

From what I've seen on the lists, I'd be careful about going that route just yet, and only after a full backup of all the data. Let's say that gvinum still has some rough edges, though those only get smoothed out by people taking the leap. There have also been reports of success, so I guess the reason nobody replied is simply that there is no consensus. Also look in the GEOM@ archive for several threads on this topic.

For my main fileserver I've stopped at 5.1 for the moment. I need to test more before I go there, but I can't seem to find much time lately. It's the customers who pay the bills, not the playing around. :)

--WjW
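For what it's worth, the substitution Ceri proposed can be rehearsed on a scratch copy of the fstab before touching the live one. This is only a sketch: the volume names and mount points below are made-up examples, not the entries from his actual machine.

```shell
# Build a scratch fstab with example vinum entries (hypothetical, for
# illustration only -- not Ceri's real /etc/fstab).
printf '/dev/vinum/userhome\t/home\t\tufs\trw\t2\t2\n'  > fstab.work
printf '/dev/vinum/werehaus\t/werehaus\tufs\trw\t2\t2\n' >> fstab.work

# Same substitution as the proposed one-liner, keeping a .bak copy of
# the original so it is trivial to roll back.
sed -i.bak -e 's|/dev/vinum/|/dev/gvinum/|' fstab.work

cat fstab.work

# On 5.3 the GEOM vinum class also has to be loaded at boot, e.g. via
# /boot/loader.conf:
#   geom_vinum_load="YES"
```

After the edit, fstab.work references /dev/gvinum/ device nodes while fstab.work.bak still holds the old /dev/vinum/ paths.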