From owner-freebsd-geom@FreeBSD.ORG Sun Jun 17 14:51:04 2007
Date: Sun, 17 Jun 2007 16:26:31 +0200
From: Ulf Lilleengen <lulf@stud.ntnu.no>
To: Giancarlo Rubio
Cc: freebsd-geom@freebsd.org
Message-ID: <20070617142631.GA33976@twoflower.idi.ntnu.no>
Subject: Re: Gvinum

On Sat, Jun 16, 2007 at 05:08:49PM -0300, Giancarlo Rubio wrote:
> Hi all,
>
> I'm using RAID 5 with 3 400GB disks, via gvinum.
>
> Disks:
> ad4: 381553MB at ata2-master SATA150
> ad6: 381554MB at ata3-master SATA150
> ad7: 381554MB at ata3-slave SATA150
>
> When I list the RAID via gvinum, it shows:
>
> servidor# gvinum
> gvinum -> list
> 3 drives:
> D raid53        State: up    /dev/ad7s1a    A: 0/381553 MB (0%)
> D raid52        State: up    /dev/ad6s1a    A: 0/381553 MB (0%)
> D raid51        State: up    /dev/ad4s1a    A: 0/381552 MB (0%)
>
> 1 volume:
> V data          State: up    Plexes: 1      Size: 745 GB
>
> 1 plex:
> P data.p0    R5 State: up    Subdisks: 3    Size: 745 GB
>
> 3 subdisks:
> S data.p0.s2    State: up    D: raid53      Size: 372 GB
> S data.p0.s1    State: up    D: raid52      Size: 372 GB
> S data.p0.s0    State: up    D: raid51      Size: 372 GB
> gvinum ->
>
> Filesystem          Size    Used   Avail  Capacity  Mounted on
> devfs               1.0K    1.0K      0B      100%  /dev
> /dev/ad0s1e         496M     14K    456M        0%  /tmp
> /dev/ad0s1f          69G    2.0G     61G        3%  /usr
> /dev/ad0s1d         1.4G     79M    1.2G        6%  /var
> /dev/gvinum/data    722G    722G    -58G      109%  /home

The difference is because of the filesystem layout and metadata; gvinum
shows you the "raw" size of the disk data, while df reports what the
filesystem makes available. As for the negative numbers, I think this
has something to do with the filesystem and not gvinum. But perhaps you
could send the output of 'gvinum printconfig'?

-- 
Ulf Lilleengen
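
For reference, a minimal sketch of how the quoted numbers can be made to
line up, consistent with the reply above. It assumes the stock UFS minfree
reserve of 8% (the actual value is not shown in the thread; on the box
itself 'tunefs -p /dev/gvinum/data' would confirm it), and takes the drive
sizes straight from the quoted dmesg/gvinum output.

#!/usr/bin/env python
# Rough reconciliation of the sizes quoted above. The 8% UFS minfree
# reserve is only an assumption here (the FreeBSD default).

MB = 1024 * 1024            # gvinum/dmesg report sizes in binary megabytes
GB = 1024 * MB

drives_mb = [381553, 381554, 381554]    # ad4, ad6, ad7 from the quoted output
subdisk = min(drives_mb) * MB           # subdisks are cut to the smallest drive

# RAID 5 across n disks stores n-1 disks' worth of data plus one disk of parity.
raw_plex = (len(drives_mb) - 1) * subdisk
print("gvinum plex size : %.0f GB" % (raw_plex / GB))   # ~745 GB, matches 'list'

# df reports the space left after newfs has taken its share for superblocks,
# cylinder-group bookkeeping and inodes.
df_size = 722 * GB                      # "Size" column for /dev/gvinum/data
print("fs overhead      : %.0f GB" % ((raw_plex - df_size) / GB))

# The negative "Avail" comes from the minfree reserve: df subtracts the
# reserved blocks from the free blocks, so a full filesystem goes negative
# and "Capacity" can exceed 100%.
minfree = 0.08                          # assumed default; check with tunefs -p
reserve = df_size * minfree
print("8%% reserve       : %.0f GB (df shows Avail -58G)" % (reserve / GB))

Run as-is it prints roughly 745 GB for the plex, about 23 GB of filesystem
overhead, and a reserve of about 58 GB, which is where the -58G/109% in the
df output would come from under that assumption.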