Date: Wed, 11 Nov 1998 22:00:40 -0800
From: Mike Smith
To: Greg Lehey
Cc: hackers@FreeBSD.ORG
Subject: Re: [Vinum] Stupid benchmark: newfsstone
In-reply-to: Your message of "Wed, 11 Nov 1998 10:30:28 +1030." <19981111103028.L18183@freebie.lemis.com>
Message-Id: <199811120600.WAA08044@dingo.cdrom.com>

> On Monday, 9 November 1998 at 22:38:04 -0800, Mike Smith wrote:
> >
> > Just started playing with Vinum.  Gawd Greg, this thing seriously
> > needs a "smart" frontend to do the "simple" things.
>
> Any suggestions?  After seeing people just banging out RAID
> configurations with GUIs, I thought that this is probably a Bad
> Thing.  If you don't understand what you're doing, you shouldn't be
> doing it.

That's not entirely true.

> The four-layer concepts used by Veritas and Vinum have always been
> difficult to understand.  I'm trying to work out how to explain them
> better, but taking the Microsoft-style "don't worry, little boy, I'll
> do it all for you" approach is IMO not the right way.

I think it's a mistake to conceal all the workings, but it's also a
mistake to assume that the "common case" requires thrusting all of it
into the novice's face.  The "common case" for RAID applications seems
to be: "I have these disk units, and I want to make them into a RAID
volume".  So the required functionality is:

1) Input the disks to participate in the volume.
2) Input the RAID model to be used.

Step 2 should check the sizes of the disks selected in step 1, and
make it clear that you can only get striped or RAID 5 volumes if the
disks are all the same size.  If they're within 10% or so of each
other, it should probably ignore the excess on the larger drives.
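To make that concrete, here's a rough, untested sketch of the check I
have in mind (all names hypothetical, sizes in MB):

    #include <stdio.h>

    enum raid_model { CONCAT, STRIPED, RAID5 };

    /*
     * Refuse striped/RAID 5 layouts when the selected disks differ
     * by more than ~10%; within that, the excess on the larger
     * drives is simply wasted.
     */
    static int
    check_sizes(const long long *size, int ndisks, enum raid_model model)
    {
            long long min, max;
            int i;

            min = max = size[0];
            for (i = 1; i < ndisks; i++) {
                    if (size[i] < min)
                            min = size[i];
                    if (size[i] > max)
                            max = size[i];
            }
            if (model == CONCAT)
                    return (0);         /* any mix of sizes is fine */
            if (max - min > min / 10) { /* more than ~10% spread */
                    fprintf(stderr,
                        "striped/RAID 5 volumes want equal-sized disks\n");
                    return (-1);
            }
            return (0);                 /* waste the excess quietly */
    }

    int
    main(void)
    {
            long long disks[4] = { 4000, 4100, 4000, 4050 };    /* MB */

            return (check_sizes(disks, 4, STRIPED) == 0 ? 0 : 1);
    }

The frontend would then just size every subdisk to the smallest disk
and get on with it.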
> > 4 x 4GB disks (2x Atlas, 2x Grand Prix) on an ncr 53c875, slapped
> > together as a single volume.  (You want to mention building
> > filesystems in your manpages somewhere too - the '-v' option is
> > not immediately obvious.)
>
> As Tony observed, it's in vinum(4).  vinum(8) just describes the
> interface program.  Do you still think I need to add more
> information?  There's supposed to be a user's guide, when I get
> round to writing it.

I missed that, sorry.

> > There was an interesting symptom observed in striped mode, where
> > the disks seemed to have a binarily-weighted access pattern.
>
> Can you describe that in more detail?  Maybe I should consider
> relating stripe size to cylinder group size.

I'm wondering if it was just a beat pattern related to the stripe
size and cg sizes.  Basically, the first disk in the group of four
was always active.  The second would go inactive for a very short
period of time on a reasonably regular basis, the third for slightly
longer, and the fourth for longer still, with the intervals for the
third and fourth becoming progressively shorter.

> > It will get more interesting when I add two more 9GB drives and
> > four more 4GB units to the volume; especially as I haven't worked
> > out whether I can stripe the 9GB units separately and then
> > concatenate their plex with the plex containing the 4GB units; my
> > understanding is that all plexes in a volume contain copies of the
> > same data.
>
> Correct.  I need to think about how to do this, and whether it's
> worth the trouble.  It's straightforward with concatenated plexes,
> of course.

Yes, and it may be that activity will be spread out enough over the
volume that this won't be a problem.

> > Can you nest plexes?
>
> No.

That's somewhat unfortunate, but it probably contributes to code
simplicity.  8)

--
\\  Sometimes you're ahead,        \\  Mike Smith
\\  sometimes you're behind.       \\  mike@smith.net.au
\\  The race is long, and in the   \\  msmith@freebsd.org
\\  end it's only with yourself.   \\  msmith@cdrom.com