Date:      Tue, 1 Sep 1998 10:26:57 -0400 (EDT)
From:      "Robert D. Keys" <bsdbob@seedlab1.cropsci.ncsu.edu>
To:        grog@lemis.com (Greg Lehey)
Cc:        freebsd-questions@FreeBSD.ORG
Subject:   Re: Looking for logic and rationale of fs partition conventions.
Message-ID:  <199809011426.KAA15175@seedlab1.cropsci.ncsu.edu>
In-Reply-To: <19980901091129.U606@freebie.lemis.com> from Greg Lehey at "Sep 1, 98 09:11:29 am"

> On Monday, 31 August 1998 at 12:01:08 -0400, Robert D. Keys wrote:
> > Can anyone fill me in (and probably others too) as to the logic and
> > rationale of the fs partition naming conventions of BSD from 4.3,
> > Tahoe, Reno, 4.4, 4.4-Lite, and FreeBSD?
> 
> We tend to call them slices instead of partitions, to avoid some of
> the confusion that comes of their being stored inside Microsoft partitions.

That other OS is a dead issue for me, since I never mix it and the
real OS on the same box.  I get the partition nomenclature from the
SMM01-1.2 section, and only think of it in that regard.  As a rule I
slice the whole disk to unix.  It is a waste of a good unix box to
relegate useful space to that other thing, IMHO.

> > I understand ``a'' is the root partition, ``b'' the swap partition,
> > and ``c'' the entire disk.  After that, d/e/f/g/h sort of go every
> > which way, with no particular rhyme or reason.  ``g'' is often used
> > for the remaining /usr partition, but there does not seem to be much
> > clear reasoning as to why.  I would like to understand that rhyme
> > and reason.
> 
> You were right the first time.  There is neither rhyme nor reason.
> They're just 5 slices you can use any way you like.  In fact, slice
> 'a' is pretty much the same as any other except on the first disk,
> and 'b' is just a convention.  The only slice the system knows about
> and gives special treatment to is 'c'.
> 
> Having said that, 'd' used to imply the whole Microsoft partition, so
> FreeBSD used to (maybe still does?) start allocating additional slices
> with slice 'e'.  BSD/OS started at 'h' and worked backwards.  None of
> this is so important, though.

OK.  I was wondering if there had been some logic to it out of CSRG,
relating historically back to 4BSD, probably.  It must be the knowledge
of the ancients, mebbie.  The problem that I am running into is that
each variant of unix wants a different set of uses for the partitions
beyond ``c''.  I guess technically it does not matter on a particular
machine, but the time or two I have not gotten it right, it has led to
that dreaded ``fs not found'' error on boot, the root fs locked into
read-only mode, and the machine crying ``help''.  It really does need
to be standardized amongst the unix community, but that probably
won't happen.
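
Just so I have the lettering straight in my own head, a first-disk
label might look something like this (a disklabel-style sketch, with
sizes in 512-byte sectors that I just made up for illustration):

  #        size   offset    fstype
    a:   131072        0    4.2BSD    # root (64M)
    b:   262144   131072    swap      # swap (128M)
    c:  4194304        0    unused    # the whole disk, by convention
    e:   524288   393216    4.2BSD    # /var ('d' was once the DOS slice)
    f:  3276800   917504    4.2BSD    # /usr, the rest of the disk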

> > Also, what is the convention of fs splitting between drives?  The
> > table in the 4.4SMM (sec. 2.5.2) suggests some possibilities, but is
> > there any other rationale behind the choices?  How things might be
> > affected loadwise on singleuser workstations vs heavy servers, is
> > probably very different.  I would like to understand more of the
> > reasoning of these conventions.
> 
> The text in SMM goes into more detail: the main concern is to balance
> the load between the disks.  When this was written, a big disk was 1
> GB.  Now you can't get anything that small any more, and the rules
> have changed.  Most people only have one disk, maybe two.  Under those
> circumstances, you probably want as few slices as possible.

Most of what I run tends to be single-user workstations, where load
is not particularly a problem, as it might be on a large server.
Amongst my junker boxes, a 1 gig drive still is a goodly drive.....(:+}}...
One of these days I will find a box full of 10 gig platters.... (right).

Assuming the machine is devoted only to unix, that means one slice per
drive, and for us IDE users, mostly two drives max.  The problem then
becomes how best to divide up the drive(s) to maximize space usage.
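
For the second drive I have just been handing the whole thing over to
one file system, roughly (again a disklabel-style sketch, numbers
invented):

  #        size   offset    fstype
    c:  4194304        0    unused    # the whole disk, as always
    e:  4194304        0    4.2BSD    # one big fs for all the rest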

> I have always recommended one file system per disk, with the exception
> of the system disk, which can contain a root file system as well as
> the /usr file system (and swap space).  Others disagree and bring
> forward arguments for a /var file system, but that only makes sense if
> you're going to try to make /usr read-only, which almost nobody does.

Interesting.  I tend to agree with you, in my experience.  What I
usually do is put the root, swap, var, and usr file systems on hd0,
and then all my stuff on hd1.  I find that I am using the var fs mostly
as a temp dump for mail and spooling, and little else.  All my user
space is usually hung off a home or usr/home fs on hd1, so var does not
have to hold that anymore.  That begs the question ``Why var?'', and,
if so, ``How big a var fs?''.  Also, on a large drive (1 gig or
larger), I find I need to make the root larger than the traditional
16M from 4.4BSD or the usual 32M from FreeBSD.  I find around 64-100M
is needed for root on my 2 x 2 gig hd system.  Is there something in
table sizes or such somewhere that makes me need a root fs that big?
I have hit that root fs full message several times.
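
For reference, that layout on the 2 x 2 gig box comes out about like
this in /etc/fstab (wd-style IDE device names assumed; the sizes are
just the ones I described above):

  /dev/wd0a   /       ufs    rw   1 1   # the 64-100M root
  /dev/wd0b   none    swap   sw   0 0
  /dev/wd0e   /var    ufs    rw   2 2   # mail and spool dump, little else
  /dev/wd0f   /usr    ufs    rw   2 2
  /dev/wd1e   /home   ufs    rw   2 2   # all the user space on hd1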

> Greg

Thanks for the input, Greg..... as that movie robot was wont to say,
``No. 5 need input, need input!''.

Bob Keys




