Date:      Thu, 27 Jul 2006 16:42:20 -0400
From:      Mike Meyer <mwm-keyword-freebsdhackers2.e313df@mired.org>
To:        "Michael R. Wayne" <wayne@staff.msen.com>, Alex Zbyslaw <xfb52@dial.pipex.com>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: disklabel differences FreeBSD, DragonFly
Message-ID:  <17609.9516.506115.204334@bhuda.mired.org>
In-Reply-To: <20060727185721.GC25626@manor.msen.com>
References:  <20060727063936.GA1246@titan.klemm.apsfilter.org> <20060727122159.GB4217@britannica.bec.de> <20060727134948.GA3755@energistic.com> <20060727180412.GB48057@megan.kiwi-computer.com> <17609.1474.618423.970137@bhuda.mired.org> <44C910BE.9000108@dial.pipex.com> <20060727185721.GC25626@manor.msen.com>

In <20060727185721.GC25626@manor.msen.com>, Michael R. Wayne <wayne@staff.msen.com> typed:
> On Thu, Jul 27, 2006 at 02:28:18PM -0400, Mike Meyer wrote:
> > These days, the only technical reason I know of for having separate
> > mountpoints is because you want to run commands that work on
> > filesystems on the two parts with different arguments or under
> > different conditions.
> Or you want to run a bunch of jails.

You don't need mount points to run jails. In fact, the man page (on
5.5, anyway) provides examples that *break* if you put the jails on a
separate mount point.
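
(For reference, the 5.x jail(8) invocation is just

    jail path hostname ip-number command

and the path argument is an ordinary directory. A hypothetical

    jail /usr/local/jails/www www.example.org 192.0.2.10 /bin/sh

works the same whether or not /usr/local/jails sits on its own
filesystem.)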

> Or you want to give a bunch of users a big chunk of space and
> quotas are a bad fit.

That's a social reason, not a technical one. That used to be really
common as well, but these days "a bunch of users" tend to get their
own machine.

In <44C910BE.9000108@dial.pipex.com>, Alex Zbyslaw <xfb52@dial.pipex.com> typed:
> Mike Meyer wrote:
> >In <20060727180412.GB48057@megan.kiwi-computer.com>, Rick C. Petty <rick-freebsd@kiwi-computer.com> typed:
> >>On Thu, Jul 27, 2006 at 09:49:48AM -0400, Steve Ames wrote:
> >>>On Thu, Jul 27, 2006 at 02:21:59PM +0200, Joerg Sonnenberger wrote:
> >>>>DragonFly disklabels allow 16 entries by default, FreeBSD still limits
> >>>>it to 8. That's why you can't read it directly.
> >>>Are there plans to bump the default up from 8? I'm honestly torn on
> >>>this topic whenever I install a new system. On the one hand I like
> >>>having a lot of discrete mountpoints to control potential usage. On
> >>>the other hand with drive space being so inexpensive I sometimes
> >>>wonder if I need to bother and can get away with very few mountpoints.
> >>I would think that cheap disk space would mean larger disks which implies
> >>more mountpoints ???
> >Nope. One of the historical uses of partitions was to act as firewalls
> >between subsystems, so that subsystem A running out of space didn't
> >cause subsystem B to die for lack of space. This had the downside of
> >making it more likely that one of the two would run out of space
> >because the excess space from another subsystem could only be used by
> >it. With cheap disk space, you overallocate by enough to give you
> >plenty of warning before you have to deal with the issue. You can
> >safely share that space, and doing so means you have to "deal with the
> >issue" less often.
> You assume that "running out of space" happens over time, but with some 
> runaway process logging to a file, for example, the partition filling up 
> will still happen without you expecting it.  It might take a bit longer 
> with a big disk, but 20 minutes instead of 5 minutes isn't much 
> different in terms of warning.

Yes, I'm assuming that "running out of space" happens over
time. Sustained I/O speeds on modern hardware were around 100MB/sec
last time I looked. So a good, large disk - say a terabyte RAID (you
need RAID to get those performance numbers, so call it two 500GB disks
to keep it simple) - will take about three hours to fill *if you do
nothing but write to the disk*. A runaway process - especially one
generating log data - is normally doing other things that it's trying
to log information about.
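
Back of the envelope, using the numbers above (decimal units, and
assuming you really sustain the full 100MB/sec):

    $ echo '1000000 / 100 / 3600' | bc -l
    2.77777777777777777777

i.e. call it three hours of doing nothing but writing.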

A typical installation will have smaller, slower disks. A high-end
installation with faster disks will almost certainly have lots more
space as well. So it's perfectly reasonable to rely on disks to not
fill up in a matter of minutes.

In practice, log files are several orders of magnitude smaller than
the actual data dealt with by most applications. A few hundred
megabytes is more than adequate log space for most systems, with
runaway processes filling them in a day or so. So I give those systems
a gigabyte of log space, 'cause disk is *cheap*.
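
Same arithmetic: a runaway process that manages a sustained 10KB/sec
of logging (a rate I'm picking purely for illustration) needs

    $ echo '1000000 / 10 / 3600' | bc -l
    27.77777777777777777777

hours - a day and change - to chew through a gigabyte of /var.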

And yes, I separate /var from /home and /usr, but not because I'm
worried about them running out of space.
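
Concretely, the sort of layout I mean is just a few lines of
/etc/fstab; something like this sketch (the device names and
partition letters here are made up, use whatever your disklabel
actually has):

    /dev/ad0s1a   /      ufs   rw  1 1
    /dev/ad0s1b   none   swap  sw  0 0
    /dev/ad0s1d   /var   ufs   rw  2 2
    /dev/ad0s1e   /usr   ufs   rw  2 2
    /dev/ad0s1f   /home  ufs   rw  2 2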

> Fill /tmp or /var and many things can fail. Fill /home and it's just
> users who suffer a little but mail, demons etc. just carry on.

You're being inconsistent. Log files normally go on /var, so if you
fill that, your daemons may well fail, depending on how they react to
not being able to log messages. On the other hand, for some daemons it
makes sense to treat their data just like any other user data, so
they'd be on /home, and suddenly they're failing when /home fills up.

I had a system fall over for lack of disk space this month. It was an
old system that only had 16GB of disk for file storage, and the 300GB
drive upgrade had already been ordered. It's a four-core 3GHz Opteron
system, doing ETL processing as fast as its little chips can
cycle. It took *five hours* to fill up when half of the data started
collecting on it instead of being loaded into the database. If it had
had the disk upgrade, it would have taken a couple of days.
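
Working that out (decimal GB, to keep the numbers round), the data
was landing on disk at

    $ echo '16 * 1000 / (5 * 3600)' | bc -l
    .88888888888888888888

MB/sec - a couple of orders of magnitude below the sustained I/O
numbers above, even with the box running flat out.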

> A further reason to separate partitions is that dump works at the level 
> of a partition.  Different partitions may have very different backup 
> requirements, and for those of us without huge tape drives, partitioning 
> to a size that can be dumped on one tape makes life easier.

That's one of the technical reasons I mentioned in the part you
didn't quote.

> In some environments, fewer partitions may indeed be the new norm, but 
> in others it would not.

And in some environments, Windows is the norm. The question is - is
there still a good technical reason for doing that? The two primary
technical reasons I used to create partitions (firewalls for space and
damage) are both pretty much dead.

> Personally, I would like a limit of 16.  It would mean that I could fit 
> all my regular partitions inside a single slice, freeing up other slices 
> for, for example, experimenting with 64-bit, or -current, or whatever.  
> Bootable FreeBSD slices will be stuck at 4 for the foreseeable future - 
> extending the number of partitions within a slice frees up slices, which 
> are the really limited resource.

Why do you need lots of partitions for experimental systems?  And if
you need that, how often is it actually a win to give up the unlimited
number of logical volumes you get in an extended partition to get one
(1) extra bootable FreeBSD slice?  Especially if some of the systems
you want to experiment with aren't as limited as FreeBSD, and can
boot off of logical volumes? Frankly, if you're really worried about
bootable slices, you should be advocating giving FreeBSD the ability
to boot from a logical volume. IIRC, someone did that once, but never
got it into the tree.

	<mike
-- 
Mike Meyer <mwm@mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


