Date:      Wed, 7 Oct 1998 17:50:38 -0400
From:      "Christopher G. Petrilli" <petrilli@amber.org>
To:        freebsd-smp@FreeBSD.ORG
Subject:   Re: hw platform Q - what's a good smp choice these days?
Message-ID:  <19981007175038.58996@amber.org>
In-Reply-To: <32BABEF63EAED111B2C5204C4F4F502017F9@WGP01>; from James Mansion on Wed, Oct 07, 1998 at 09:25:23PM +0100
References:  <32BABEF63EAED111B2C5204C4F4F502017F9@WGP01>

On Wed, Oct 07, 1998 at 09:25:23PM +0100, James Mansion wrote:
> > everything from mainframes to Sun Starfires to little embedded
> > machines, and ALWAYS they are I/O bound.  When you think you've fixed
> 
> No they aren't.

Well, I guess I misphrased that: in tuning you always start with the
I/O side, not the CPU side, because in most cases heavily loaded
systems respond slowly not from CPU problems but from unpredictable
I/O subsystems.  Hence large systems (mini/mainframe) use large
numbers of processors to handle the I/O side, and drives are often
heavily multi-ported (something that gets really hairy).

> Maybe your experience is that they are, but mine is not.  It may be that
> I am spoilt having worked primarily in investment banks.  (Let's be
> clear: I am objecting to your use of 'ALWAYS'.)
> 
> My observation has been that systems that have been sensibly
> configured are net IO bound or CPU bound, but rarely disk transfer
> bound.  Sometimes seek bound, I grant.

Net I/O bound is still I/O bound; it's not always disk.  But "sensibly
configured" describes very few systems, and therefore this discussion.
Most people are obsessed with megahertz, and confuse performance inside
the cache with throughput through the entire system.  I/O can also be
limited by memory bandwidth---which raises the question of why memory
bank interleaving isn't more popular in PCs.

> My experience is mostly with trading systems for derivatives.  These
> tend to behave more like decision support systems than transaction
> processing systems.  On the Sybase systems I'm most familiar with,
> CPU becomes the limiting factor quickly - even on multi-CPU
> 'big iron' from Sun and HP.  Where this hasn't been the case, some
> minor tuning to avoid table scans and help index coverage has been
> enough to bring the working set well within affordable RAM.

I suspect these look nothing like PCs with IDE drives though :-) The
point is that MOST servers are not configured correctly.  The slow
movement in the PC industry toward I2O is promising, but it'd be nice if
you could offload the entire file system to a processor---though that
might be dreaming at this point.  The CPU shouldn't be involved in the
calculation of i-nodes; it's irrelevant to a CPU's job.

> On web servers, it is usually the case that you are either bound on your
> ISP connection, or are CPU bound on CGI/ISAPI/whatever running dynamic
> services.

Alas, web servers are a weird situation because the Internet as a whole
is so obscenely slow in the grand scheme of things.

> I agree that cutting seeks by splitting IOs is a good idea, and any
> database tuning book will tell you that.  But for any budget, you can
> afford way more IDEs, and if you're performance bound on your swap
> device then you really have screwed up the way the system budget was
> spent.

Once you start adding large numbers of drives, though, the cost
effectiveness of IDE goes away... you can only practically handle one
drive per controller channel (slave drives are a kludge and performance
limiting), and you have to keep adding controllers to the machine.
Given that there are currently no "massive" controllers for IDE, you
MIGHT be able to control 4x2 + 2 drives (two channels per card in four
PCI slots, plus the two onboard channels), which is a rather small
system in many cases.  The same drives could be handled by a single
SCSI controller, probably with roughly the same bandwidth and
substantially less hassle, without sucking up all the slots.
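To put rough numbers on the drive-count arithmetic above, here's a
quick sketch (Python; the per-card, onboard, and SCSI figures are my
assumed 1998-era typicals, not from the post):

```python
# Rough drive-count comparison for the IDE-vs-SCSI point above.
# Assumptions (mine): each add-in IDE card gives two channels, the
# motherboard gives two more, and only the master drive per channel
# is used, since slave drives hurt performance.

PCI_SLOTS = 4            # typical desktop board of the era
CHANNELS_PER_CARD = 2    # primary + secondary on an add-in IDE card
ONBOARD_CHANNELS = 2     # motherboard primary + secondary
DRIVES_PER_CHANNEL = 1   # masters only; slaves are a kludge

ide_drives = (PCI_SLOTS * CHANNELS_PER_CARD + ONBOARD_CHANNELS) \
             * DRIVES_PER_CHANNEL
print(ide_drives)        # 10 drives, with every PCI slot consumed

# A single wide SCSI controller of the day could address up to 15
# devices on one bus, using one slot.
SCSI_DEVICES_PER_BUS = 15
print(SCSI_DEVICES_PER_BUS)
```

So the IDE route tops out around ten drives with no slots left over,
while one SCSI card gets you comparable capacity from a single slot.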

> > Note that anything that is heavily disk bound, i.e. database, or news
> > servers, should put even more emphasis on the I/O side of the
> > house---something unfortunately that PCs are still pretty miserable at.
> 
> Not sure this is a wise thing to say.  FreeBSD does rather well at
> controlling IO, and you have a hard time getting mid-range Suns and
> HPs to hit disks as hard as a PC will, simply because it's so much
> easier to get 80MB/s subsystems and disks for your PC.

Specs and reality are very different things :-) Most PCs have only one
PCI bus, and it's trivial to saturate.  Most PCs have dinky memory
subsystems (do the math on how fast a PII/450 is vs. SDRAM's
bandwidth).
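That back-of-the-envelope calculation goes something like this (the
clock and bus-width figures are assumed 1998 typicals, not from the
post):

```python
# How fast can a PII/450 ask for data vs. what PC100 SDRAM delivers?
# All figures are rough, assumed 1998-era numbers.

BUS_WIDTH_BYTES = 8          # 64-bit memory bus
SDRAM_CLOCK_MHZ = 100        # PC100 SDRAM
peak_mem_bw = BUS_WIDTH_BYTES * SDRAM_CLOCK_MHZ   # MB/s, peak burst
print(peak_mem_bw)           # 800 MB/s theoretical peak

CPU_CLOCK_MHZ = 450
BYTES_PER_CYCLE = 8          # if the core could consume 8 bytes/cycle
cpu_demand = BYTES_PER_CYCLE * CPU_CLOCK_MHZ      # MB/s
print(cpu_demand)            # 3600 MB/s, several times the memory's peak
```

Even at its theoretical burst rate, main memory supplies a fraction of
what the core could consume---which is the "dinky" part.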

Don't get me wrong, I think FreeBSD is fabulous for 99% of the world,
especially if some thought is given to offloading RAID, etc., to
hardware, and I use it almost exclusively for my desktop and all
mid/small projects.  But I wouldn't look at it for creating huge disk
farms; you can't get PC hardware that works that way---you can get
Intel hardware that will, see DG and Sequent.

Chris

NOW, FreeBSD on a Sequent NUMA-Q---that's interesting :-)



