Date:      Wed, 16 Apr 2003 10:52:08 -0400
From:      David Gilbert <dgilbert@velocet.ca>
To:        "Michael Conlen" <meconlen@obfuscated.net>
Cc:        'Alain Fauconnet' <alain@ait.ac.th>
Subject:   RE: tweaking FreeBSD for Squid using
Message-ID:  <16029.28184.11825.471228@canoe.velocet.net>
In-Reply-To: <000a01c30423$bd0f2a40$2b038c0a@corp.neutelligent.com>
References:  <20030416024844.GC7867@ait.ac.th> <000a01c30423$bd0f2a40$2b038c0a@corp.neutelligent.com>

>>>>> "Michael" == Michael Conlen <meconlen@obfuscated.net> writes:

Michael> A lot of what you face doing a Squid server is backplane and
Michael> other bus issues, though it's dependent on what you call
Michael> "high performance"

Michael> A pair of Sun E220R's (2 SPARC II processors) for example
Michael> handled 1 million requests a day on a pair of mirrored 72 GB
Michael> drives each. (Granted they were very nice 72GB drives). The
Michael> thing about the Sun boxes was that they could get information
Michael> out of memory really really fast, and the NIC cards could
Michael> work to their full potential. Every device that did IO was on
Michael> its own PCI bus.

PCI performance can differ by several orders of magnitude between
motherboards (even ones built on the same chipset).  PCI seems to be a
bus that can be implemented well ... or very, very poorly.

If you're planning to serve up 100Mbit-plus from a PC, test several
good (i.e. expensive) motherboards in a bakeoff.  Motherboards change
so often that I can't even give you recommendations ... by the time
you read this, you can't buy them anymore.
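
(A rough bakeoff recipe -- just a sketch, and the device names here
are whatever your disks actually are: read a raw disk flat out while
pushing traffic through the NIC, and see whether the numbers hold up
when both are going at once.)

    # sequential read straight off the raw disk, ~1GB
    dd if=/dev/da0 of=/dev/null bs=64k count=16384

    # in another window, watch disk throughput and interrupt load
    systat -vmstat 1

    # at the same time, push traffic at the box over the NIC
    # (fetch a big file, ttcp, netperf -- whatever you have handy)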

Ironically, many of the best-performing motherboards have also had
high DOA rates.  The K7S5A, for instance, had a DOA rate of 50% for us
(50% crashed on memory stress tests, etc.), but the good ones culled
from the litter are among the best boards we have in production.

Michael> It used to be that IDE drives took more processing power from
Michael> the host to perform their operations, whereas SCSI does
Michael> not. If that's still true I'd use that as a reason to stay
Michael> away from IDE.

The real advantage of SCSI (for large request rates) is tagged command
queueing.  Many spindles + tagged queueing = fast.
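
(If you want to see what you're actually getting, camcontrol(8) will
show and adjust the tag depth per drive -- something like this, with
da0 standing in for whichever disk you care about:)

    # show what the driver has negotiated for this drive
    camcontrol tags da0 -v

    # raise the number of outstanding tagged commands (openings)
    camcontrol tags da0 -N 64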

Michael> The other advantage of SCSI, if you need great disk IO, is
Michael> that you can have a lot of spindles. On a large SCSI system
Michael> in a Sun for example I can get a single drive array to look
Michael> like one SCSI device (with 14 disks in it) and put a lot of
Michael> arrays on a channel. If I buy small, fast SCSI disks I can
Michael> take full advantage of the 160 MB/sec array, whereas I've
Michael> seen a big fast IDE disk push no more than 10 MB/sec. The
Michael> arrays can do RAID before it gets to the controller card, so
Michael> you don't need the RAID in the box at all.

RAID isn't always a win with Squid.
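
Squid will spread its cache over several cache_dir lines by itself,
and the cache contents are disposable anyway, so one plain filesystem
per spindle often beats a RAID set.  Something like this in
squid.conf (the sizes and paths are just placeholders):

    # one cache_dir per physical disk, no RAID underneath
    cache_dir ufs /cache1 8192 16 256
    cache_dir ufs /cache2 8192 16 256
    cache_dir ufs /cache3 8192 16 256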

Michael> Speaking of which, does anyone know of SCSI disk arrays with
Michael> hardware RAID that work with FreeBSD?

Michael> I've moved out of the Sun world and into the FreeBSD world
Michael> professionally and have no idea what's out there for PC
Michael> hardware.

As I've said before, in the category of non-silly-expensive RAID,
vinum is faster than any I've tested.

That said, SCSI<-->SCSI RAID systems should all work with FreeBSD,
since they present themselves to the host as ordinary SCSI disks.
Look in the hardware release notes for PCI RAID devices, but I'd
recommend against them.
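
For what it's worth, a two-disk vinum stripe is only a few lines of
config -- roughly like this, though the drive names and stripe size
here are just placeholders:

    # /etc/vinum.conf: two disks striped into one volume
    drive d0 device /dev/da0s1e
    drive d1 device /dev/da1s1e
    volume cache
      plex org striped 512k
        sd length 0 drive d0
        sd length 0 drive d1

    # then create it and put a filesystem on it
    vinum create /etc/vinum.conf
    newfs -v /dev/vinum/cache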

Dave.

-- 
============================================================================
|David Gilbert, Velocet Communications.       | Two things can only be     |
|Mail:       dgilbert@velocet.net             |  equal if and only if they |
|http://daveg.ca                              |   are precisely opposite.  |
=========================================================GLO================


