Date:      Mon, 03 Jun 2013 06:39:57 -0400
From:      Michael Powell <nightrecon@hotmail.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: Max top end computer for Freebsd to run on
Message-ID:  <kohrpp$nc4$1@ger.gmane.org>
References:  <51ABAC4D.4040302@a1poweruser.com> <51ABB457.5060205@gmail.com> <CAD4099=yEL+7BSbpUrBA2s69cHB1vEC54W-vZZmdpUzN-2z_qA@mail.gmail.com> <51AC00D9.4030502@hdk5.net>

Al Plant wrote:

> James wrote:
>> Several modest servers applied well will take you further than one
>> big iron -- and for less cost.
>
> James I agree. I have witnessed the benefit of what you say. Putting
> your faith in one big server can be a problem if the box fails,
> especially hardware failure.
>
> Keeping a spare server in a rack that can be switched into service
> quickly can save you if one dies. The time lost is mostly waiting for
> parts; most failures are hardware if you're running FreeBSD. Even on
> most Linux boxes.
>

There are two approaches, and applying both together is what I favor. Scale
up (vertical) is a horsepower per box kind of thing. Scale out (horizontal)
adds more of the same kind of box(es) in parallel. The resulting redundancy
will keep you up and online.

Sizing matters somewhat. Excess horsepower that sits unused is extra money
spent on one box that could have been applied to scale-out redundancy. If
you can size one machine to match your current and projected workload, then
with two or more of them, one can fail and the rest can shoulder the load
while you get the broken one back up.

Where the balance point is struck will depend on workload. Let's say
(hypothetically) one box as a web/database server can handle 1,000
connections/users per second within the desired latency and response time.
If a spike in demand suddenly arrives, that box will slow to a crawl (or
even fall over) as it tries to keep up, since it lacks the extra horsepower
headroom that would otherwise sit idle. Scaling out (horizontally) by adding
more boxes distributes the spike across multiple machines and stays within
the desired response/latency time, so together they can handle 2,000 when
the need is there. Need another 1,000? Add another box, and so on.
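
To make that arithmetic concrete, here is a rough Python sketch. The 1,000
connections/sec per box and the spike sizes are only the hypothetical
figures above, and the single spare is my own assumption:

import math

def boxes_needed(peak_conn_per_sec, per_box_capacity, spares=1):
    """How many boxes to carry a peak load, plus spares for redundancy."""
    active = math.ceil(peak_conn_per_sec / per_box_capacity)
    return active + spares

# Hypothetical numbers from above: one box handles 1,000 conn/s.
print(boxes_needed(2000, 1000))  # 3: two carry the load, one is the spare
print(boxes_needed(3000, 1000))  # 4: need another 1,000? add another box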

So the trick is to understand your workload. Don't go overboard on just one
huge high-power machine which sits mostly idle and takes you offline if it
fails. Spend the money on more moderately sized boxen. Me, I like to have at
least 3 of everything (if I can), sized so that 2 of them together can
easily handle the desired load. The third one is for redundancy and the
'what-if' spike in demand.

Another advantage here is that you can take one box offline for updates,
then put it back online and test it for problems. If there is no problem,
you can take one of the other two down and update it. This way you can do
updates without your service being offline. But the trick is still to
understand your specific workload first, then spread the money around
accordingly.
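
In code form, that rotation is just a loop over the boxes. This is only a
sketch; drain, undrain, update, and health_check are placeholder names for
whatever your load balancer and update tooling actually provide:

def rolling_update(servers, drain, undrain, update, health_check):
    """Update one box at a time so the service as a whole stays online."""
    for host in servers:
        drain(host)                 # stop sending new traffic to this box
        update(host)                # e.g. freebsd-update / pkg upgrade, reboot
        if not health_check(host):  # test it out before trusting it again
            raise RuntimeError(host + " failed checks; stop and investigate")
        undrain(host)               # back in rotation, then on to the next one

# Toy run with no-op stand-ins, just to show the shape:
rolling_update(["web1", "web2", "web3"],
               drain=lambda h: print("draining", h),
               undrain=lambda h: print("back online", h),
               update=lambda h: print("updating", h),
               health_check=lambda h: True)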

-Mike




