From owner-freebsd-questions@FreeBSD.ORG Fri Jul 24 14:49:04 2009
Date: Fri, 24 Jul 2009 09:49:02 -0500
From: "Dean Weimer" <dweimer@orscheln.com>
To: freebsd-questions@freebsd.org
Cc: steve@ibctech.ca
Subject: RE: VMWare ESX and FBSD 7.2 AMD64 guest
List-Id: User questions

> This message has a foot that has nearly touched down over the OT
> borderline.
>
> We received an HP ProLiant DL360 G5 colocation box yesterday that has
> two processors and 8GB of memory.
>
> All the client wants to use this box for is a single instance of
> Windows web hosting. Knowing the sites the client wants to aggregate
> into IIS, I know that the box is far over-rated.
>
> Making a long story short, they have agreed to allow us to put their
> Windows server inside of a virtualized container, so we can use the
> unused horsepower for other VMs (test servers, etc.).
>
> My problem is performance. I'm only willing to make this box virtual
> if I can keep the abstraction performance loss to <25% (my ultimate
> goal would be 15%).
>
> The following is what I have, followed by my benchmark findings:
>
> # 7.2-RELEASE AMD64
>
> FreeBSD 7.2-RELEASE #0: Fri May 1 07:18:07 UTC 2009
>     root@driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
>
> Timecounter "i8254" frequency 1193182 Hz quality 0
> CPU: Intel(R) Xeon(R) CPU 5150 @ 2.66GHz (2666.78-MHz K8-class CPU)
>   Origin = "GenuineIntel"  Id = 0x6f6  Stepping = 6
>
> usable memory = 8575160320 (8177 MB)
> avail memory  = 8273620992 (7890 MB)
>
> FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
>  cpu0 (BSP): APIC ID: 0
>  cpu1 (AP): APIC ID: 1
>  cpu2 (AP): APIC ID: 6
>  cpu3 (AP): APIC ID: 7
>
> Benchmarks:
>
> # time make -j4 buildworld (under VMware)
>
> 5503.038u 3049.500s 1:15:46.25 188.1% 5877+1961k 3298+586716io 2407pf+0w
>
> # time make -j4 buildworld (native)
>
> 4777.568u 992.422s 33:02.12 291.1% 6533+2099k 25722+586485io 3487pf+0w
>
> ...both builds were from the exact same sources, and both runs were
> made with the exact same environment. I was extremely careful to
> ensure that the environments were identical.
>
> I'd appreciate any feedback on tweaks that I can make (either to
> VMWare or to FreeBSD itself) to make the virtualized environment much
> more efficient.
>
> Off-list is fine.
>
> Cheers,
>
> Steve

I haven't actually done any benchmarks to compare the performance, but I
have been running production FreeBSD servers on VMware for a couple of
years. I currently have two 6.2 systems running CUPS, one on VMware
Server and the other on ESX 3.5.
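For what it's worth, the two time(1) results quoted above can be put in concrete terms with a quick wall-clock calculation (the figures are taken straight from the benchmark output):

```python
# Wall-clock times from the two "time make -j4 buildworld" runs quoted above.
vm_seconds = 1 * 3600 + 15 * 60 + 46.25   # 1:15:46.25 under VMware
native_seconds = 33 * 60 + 2.12           # 33:02.12 native

slowdown = vm_seconds / native_seconds     # how many times longer the VM run took
loss = 1 - native_seconds / vm_seconds     # fraction of wall-clock time lost

print(f"VMware run took {slowdown:.2f}x the native wall-clock time")
print(f"abstraction performance loss: {loss:.0%}")
```

By those numbers the build ran about 2.3x slower in the guest, roughly a 56% wall-clock loss, which is well beyond the stated 25% target; note, though, that buildworld is heavily disk-bound, which matches the disk I/O point below.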
I also have a 7.0 system and two 7.1 systems running Squid on ESX 3.5 as
well. The biggest bottleneck I have noticed for any guest under VMware
is disk I/O (with the exception of video, which isn't an issue for a
server). Compiling software does take longer because of this; however,
if you tune your disks properly, the performance under real application
load doesn't seem to be an issue. Using soft updates on the file system
seems to help out a lot, but be aware of the consequences.

That being said, on the systems I have running Squid, we average 9G of
traffic a day on the busiest system, with about an 11% cache hit rate;
these proxies sit close to idle after hours. Looking at the information
from systat -vmstat, the system is almost idle during the day under full
load as well; you just can't touch FreeBSD with only 2 DSL lines for web
traffic. It's faster than the old native system was, though there is an
iSCSI SAN behind the ESX server for disk access, and we went from a Dell
PowerEdge 850 to a Dell PowerEdge 2950. It shares that server with
around 15 or more other servers (mostly Windows, some Linux), depending
on the current load. Which brings up another point: it seems to do just
fine when VMware VMotion moves it between servers.

Not sure if this information helps you out any, but my recommendation
would be that if your application will be very disk intensive, avoid the
virtual machine. In my case with Squid, gaining the redundancy of VMware
coupled with VMotion was worth the potential hit in performance. We are
also soon implementing a second data center across town that will house
additional VMware servers; thanks to a 10G fiber ring, it will allow us
to migrate servers between data centers while they are running.
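For the soft updates tuning mentioned above, a minimal sketch with tunefs(8); the device and mount point here are only examples, substitute your own, and note the filesystem cannot be mounted read-write when you change the flag:

```shell
# Check current filesystem parameters, including the soft updates flag
tunefs -p /dev/da0s1f

# Enable soft updates on an unmounted filesystem, then remount it
umount /usr
tunefs -n enable /dev/da0s1f
mount /usr
```

The consequence to be aware of is that with soft updates, metadata writes are delayed, so a crash or power loss can silently discard the last few seconds of writes even though the filesystem itself stays consistent.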
Also keep in mind that as of vSphere 4, VMware does officially support
FreeBSD 7.1 (we will be upgrading to vSphere 4 once the new data center
is complete; just waiting on the shipment of the racks at this point).
You might want to go with 7.1 instead of 7.2: there may be a performance
issue with 7.2, but it's just as likely a matter of release timing that
7.1 is supported and 7.2 isn't. ESXi 4.0 (released 5-21-2009) has, I
believe, the same code base as vSphere 4, so the same guests should be
supported.

Thanks,
     Dean Weimer
     Network Administrator
     Orscheln Management Co