Date: Thu, 12 Oct 2006 22:19:00 -0400
From: Jeff Dickens <jeff@intranet.seamanpaper.com>
To: Jeff Dickens <jeff@intranet.seamanpaper.com>
Cc: freebsd-questions@freebsd.org
Subject: Re: optimal kernel options for VMWARE guest system
Message-ID: <452EF794.7040803@intranet.seamanpaper.com>
In-Reply-To: <452D36DD.1070800@intranet.seamanpaper.com>
References: <4522969F.9010504@seamanpaper.com> <200610031605.54121.lists@jnielsen.net> <4523C9C2.6060000@seamanpaper.com> <452D36DD.1070800@intranet.seamanpaper.com>
Jeff Dickens wrote:
> Jeff Dickens wrote:
>> John Nielsen wrote:
>>> On Tuesday 03 October 2006 12:58, Jeff Dickens wrote:
>>>
>>>> I have some FreeBSD systems that are running as VMware guests.  I'd like to configure their kernels so as to minimize the overhead on the VMware host system.  After reading and partially digesting the white paper on timekeeping in VMware virtual machines (http://www.vmware.com/pdf/vmware_timekeeping.pdf), it appears that I might want to make some changes.
>>>>
>>>> Has anyone addressed this issue?
>>>
>>> I haven't read the white paper (yet; thanks for the link), but I've had good results with recent -STABLE VMs running under ESX Server 3.  Some thoughts:
>>>
>>> As I do on most of my installs, I trimmed down GENERIC to include just the drivers I use.  In this case that was mpt for the disk and le for the network (although I suspect forcing the VM to present e1000 hardware and then using the em driver would work as well if not better).
>>>
>>> The VMware Tools package that comes with ESX Server does a poor job of getting itself to run, but it can be made to work without too much difficulty.  Don't use the port; run the included install script to install the files, ignore the custom network driver, and compile the memory management module from source (included).  If using X.org, use the built-in vmware display driver, and copy the vmmouse driver .o file from the VMware Tools dist to the appropriate dir under /usr/X11.  Even though the included file is for X.org 6.8, it works fine with 6.9/7.0 (X.org 7.1 should include the vmmouse driver).  Run the VMware Tools config script from a non-X terminal (you can ignore the warning about running it remotely if you're using SSH), so it won't mess with your X display (it doesn't do anything not accomplished above).  Then run the rc.d script to start the VMware Tools.
>>>
>>> I haven't noticed any timekeeping issues so far.
>>>
>>> JN
>>
>> What is the advantage of using the "e1000 hardware", and is this documented somewhere?  I got the vxn network driver working without issues; I just had to edit the .vmx file manually (I'm using the free VMware Server V1 rather than ESX Server):
>>
>> ethernet0.virtualDev="vmxnet"
>>
>> I've got timekeeping running stably on these.  I turn on time sync via VMware Tools in the .vmx file:
>>
>> tools.syncTime = "TRUE"
>>
>> and in the guest's rc.conf I start ntpd with flags "-Aqgx &" so it just syncs once at boot and exits.
>>
>> I'm not using X on these.  They're supposed to be clean & lean systems to run such things as djbdns and qmail.  And they do work well.  My main goal is to reduce the background load on the VMware host system so that it isn't spending more time than it has to simulating interrupt controllers for the guests.  I'm wondering about the "disable ACPI" boot option.  I suppose I first should figure out how to even roughly measure the effect of any changes I might make.
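For reference, here is how those settings might look gathered in one place.  The .vmx lines are the ones quoted above; the rc.conf spelling is an assumption based on that description (the standard ntpd_enable/ntpd_flags knobs), not verbatim from the original message:

    # in the guest's .vmx file:
    ethernet0.virtualDev="vmxnet"
    tools.syncTime = "TRUE"

    # in the guest's /etc/rc.conf -- sync the clock once at boot, then exit
    # (the "&" mentioned above backgrounds ntpd so it doesn't hold up the boot):
    ntpd_enable="YES"
    ntpd_flags="-Aqgx"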
> Well, I've done some pseudo-scientific measurement on this.  I currently have five FreeBSD virtual systems running, and one CentOS 4 (Linux 2.6).  These commands give some info on the background CPU usage (the host is a CentOS 3 system, Linux 2.4):
>
> [root@otter root]# ps auxww | head -1
> USER       PID %CPU %MEM    VSZ    RSS TTY   STAT START   TIME COMMAND
> [root@otter root]# ps auxww | grep vmx
> root     18031 12.7  1.5 175440  39916 ?     S<   Oct09 345:50 /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/Goose/freebsd-6.1-i386.vmx -@ ""
> root     18058 12.9  1.4 174772  36916 ?     S<   Oct09 351:01 /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/Duck/freebsd-6.1-i386.vmx -@ ""
> root     18072 16.2  5.5 246372 141776 ?     S<   Oct09 440:16 /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/BlueJay/freebsd-6.1-i386.vmx -@ ""
> root     18086 12.9  1.4 174688  38464 ?     S<   Oct09 351:47 /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/Heron/freebsd-6.1-i386.vmx -@ ""
> root     18100  9.4  4.1 385712 107348 ?     S<   Oct09 256:25 /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/Newt/freebsd-6.1-i386.vmx -@ ""
> root     18139 12.2  2.5 299388  65132 ?     S<   Oct09 330:35 /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/Centos4/Centos4.vmx -@ ""
> root     28930  0.0  0.0   3680    672 pts/3 S    14:08   0:00 grep vmx
> [root@otter root]#
>
> As one can see, the one called "Newt" is consistently lower in the %CPU column.  Curiously enough, this *is* the one I built a custom kernel for.  The config file I used is posted below.  Besides commenting out devices I wasn't using (and NFS, etc.), I commented out the apic and pctimer devices.  Do you think I'm on the right track for reducing interrupt frequency?
>
> Also, if I were to want to move this kernel to other FreeBSD systems, how much has to move?  The whole /boot/kernel directory?
>
> Finally, I did have to re-run the vmware-config-tools.pl script after rebuilding the kernel.

<snip>

Could anyone perhaps share their thoughts on just this part of my question?  Thanks in advance.

---> Also, if I were to want to move this kernel to other FreeBSD systems, how much has to move?  The whole /boot/kernel directory?
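As an aside, one rough way to put a single number on the background load in a listing like the one above is to total the %CPU column for the vmware-vmx processes.  The one-liner below is only a sketch (it is not from the original message) and uses nothing beyond standard ps, grep and awk on the host:

    # instantaneous total %CPU of all vmware-vmx processes
    # (the [v] keeps grep from matching its own command line)
    ps auxww | grep '[v]mware-vmx' | awk '{ total += $3 } END { printf "%.1f%% total\n", total }'

Sampling that a few times before and after a kernel change should at least show whether the idle overhead is moving in the right direction.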
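Since the actual kernel config was snipped above, here is a purely illustrative skeleton of one way to do that kind of trimming, using config(8)'s include/nodevice/nooptions idiom to start from GENERIC.  It is not the snipped config; the ident and the particular devices and options shown are placeholders:

    include         GENERIC
    ident           VMGUEST         # hypothetical name
    nodevice        apic            # fall back to the legacy PIC, as discussed above
    nodevice        fxp             # drop NIC drivers the VM doesn't present;
                                    # keep only le/em (or the VMware vxn driver)
    nooptions       NFSCLIENT       # likewise for unused options such as NFS
    nooptions       NFSSERVER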