From owner-freebsd-emulation  Thu Feb  1 13:40:10 2001
Delivered-To: freebsd-emulation@freebsd.org
Received: from fledge.watson.org (fledge.watson.org [204.156.12.50])
	by hub.freebsd.org (Postfix) with ESMTP id 8FB3637B491;
	Thu, 1 Feb 2001 13:39:47 -0800 (PST)
Received: from fledge.watson.org (robert@fledge.pr.watson.org [192.0.2.3])
	by fledge.watson.org (8.11.1/8.11.1) with SMTP id f11Ldkh20304;
	Thu, 1 Feb 2001 16:39:46 -0500 (EST)
	(envelope-from robert@fledge.watson.org)
Date: Thu, 1 Feb 2001 16:39:45 -0500 (EST)
From: Robert Watson
X-Sender: robert@fledge.watson.org
To: mdillon@FreeBSD.org
Cc: emulation@FreeBSD.org
Subject: long hangs running vmware -- vm system interactions?
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-emulation@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

Matt,

I don't know if you've ever spent much time running vmware, but I've
noticed that it appears to bring out the worst in FreeBSD, especially
when it comes to mapped memory regions.  I have two scenarios in mind;
this is using the vmware2 port, if you want to give it a try.

The first is during the initial startup of a virtual machine with a
guest operating system.  The emulated BIOS spends some time touching
each page of memory during the boot process, which apparently involves
creating a huge file in /var/tmp (or wherever), mmap'ing it, and then
touching the pages.  I'm not sure of the exact sequence of events used
to create the file, but one suspects they create a sparse file, and the
touching of pages then causes the pages/blocks to be allocated
sequentially.  During this procedure, other processes that attempt to
touch the disk generally hang, presumably waiting on a lock, or just
for the opportunity to perform disk I/O.  It seems like the long hangs
should be avoidable...
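For what it's worth, here's a minimal sketch of the sequence I have in
mind; the file name and size are made up, and the ftruncate() call is
my guess at how the sparse file gets created:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "/var/tmp/backing.mem"; /* made-up name */
        size_t len = 64 * 1024 * 1024;  /* made-up guest "RAM" size */
        size_t off, pagesize;
        char *p;
        int fd;

        fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd == -1)
            err(1, "open");
        /* Extend to full size without allocating blocks: sparse. */
        if (ftruncate(fd, (off_t)len) == -1)
            err(1, "ftruncate");
        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        pagesize = (size_t)sysconf(_SC_PAGESIZE);
        /*
         * Touch each page in order; each write dirties a page, and
         * pushing it back to the file forces the file system to
         * allocate the backing block, one block after another.
         */
        for (off = 0; off < len; off += pagesize)
            p[off] = 0;
        munmap(p, len);
        close(fd);
        return (0);
    }

If you point this at a file system with less free space than the
mapping size, I'd expect you'd also see the second scenario below.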
The second scenario occurs when vmware runs out of disk space for its
paging file during the procedure described above.  I lost control of
the system for around ten minutes, and observed a lot of the following
messages appearing in dmesg (error 28 is ENOSPC):

...
pid 5 (syncer), uid 0 on /var: file system full
vnode_pager_putpages: I/O error 28
vnode_pager_putpages: residual I/O 65536 at 6942
pid 5 (syncer), uid 0 on /var: file system full
vnode_pager_putpages: I/O error 28
vnode_pager_putpages: residual I/O 65536 at 6943
pid 5 (syncer), uid 0 on /var: file system full
vnode_pager_putpages: I/O error 28
vnode_pager_putpages: residual I/O 65536 at 6944
pid 5 (syncer), uid 0 on /var: file system full
vnode_pager_putpages: I/O error 28
vnode_pager_putpages: residual I/O 65536 at 6945
pid 5 (syncer), uid 0 on /var: file system full
vnode_pager_putpages: I/O error 28
...

Now, I understand that fundamentally life sucks when you over-commit
and find out the hard way that you don't have the resources, but the
fact that a fairly widely used application runs into this due to poor
disk layout and use strategies suggests we could be handling it
better.  Interestingly, vmware even notices that it has run out of
room, but that doesn't seem to save you from suffering through this
anyway.  I did recover control of the system eventually, but it took a
long time, and at the very least, it would be nice if the recovery
time were faster.  Some of the problem may have been the printf'ing,
which spewed log messages to syslogd, which then wanted to write to
/var, feeding the problem back on itself.

In any case, if you haven't had the opportunity to explore how vmware
stresses FreeBSD, you might give it a spin sometime... :-)

Thanks,

Robert N M Watson             FreeBSD Core Team, TrustedBSD Project
robert@fledge.watson.org      NAI Labs, Safeport Network Services


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-emulation" in the body of the message