From owner-freebsd-scsi@FreeBSD.ORG Mon Aug 18 07:44:59 2008
From: "Carole Macheret" <Carole.Macheret@ch.meggitt.com>
To: "Scott Long"
Cc: freebsd-scsi@freebsd.org, Roland Rothen
Date: Mon, 18 Aug 2008 09:44:45 +0200
Subject: Re: g_vfs_done

Thanks for your answer. We will run some tests as soon as we get the
opportunity, since the system is in production!

Carole

>>> Scott Long 14.08.2008 19:07 >>>
Carole Macheret wrote:
> Hello,
>
> We are running FreeBSD 7.0-RELEASE #1 with Squid and Zabbix on VMware
> ESX 3.0.2, and our VMware ESX servers access our SAN through an IpStor
> cluster (storage virtualization and mirroring).
>
> We have two storage arrays (EVA 6100), and the IpStor solution lets us
> mirror disks across both EVAs.
>
> We have a problem with both the Zabbix and the Squid FreeBSD virtual
> machines: when a virtual machine loses its disks (EVA controller
> reboot or IpStor cluster failover), we get several
> "g_vfs_done(): da1s1d[WRITE(offset=2312431234, length=12453)] error = 5"
> errors, and then the host is permanently frozen. The disk loss lasts
> 1-5 seconds. Windows virtual machines freeze during the loss and then
> keep working; on Windows we had to set a longer local-disk timeout in
> the registry.
>
> Does anybody have an idea what could be tuned to avoid this problem?
>
> Attached you can find the dmesg output and a screenshot of the
> g_vfs_done error...
>
> Thanks in advance for your help
>

So the virtual disks that the FreeBSD images are using in VMware are on
an IpStor, and those periodically go away, yes? What's probably
happening is that the VMware host is triggering an event in the FreeBSD
client VM that essentially makes the virtual disks go away. Inside the
FreeBSD VM, the SCSI layer tries to talk to the disk and gets a
selection timeout, since the disk is no longer there. It doesn't know
that this is a temporary state, so it declares the I/O as failed. At
that point, the BSD VM gets upset and everything goes bad.

There is a sysctl called kern.cam.da.default_timeout. It is set to 60
seconds, but I don't think it will help you in this case, since the I/O
is likely failing because of a selection timeout, not because the
virtual disk is slow to complete the I/O.
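For reference, you can read back the current values of both knobs on a
running box before touching anything (check the exact names on your
system, I'm going from memory here):

  sysctl kern.cam.da.default_timeout
  sysctl kern.cam.da.retry_count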
The kern.cam.da.retry_count sysctl is set to 5 by default, and raising
it might help, since enough retries could buy the virtual disk the time
it needs to come back. Try the following command on a running system:

  sysctl kern.cam.da.retry_count=100

This allows for about 25 seconds' worth of retries (a selection attempt
takes 250 ms, so you get about 4 retries per second). If this doesn't
work, try configuring VMware to give you a serial console that you can
capture on the host, then set bootverbose during boot and send me the
log once the problem happens.
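Two sketches that might save you a round trip, both assuming a stock
7.x install, so double-check the file names on your side. If the higher
retry count helps, you can make it survive a reboot by adding it to
/etc/sysctl.conf:

  # /etc/sysctl.conf -- applied at boot by the sysctl rc script
  kern.cam.da.retry_count=100

And for the verbose serial console, something along these lines in
/boot/loader.conf should do it (the VMware side still needs a virtual
serial port on the VM, pointed at a file or named pipe on the host):

  # /boot/loader.conf
  console="comconsole"   # route the system console to the first serial port
  boot_verbose="YES"     # same as booting with -v (sets bootverbose)

Scott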