From owner-freebsd-xen@freebsd.org Wed Sep 20 18:15:09 2017
From: "Rodney W. Grimes" <freebsd-rwg@pdx.rh.CN85.dnsmgr.net>
Message-Id: <201709201815.v8KIF7Gi089958@pdx.rh.CN85.dnsmgr.net>
Subject: Re: Storage 'failover' largely kills FreeBSD 10.x under XenServer?
In-Reply-To: <62BC29D8E1F6EA5C09759861@[10.12.30.106]>
To: Karl Pielorz
Date: Wed, 20 Sep 2017 11:15:07 -0700 (PDT)
CC: freebsd-xen@freebsd.org

> Hi All,
>
> We recently experienced an unplanned storage failover on our XenServer
> pool. The pool is 7.1 based (on certified HP kit) and runs a mix of
> FreeBSD VMs (all 10.3 based, except for a legacy 9.x VM) and a few
> Windows VMs - storage is provided by two Citrix-certified Synology
> storage boxes.
>
> During the failover, Xen sees the storage paths go down and come up
> again (re-attaching when they are available again). Timing this, it
> takes around a minute, worst case.
>
> The process killed 99% of our FreeBSD VMs :(
>
> The earlier 9.x FreeBSD box survived, and all the Windows VMs survived.
>
> Is there some tunable we can set to make the 10.3 boxes more tolerant
> of the I/O delays that occur during a storage failover?
>
> I've enclosed some of the errors we observed below. I realise a full
> storage failover is a stressful time for VMs - but the Windows VMs and
> the earlier FreeBSD version survived without issue. All the 10.3 boxes
> logged I/O errors and then panic'd / rebooted.
>
> We've set up a test lab with the same kit and can now replicate this at
> will (every time, most to all of the FreeBSD 10.x boxes panic and
> reboot, but Windows prevails) - so we can test any potential fixes.
>
> So if anyone can suggest anything we can tweak to minimize the chances
> of this happening (i.e. make I/O more timeout tolerant, or set larger
> timeouts?) that'd be great.

As you found one of these, let me point out the pair of them:

	kern.cam.ada.default_timeout: 30
	kern.cam.ada.retry_count: 4

Rather than increasing default_timeout you might try increasing
retry_count.
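If it helps, a minimal sketch of doing that (untested; it assumes a
stock 10.3 kernel where these OIDs are RWTUN, i.e. settable both at
runtime and as loader tunables, and the value 10 below is purely
illustrative):

	# check the current values
	sysctl kern.cam.ada.default_timeout kern.cam.ada.retry_count
	# raise the retry count on the running system (new I/Os pick it up)
	sysctl kern.cam.ada.retry_count=10
	# and/or make it persist across reboots
	echo 'kern.cam.ada.retry_count="10"' >> /boot/loader.conf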
Though it would seem that the default settings should have allowed for a
2 minute failure window, it may be that these are not working as I
expect in this situation.

...

>
> Errors we observed:
>
> ada0: disk error cmd=write 11339752-11339767 status: ffffffff
> ada0: disk error cmd=write

Did you actually get this 4 times before it fell through to the next
error? There should be retry counts in here somewhere, counting up to 4,
after which cam/ada should give up and pass the error up the stack.

> g_vfs_done():11340544-11340607gpt/root[WRITE(offset=4731097088,
> length=8192)] status: ffffffff error = 5
> (repeated a couple of times with different values)
>
> Machine then goes on to panic:

Ah, okay, so it is repeating. These messages should be 30 seconds apart,
and there should be exactly 4 of them; then you get the panic. If that
is the case, try cranking kern.cam.ada.retry_count up and see if that
resolves your issue.

> g_vfs_done():panic: softdep_setup_freeblocks: inode busy
> cpuid = 0
> KDB: stack backtrace:
> #0  0xffffffff8098e810 at kdb_backtrace+0x60
> #1  0xffffffff809514e6 at vpanic+0x126
> #2  0xffffffff809513b3 at panic+0x43
> #3  0xffffffff80b9c685 at softdep_setup_freeblocks+0xaf5
> #4  0xffffffff80b86bae at ffs_truncate+0x44e
> #5  0xffffffff80bbec49 at ufs_setattr+0x769
> #6  0xffffffff80e81891 at VOP_SETATTR_APV+0xa1
> #7  0xffffffff80a053c5 at vn_truncate+0x165
> #8  0xffffffff809ff236 at kern_openat+0x326
> #9  0xffffffff80d56e6f at amd64_syscall+0x40f
> #10 0xffffffff80d3c0cb at Xfast_syscall+0xfb
>
> Another box also logged:
>
> ada0: disk error cmd=read 9970080-9970082 status: ffffffff
> g_vfs_done():gpt/root[READ(offset=4029825024, length=1536)] error = 5
> vnode_pager_getpages: I/O read error
> vm_fault: pager read error, pid 24219 (make)
>
> And again, it went on to panic shortly thereafter.
>
> I had to hand-transcribe the above from screenshots / video, so
> apologies if any errors crept in.
>
> I'm hoping there's just a magic sysctl / kernel option we can set to up
> the timeouts? (if it is as simple as timeouts killing things)

Yes, FreeBSD does not live long when its disk drive goes away... 2.5
minutes to panic in almost all cases of a drive failure.

-- 
Rod Grimes                                         rgrimes@freebsd.org
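P.S. The back-of-the-envelope arithmetic behind that 2.5 minute figure,
and a rough way to size retry_count for a longer window (this assumes
each attempt waits out the full timeout, which an instantly returned
ffffffff status may defeat):

	# time to panic ~= (retry_count + 1) * default_timeout
	#   with the defaults: (4 + 1) * 30 s = 150 s = 2.5 minutes
	# so to ride out a ~60 s failover with a wide margin, for example:
	sysctl kern.cam.ada.retry_count=14   # (14 + 1) * 30 s = 7.5 minutes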