Date:      Mon, 11 Feb 2019 10:30:09 +0100
From:      Roger Pau Monné <roger.pau@citrix.com>
To:        Eric Bautsch <eric.bautsch@pobox.com>
Cc:        <freebsd-xen@freebsd.org>
Subject:   Re: Issues with XEN and ZFS
Message-ID:  <20190211093009.lnzak3l4ub65b67n@mac>
In-Reply-To: <a3b338c7-d3ea-b552-2fbb-f3185f7dc96a@pobox.com>
References:  <a3b338c7-d3ea-b552-2fbb-f3185f7dc96a@pobox.com>

Thanks for the testing!

On Fri, Feb 08, 2019 at 07:35:04PM +0000, Eric Bautsch wrote:
> Hi.
> 
> 
> Brief abstract: I'm having ZFS/Xen interaction issues with the disks being
> declared unusable by the dom0.
>
> 
> The longer bit:
> 
> I'm new to FreeBSD, so my apologies for all the stupid questions. I'm trying
> to migrate from Linux as my virtual platform host (very bad experiences with
> stability, let's leave it at that). I'm hosting mostly Solaris VMs (that
> being my choice of OS, but again, Betamax/VHS, need I say more), as well as
> a Windows VM (because I have to) and a Linux VM (as a future desktop via
> thin clients as and when I have to retire my SunRay solution which also runs
> on a VM for lack of functionality).
> 
> So, I got xen working on FreeBSD now after my newbie mistake was pointed out to me.
> 
> However, I seem to be stuck again:
> 
> I have, in this initial test server, only two disks. They are SATA hanging
> off the on-board SATA controller. The system is one of those Shuttle XPC
> cubes, an older one I had hanging around with 16GB memory and I think 4
> cores.
> 
> I've given the dom0 2GB of memory and 2 cores to start with.

2GB might be too low when using ZFS. For reasonable performance I
would suggest 4GB as a minimum, ideally 8GB. ZFS is quite memory
hungry.
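One way to apply this advice is a sketch like the following, assuming a FreeBSD dom0 booted via the loader's Xen path; the specific values (4GB dom0, 2GB ARC cap) are illustrative, not from this thread:

```shell
# Illustrative /boot/loader.conf fragment for the dom0.
# Give the dom0 more memory on the Xen command line:
xen_cmdline="dom0_mem=4096M dom0_max_vcpus=2 dom0=pvh console=com1,vga"
# Optionally cap the ZFS ARC so it leaves headroom for everything else:
vfs.zfs.arc_max="2G"
```

After rebooting, `sysctl vfs.zfs.arc_max` and `xl info` can be used to confirm the settings took effect.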

> The root filesystem is zfs with a mirror between the two disks.
> 
> The entire thing is dead easy to blow away and re-install as I was very
> impressed how easy the FreeBSD automatic installer was to understand and
> pick up, so I have it all scripted. If I need to blow stuff away to test, no
> problem and I can always get back to a known configuration.
> 
> 
> As I only have two disks, I have created a zfs volume for the Xen domU thus:
> 
> zfs create -V40G -o volmode=dev zroot/nereid0
> 
> 
> The domU nereid is defined thus:
> 
> cat - << EOI > /export/vm/nereid.cfg
> builder = "hvm"
> name = "nereid"
> memory = 2048
> vcpus = 1
> vif = [ 'mac=00:16:3E:11:11:51,bridge=bridge0',
>         'mac=00:16:3E:11:11:52,bridge=bridge1',
>         'mac=00:16:3E:11:11:53,bridge=bridge2' ]
> disk = [ '/dev/zvol/zroot/nereid0,raw,hda,rw' ]
> vnc = 1
> vnclisten = "0.0.0.0"
> serial = "pty"
> EOI
> 
> nereid itself also auto-installs, it's a Solaris 11.3 instance.
> 
> 
> As it tries to install, I get this in the dom0:
> 
> Feb  8 18:57:16 bianca.swangage.co.uk kernel: (ada1:ahcich1:0:0:0):
> WRITE_FPDMA_QUEUED. ACB: 61 18 a0 ef 88 40 46 00 00 00 00 00
> Feb  8 18:57:16 bianca.swangage.co.uk last message repeated 4 times
> Feb  8 18:57:16 bianca.swangage.co.uk kernel: (ada1:ahcich1:0:0:0): CAM
> status: CCB request was invalid

That's weird, and I would say it's not related to ZFS; the same could
likely happen with UFS, since this is an error message from the disk
controller hardware.

Can you test whether the same happens _without_ Xen running?

I.e., boot FreeBSD without Xen and then run some kind of disk
stress test, like fio [0].
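A minimal invocation could look like this; the job parameters (target directory, block size, run time) are illustrative, not prescribed in this thread:

```shell
# Install fio from packages, then run a random-write stress test
# against a scratch directory on the ZFS pool:
pkg install -y fio
fio --name=stress --directory=/tmp --rw=randwrite --bs=4k \
    --size=1g --numjobs=4 --iodepth=16 --ioengine=posixaio \
    --runtime=120 --time_based --group_reporting
```

While it runs, watch the console or /var/log/messages for the same CAM/WRITE_FPDMA_QUEUED errors; if they appear without Xen, the problem is in the AHCI/disk layer rather than in Xen.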

Thanks, Roger.

[0] https://svnweb.freebsd.org/ports/head/benchmarks/fio/
