Date: Sat, 26 Apr 2014 12:57:15 +0100
From: "seanrees@gmail.com" <seanrees@gmail.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "freebsd-xen@freebsd.org" <freebsd-xen@freebsd.org>
Subject: Re: VM in Xen 4.1; poor packet forwarding performance on xn0
Message-ID: <CAJGy1F3hcAXi4xh3Yd-QEoWrWuJb2+GfmZG1x9tVFS4Qo7ar9w@mail.gmail.com>
In-Reply-To: <53567847.10203@citrix.com>
References: <CAJGy1F0+G1zq9hVbifTM2Vq6HHEmCM9hnFvQ=4t-4d5x=npaCA@mail.gmail.com> <53567847.10203@citrix.com>
Hi Roger,

Thanks for the patch -- sadly, it didn't work. No change.

I did have to modify it a bit for releng/10.0; for some reason patch
refused to apply it cleanly. It looked fairly straightforward, but I have
attached the patch I ultimately applied inline below, just in case I got
it wrong.

Are there any other potential differences between Xen 3.4 and 4.1? (My
provider migrated my problem VPS to a 3.4 host and the problem
evaporated; I am trying this on a new 4.1 VPS on which I was able to
reproduce the problem.)

Sean

Index: hvm.c
===================================================================
--- hvm.c	(revision 264963)
+++ hvm.c	(working copy)
@@ -626,6 +626,7 @@

 	xhp.domid = DOMID_SELF;
 	xhp.index = HVM_PARAM_CALLBACK_IRQ;
+#if 0
 	if (xen_feature(XENFEAT_hvm_callback_vector) != 0) {
 		int error;

@@ -638,6 +639,7 @@
 		printf("Xen HVM callback vector registration failed (%d). "
 		    "Falling back to emulated device interrupt\n", error);
 	}
+#endif
 	xen_vector_callback_enabled = 0;
 	if (dev == NULL) {
 		/*
@@ -783,7 +785,7 @@
 	info.mfn = vtophys(vcpu_info) >> PAGE_SHIFT;
 	info.offset = vtophys(vcpu_info) - trunc_page(vtophys(vcpu_info));

-	rc = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
+	rc = 1;
 	if (rc != 0)
 		DPCPU_SET(vcpu_info, &HYPERVISOR_shared_info->vcpu_info[cpu]);
 	else

On Tue, Apr 22, 2014 at 3:10 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 18/04/14 16:57, seanrees@gmail.com wrote:
> > Hi there freebsd-xen,
> >
> > I tried first on freebsd-questions@ without success, so I thought to
> > retry here.
> >
> > I run OpenVPN on a FreeBSD 10.0-Rp1 VM in Xen 4.1 (HVM). I am
> > experiencing slow network performance on xn0 that seems to have
> > developed after I upgraded to FreeBSD 10 (no other changes). I can
> > only achieve about 0.5 Mbps through this interface when forwarding
> > packets; packets in a single direction are fine (e.g. downloading to
> > the VPS or pushing from the VPS) and clock in at many (usually >10)
> > Mbps.
> >
> > Interestingly, my identical VM (configuration managed centrally)
> > running on Xen 3.4 (HVM) does *not* have this issue.
>
> Hello,
>
> The difference between Xen 3.4 and Xen 4.1 is that FreeBSD will make use
> of the vector callback, the PV timer, and PV IPIs when running on Xen
> > 4.0 (which should provide better performance). I'm attaching a patch
> that will make FreeBSD behave the same way when running on either Xen
> 3.4 or Xen 4.1 (by disabling all these new additions); could you please
> give it a try?
>
> > I did a little debugging and here's what I've noticed:
> >   - Not related to OpenVPN; repro'd using ssh -d.
> >   - The slow VM has a very low rate of context switches (~250) while
> > forwarding; the fast VM has a lot more (~2000), sampled over 5 seconds
> > using systat -v.
> >   - I can't repro a context-switch limit (tried a limited fork() bomb).
> >   - Tried with *and* without LRO and TSO on xn0 (and all combinations
> > of LRO and TSO on/off).
>
> I've got the feeling that the issue you are seeing is not related to the
> Xen version itself, but to the Linux Dom0 kernel version (which I suppose
> is different on the Xen 3.4 and Xen 4.1 hosts). Could you ask your
> provider which Linux Dom0 kernel they are using on the different hosts?
>
> Roger.