Date:      Thu, 31 May 2012 00:01:15 -0400
From:      Andrew Gallatin <gallatin@cs.duke.edu>
To:        freebsd-net@freebsd.org
Subject:   Re: some questions on virtual machine bridging.
Message-ID:  <4FC6ED0B.5080000@cs.duke.edu>
In-Reply-To: <20120528161240.GA38291@onelab2.iet.unipi.it>
References:  <20120528161240.GA38291@onelab2.iet.unipi.it>

On 05/28/12 12:12, Luigi Rizzo wrote:
> I am doing some experiments with implementing a software bridge
> between virtual machines, using netmap as the communication API.
>
> I have a first prototype up and running and it is quite fast (10 Mpps
> with 60-byte frames, 4 Mpps with 1500 byte frames, compared to the
> ~500-800Kpps @60 bytes that you get with the tap interface used by
> openvswitch or the native linux bridging).
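To put the quoted rates in perspective, here is a small back-of-the-envelope sketch of the per-packet time budget each rate implies. The rates (10 Mpps for the netmap bridge, 500-800 Kpps for the tap path) are from the message above; treating them as sustained averages, and the helper name `ns_per_packet`, are illustrative assumptions.

```python
# Per-packet time budgets implied by the throughput figures quoted above.
# Assumption: rates are sustained averages over a single forwarding path.

def ns_per_packet(mpps):
    """Nanoseconds available per packet at a given rate in Mpps."""
    return 1e3 / mpps  # 1 Mpps -> 1000 ns per packet

netmap_budget = ns_per_packet(10)    # 10 Mpps with 60-byte frames
tap_budget = ns_per_packet(0.65)     # midpoint of the 500-800 Kpps range

print(f"netmap bridge: {netmap_budget:.0f} ns/packet")   # ~100 ns
print(f"tap path:      {tap_budget:.0f} ns/packet")      # ~1500 ns
```

At ~100 ns per packet there is no room for a syscall and a copy on every frame, which is consistent with the point below that the tap path pays for both.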

That is awesome!

>    - and of course, using PCI passthrough you get more or less hw speed
>      (constrained by the OS), but need support from an external switch
>      or the NIC itself to do forwarding between different ports.
>    anything else ?

In terms of PCI passthrough / SR-IOV there are the emerging/competing
EVB and VEPA standards to allow VM<->VM communication to
go on the wire to a "real" switch, then back to the correct VM.

> * any high-performance virtual switching solution around ?
>    As mentioned, i have measured native linux bridging and in-kernel ovs
>    and the numbers are above (not surprising; the tap involves a syscall
>    on each packet if i am not mistaken, and internally you need a
>    data copy)

You should probably compare to ESXi.  I've seen ~1Mpps going to or
from 1..N VMs and in or out a port on a 10GbE interface with ESX4
and newer.

Drew


