From: Julian Stecklina
To: freebsd-virtualization@freebsd.org
Subject: Re: virtio-net vs qemu 1.5.0
Date: Fri, 24 May 2013 20:35:13 +0200
List-Id: "Discussion of various virtualization techniques FreeBSD supports."

Hello,

I filed a bug: http://www.freebsd.org/cgi/query-pr.cgi?pr=178955

Julian

On 05/23/2013 02:00 PM, Julian Stecklina wrote:
> Hello,
>
> I just compiled qemu 1.5.0 and noticed that virtio network (on CURRENT as
> of today) seems to have problems updating the MAC filter table:
>
> vtnet0: error setting host MAC filter table
>
> As far as I understand, if_vtnet.c does the following in
> vtnet_rx_filter_mac: it appends two full struct vtnet_mac_tables (one for
> unicast and one for multicast) to the request. Each consists of the number
> of actual entries in the table plus space for 128 entries in total (most
> of them unused).
>
> The qemu code parses this differently. It first reads the number of
> entries in the first table, skips over that many MAC addresses, and then
> expects the header of the second table (which in our case points to
> zeroed memory). It skips those zero MAC entries as well and then expects
> to have consumed the whole request; since there is still data left, it
> returns an error. The relevant code is in qemu/hw/net/virtio-net.c in
> virtio_net_handle_rx_mode.
>
> Assuming the qemu code is correct (of which I am not sure), the correct
> fix would be to enqueue only as many MACs in the original request as are
> actually used.
> The following (a bit dirty) patch fixes this for me:
>
>
> diff --git a/sys/dev/virtio/network/if_vtnet.c b/sys/dev/virtio/network/if_vtnet.c
> index ffc349a..6f00dfb 100644
> --- a/sys/dev/virtio/network/if_vtnet.c
> +++ b/sys/dev/virtio/network/if_vtnet.c
> @@ -2470,9 +2470,9 @@ vtnet_rx_filter_mac(struct vtnet_softc *sc)
>  	sglist_init(&sg, 4, segs);
>  	error |= sglist_append(&sg, &hdr, sizeof(struct virtio_net_ctrl_hdr));
>  	error |= sglist_append(&sg, &filter->vmf_unicast,
> -	    sizeof(struct vtnet_mac_table));
> +	    sizeof(uint32_t) + ETHER_ADDR_LEN*filter->vmf_unicast.nentries);
>  	error |= sglist_append(&sg, &filter->vmf_multicast,
> -	    sizeof(struct vtnet_mac_table));
> +	    sizeof(uint32_t) + ETHER_ADDR_LEN*filter->vmf_multicast.nentries);
>  	error |= sglist_append(&sg, &ack, sizeof(uint8_t));
>  	KASSERT(error == 0 && sg.sg_nseg == 4,
>  	    ("error adding MAC filtering message to sglist"));
>
> Any virtio guru here to comment on this?
>
> Julian
>
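
For anyone skimming the thread, the size mismatch described above comes
down to simple arithmetic. The stand-alone sketch below only illustrates
it; it assumes (based on the description in the quoted message, not on the
actual if_vtnet.c or qemu sources) that struct vtnet_mac_table is a 32-bit
entry count followed by room for 128 six-byte MAC addresses, with
VTNET_MAX_MAC_ENTRIES and the table layout taken as assumptions for the
example:

/*
 * Rough illustration of the control-message size mismatch described above.
 * The struct layout and VTNET_MAX_MAC_ENTRIES value are assumptions for
 * the sake of the example, not copied from if_vtnet.c.
 */
#include <stdint.h>
#include <stdio.h>

#define ETHER_ADDR_LEN        6
#define VTNET_MAX_MAC_ENTRIES 128

struct vtnet_mac_table {
	uint32_t nentries;                                    /* used entries */
	uint8_t  macs[VTNET_MAX_MAC_ENTRIES][ETHER_ADDR_LEN]; /* fixed space */
} __attribute__((packed));

int main(void)
{
	struct vtnet_mac_table uc = { .nentries = 1 };  /* one unicast MAC */
	struct vtnet_mac_table mc = { .nentries = 0 };  /* no multicast MACs */

	/* What the guest currently enqueues: two full, fixed-size tables. */
	size_t sent = sizeof(uc) + sizeof(mc);

	/*
	 * What the device-side parser walks: the entry count plus only the
	 * used entries, for each of the two tables.
	 */
	size_t consumed =
	    sizeof(uint32_t) + uc.nentries * ETHER_ADDR_LEN +
	    sizeof(uint32_t) + mc.nentries * ETHER_ADDR_LEN;

	printf("guest sends %zu bytes, device consumes %zu, %zu left over\n",
	    sent, consumed, sent - consumed);
	return 0;
}

With one unicast entry and no multicast entries, this prints 1544 bytes
sent versus 14 consumed; the leftover bytes are what make the device
return an error and the guest log "error setting host MAC filter table".
The exact numbers depend on the assumed layout, but the mismatch is the
same either way.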