Date: Sat, 15 Jun 2013 13:26:10 +0200
From: Luigi Rizzo <luigi@onelab2.iet.unipi.it>
To: Alfred Perlstein
Cc: freebsd-current@freebsd.org
Subject: Re: copyin()/copyout() constraints ?

On Fri, Jun 14, 2013 at 02:59:57PM -0700, Alfred Perlstein wrote:
> On 6/14/13 9:38 AM, Luigi Rizzo wrote:
> > On Fri, Jun 14, 2013 at 12:07:29PM -0400, John Baldwin wrote:
> >> On Wednesday, June 12, 2013 2:36:52 pm Alfred Perlstein wrote:
> >>> On 6/12/13 11:01 AM, Luigi Rizzo wrote:
> >>>> hi,
> >>>> is it possible to run copyin() or copyout() in one of these cases:
> >>>> 1. while holding a spinlock
> >>>> 2. while holding a regular mutex/lock
> >>>> 3. while holding a read lock (on an RWLOCK or RMLOCK)
> >>>> 4. while holding a write lock (on an RWLOCK or RMLOCK)
> >>>>
> >>>> I suspect #1 is forbidden, but am a bit unclear about the other cases.
> >>> No on all of the above unless the memory is wired.
> >
> > OK, I suppose I'll move to an sx lock, which I have been told
> > allows me to sleep?
> >
> > My use case is that, while I run the copyin() and possibly take a
> > page fault, nobody must destroy the destination buffer. So I wanted
> > to hold a read lock (sx_slock()?) in the thread doing the copy
> > (there may be several writers to different parts of the destination),
> > and a write lock (sx_xlock()?) for the other thread which may
> > destroy the buffer.
>
> We may be putting the cart before the horse, or the horse into the
> cart, or something. :)
>
> You may want to just wire the user buffer so it can't get ripped out
> from under you.

I'll investigate, but I am not sure I can afford the cost of wiring
and unwiring every single buffer.

My application is a VALE/netmap switch interconnecting two virtual
machines, as in the diagram below: B and C are netmap buffers and are
wired (in the host); A is an mbuf/sk_buff within the guest OS (so for
the host it is not wired).

The current code is able to push 5-6 Mpps with 3 copies:
  A->B (done in userspace by a qemu thread for VM1),
  B->C (a memcpy in the kernel of the host),
  C->D (done in userspace by a qemu thread for VM2).

With "indirect buffers" in netmap/VALE, I can eliminate the A->B copy
and do A->C with a copyin() in the kernel of the host.
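To make this concrete, the protection scheme I was describing would
look roughly like the sketch below (this is not the actual netmap
code, and all the names are invented): copier threads hold the lock
shared around the copyin(), and the thread that tears the buffer down
takes it exclusive, so the destination cannot go away while a copier
sleeps on a page fault.

/*
 * Minimal sketch of the sx(9)-based scheme; all names are made up.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/systm.h>		/* copyin() */
#include <sys/lock.h>
#include <sys/sx.h>

struct dst_ring {
	struct sx	 sx;	/* copiers shared, teardown exclusive */
	void		*buf;	/* wired kernel buffer (e.g. netmap buffer C) */
	size_t		 len;
};

static void
ring_init(struct dst_ring *r, void *buf, size_t len)
{
	sx_init(&r->sx, "dstring");
	r->buf = buf;
	r->len = len;
}

/* Called from the copier thread(s): copy a guest buffer (A) into C. */
static int
ring_copy_from_user(struct dst_ring *r, const void *uaddr, size_t len)
{
	int error;

	sx_slock(&r->sx);		/* shared: several concurrent copiers */
	if (r->buf == NULL || len > r->len)
		error = ENXIO;		/* ring already torn down */
	else
		error = copyin(uaddr, r->buf, len); /* may sleep on a fault */
	sx_sunlock(&r->sx);
	return (error);
}

/* Called by the thread that may destroy the buffer. */
static void
ring_destroy(struct dst_ring *r)
{
	sx_xlock(&r->sx);		/* waits for all shared holders */
	/* ... actual release of the buffer memory goes here ... */
	r->buf = NULL;
	sx_xunlock(&r->sx);
	sx_destroy(&r->sx);
}

The copyin() is allowed to sleep inside the shared section because sx
locks are sleepable, which is exactly what a mutex or rwlock would not
give me.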
But the per-packet budget is minuscule, and I am afraid that doing an
unconditional vslock() on each buffer is going to be too expensive
(and then I should also unwire the page?).

+------------+   +-------------------------------+   +--------------+
|    VM1     |   |          VALE switch          |   |     VM2      |
|            |   |                               |   |              |
|   mbuf     |   |   .-----.           .-----.   |   |    mbuf      |
| .------.   |   |   |B    |  memcpy   |C    |   |   |   .-----.    |
| |A     +---------->|     +---------->|     +---------->|D    |    |
| |      |   |   |   |     |  (now)    |     |   |   |   |     |    |
| |      |   |   |   '-----'           '-+---'   |   |   |     |    |
| |      |   |   |            copyin     ^       |   |   |     |    |
| |      +-------------------------------'       |   |   |     |    |
| '------'   |   |   (with indirect buffers)     |   |   '-----'    |
|            |   |                               |   |              |
+------------+   +-------------------------------+   +--------------+

cheers
luigi
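P.S. for reference, the per-packet wire/copy/unwire sequence I am
afraid of would be roughly the following (function and variable names
are made up); vslock()/vsunlock() go through the VM system twice for
every packet, which at 5-6 Mpps is what I expect to be too expensive:

/*
 * Hypothetical per-packet path if the guest buffer (A) had to be
 * wired around every copy.
 */
#include <sys/param.h>
#include <sys/systm.h>		/* copyin(), vslock(), vsunlock() */

static int
copy_one_packet(void *uaddr, void *dst_nmbuf, size_t len)
{
	int error;

	error = vslock(uaddr, len);	/* fault in and wire the user pages */
	if (error != 0)
		return (error);
	/* pages are now resident, so the copy cannot sleep */
	error = copyin(uaddr, dst_nmbuf, len);
	vsunlock(uaddr, len);		/* unwire them again */
	return (error);
}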