From: Marko Zec
To: Perforce Change Reviews
Date: Wed, 24 Oct 2007 01:38:59 +0200
Subject: Re: PERFORCE change 127942 for review
References: <200710230018.l9N0IO8l020652@repoman.freebsd.org> <200710232314.38149.zec@icir.org> <471E7645.1030503@elischer.org>
In-Reply-To: <471E7645.1030503@elischer.org>
Message-Id: <200710240139.00008.zec@icir.org>

On Wednesday 24 October 2007 00:31:33 Julian Elischer wrote:
> Marko Zec wrote:
> > On Tuesday 23 October 2007 02:49:24 Julian Elischer wrote:
> >> question:
> >>
> >> can processes in two vimages communicate if they both have access
> >> to the same named pipe/fifo in the filesystem?
> >
> > Yes, provided that they open the fifo while they are both attached
> > to the same vnet.  Once the sockets are open, the processes can
> > reassociate to arbitrary vimages, while the sockets remain bound
> > to their original vnets for their entire lifetime.
>
> hmm that's not what I want... what I want is the ability for
> processes in two overlapping vimages to communicate easily without
> incurring the overhead of going through a virtual router.
>
> another possibility is a local: interface (address 127.1.[vnet
> number]) which acts like a local net between the virtual machines.

Uhh, I'd rather not take that path...  This would require at least a)
lots of special casing all around the IP stack; and b) that
vimages/vnets be directly addressable by small integers.  I'd prefer
it if we could work out a solution where symbolic (textual) naming of
vimages/vnets is sufficient for all purposes...

> > As an alternative, we could / should introduce an extended
> > socket() syscall where an additional argument would explicitly
> > specify to which vimage/vnet the new socket should belong.
>
> if a process in the root vimage makes a fifo in
> /vimages/vimage1/usr/tmp/fifo1
>
> and a process in vimage1 (that is chrooted at /vimages/vimage1/)
> opens the fifo at /usr/tmp/fifo1
>
> why can't they communicate?  I'm surprised at this..
You're right: the example you gave above actually works; I just tried
it out (now I'm slightly surprised :).  A rough sketch of the test is
at the end of this mail.  However, netstat -f unix will show the
socket pair in only one of the vimages/vnets...  I don't know why I
thought there was also a prison_check() call somewhere inside or
around unp_connect(), but apparently there isn't...  So while this
obviously works for you, I'm not entirely sure that this is the
behavior we wish to have...

Cheers,

Marko
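
PS: for the record, the test boils down to something like the small
program below.  This is only a minimal sketch: the program name, the
message and the (thin) error handling are made up, and the paths are
the ones from your example.  The writer is run from the root vimage,
the reader from a shell chrooted into /vimages/vimage1 and attached
to vimage1.

/*
 * fifotest.c - minimal sketch of the cross-vimage fifo test.
 *
 * Writer (root vimage):  ./fifotest write /vimages/vimage1/usr/tmp/fifo1
 * Reader (chrooted into /vimages/vimage1):  ./fifotest read /usr/tmp/fifo1
 */
#include <sys/types.h>
#include <sys/stat.h>

#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	const char *msg = "hello across vimages\n";
	char buf[128];
	ssize_t n;
	int fd;

	if (argc != 3)
		errx(1, "usage: %s write|read path", argv[0]);

	if (strcmp(argv[1], "write") == 0) {
		/* Create the fifo if it is not there yet. */
		if (mkfifo(argv[2], 0666) == -1 && errno != EEXIST)
			err(1, "mkfifo");
		/* open() blocks until the reader shows up. */
		if ((fd = open(argv[2], O_WRONLY)) == -1)
			err(1, "open");
		if (write(fd, msg, strlen(msg)) == -1)
			err(1, "write");
	} else {
		if ((fd = open(argv[2], O_RDONLY)) == -1)
			err(1, "open");
		if ((n = read(fd, buf, sizeof(buf) - 1)) == -1)
			err(1, "read");
		buf[n] = '\0';
		printf("got: %s", buf);
	}
	close(fd);
	return (0);
}

The reader gets the data even though the two processes are attached
to different vnets, which matches what you are seeing.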
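
On the extended socket() variant quoted above: just to put something
concrete on the table, the interface I have in mind would look roughly
like the sketch below.  Nothing like this exists yet; the name, the
signature and "vimage1" are all made up for illustration, and the body
of socket_at() is only a placeholder so that the fragment compiles (it
ignores the vnet name and behaves exactly like plain socket() does
today).

/*
 * socket_at() - strawman for an extended socket() call.  The extra
 * argument names the vimage/vnet (symbolically, not by a number) in
 * which the new socket should be created, regardless of which vimage
 * the calling process is currently attached to.
 */
#include <sys/socket.h>

#include <stdio.h>
#include <unistd.h>

static int
socket_at(int domain, int type, int protocol, const char *vnetname)
{
	(void)vnetname;		/* would select the target vnet */
	return (socket(domain, type, protocol));
}

int
main(void)
{
	int s;

	/*
	 * A UDP socket that would end up bound to the vnet of
	 * "vimage1" rather than to the caller's current vnet.
	 */
	s = socket_at(PF_INET, SOCK_DGRAM, 0, "vimage1");
	if (s == -1)
		perror("socket_at");
	else
		close(s);
	return (0);
}

The point of the symbolic name is exactly what I wrote above: no
small-integer addressing of vimages/vnets anywhere in the API.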