From owner-freebsd-arch Wed Oct 23 10:25:58 2002
Date: Wed, 23 Oct 2002 19:25:15 +0200
From: Marko Zec <zec@tel.fer.hr>
To: Bill Coutinho
Cc: freebsd-arch@freebsd.org, freebsd-net@freebsd.org
Subject: Re: BSD network stack virtualization + IEEE 802.1Q

Bill Coutinho wrote:

> Sean Chittenden, in the FreeBSD-Arch list, pointed me to your "BSD
> network stack virtualization" site.
>
> What I'm trying to achieve is one box with many independent "virtual
> servers" (using the jail subsystem), but with each virtual server
> attached to a different VLAN using the same physical NIC. This NIC
> would be connected to a switch speaking the 802.1Q protocol.
>
> My question is: is it possible to associate a "virtual stack" with a
> VLAN number on an 802.1Q-enabled network interface, and combine it
> with the jail subsystem's "struct jail"?

Yes, you can do that very easily with the virtualized network stack,
but without using jail(8). Here is a step-by-step example which I hope
accomplishes what you want.

First, we have to create two new virtual images.
Here we also set the hostnames for the new vimages, which is of course
not a mandatory step, but it makes things more comprehensible:

tpx30# vimage -c my_virtual_node1
tpx30# vimage -c my_virtual_node2
tpx30# vimage my_virtual_node1 hostname node1
tpx30# vimage my_virtual_node2 hostname node2

We then create two vlan interfaces and associate them with the physical
ifc and vlan tags:

tpx30# ifconfig vlan0 create
tpx30# ifconfig vlan1 create
tpx30# ifconfig vlan0 vlan 1001 vlandev fxp0
tpx30# ifconfig vlan1 vlan 1002 vlandev fxp0
tpx30# ifconfig
fxp0: flags=8843 mtu 1500
        inet 192.168.201.130 netmask 0xffffff00 broadcast 192.168.201.255
        ether 00:09:6b:e0:d5:fc
        media: Ethernet autoselect (10baseT/UTP)
        status: active
vlan0: flags=8842 mtu 1500
        ether 00:09:6b:e0:d5:fc
        vlan: 1001 parent interface: fxp0
vlan1: flags=8842 mtu 1500
        ether 00:09:6b:e0:d5:fc
        vlan: 1002 parent interface: fxp0
lo0: flags=8049 mtu 16384
        inet 127.0.0.1 netmask 0xff000000
tpx30#

Next, we move (reassign) the vlan interfaces to the appropriate virtual
images. The vlan ifcs will disappear from the current (master) virtual
image:

tpx30# vimage -i my_virtual_node1 vlan0
tpx30# vimage -i my_virtual_node2 vlan1
tpx30# ifconfig
fxp0: flags=8843 mtu 1500
        inet 192.168.201.130 netmask 0xffffff00 broadcast 192.168.201.255
        ether 00:09:6b:e0:d5:fc
        media: Ethernet autoselect (10baseT/UTP)
        status: active
lo0: flags=8049 mtu 16384
        inet 127.0.0.1 netmask 0xff000000
tpx30#

Now we spawn a new interactive shell in one of the created virtual
images. There we can manage the interfaces in the usual way, start new
processes/daemons, configure ipfw...
tpx30# vimage my_virtual_node1
Switched to vimage my_virtual_node1
node1# ifconfig vlan0 1.2.3.4
node1# ifconfig
vlan0: flags=8843 mtu 1500
        inet 1.2.3.4 netmask 0xff000000 broadcast 1.255.255.255
        ether 00:09:6b:e0:d5:fc
        vlan: 1001 parent interface: fxp0@master
lo0: flags=8008 mtu 16384
node1# inetd
node1# exit

Note that you won't be able to change the vlan tag and/or parent
interface from inside the virtual image where the vlan interface
resides; that can be done only in the virtual image that contains the
physical interface (the "master" vimage in this example).

Finally, here is the summary output of the vimage -l command issued in
the master virtual image:

tpx30# vimage -l
"master":
        37 processes, load averages: 0.00, 0.02, 0.00
        CPU usage: 0.26% (0.26% user, 0.00% nice, 0.00% system)
        Nice level: 0, no CPU limit, no process limit, child limit: 15
        2 network interfaces, 2 child vimages
"my_virtual_node2":
        0 processes, load averages: 0.00, 0.00, 0.00
        CPU usage: 0.00% (0.00% user, 0.00% nice, 0.00% system)
        Nice level: 0, no CPU limit, no process limit
        2 network interfaces, parent vimage: "master"
"my_virtual_node1":
        1 processes, load averages: 0.00, 0.00, 0.00
        CPU usage: 0.24% (0.20% user, 0.00% nice, 0.04% system)
        Nice level: 0, no CPU limit, no process limit
        2 network interfaces, parent vimage: "master"

Hope this helps,

Marko
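P.S. The whole procedure above could be bundled into a small sh script.
The sketch below is untested and only strings together the exact
commands from the example (interface fxp0, tags 1001/1002); the
setup_vimage helper and the DRYRUN wrapper are my own additions, not
part of the vimage tools. It defaults to printing the commands rather
than running them, so you can review them before executing on a
vimage-patched kernel:

```shell
#!/bin/sh
# Sketch: create a vimage, clone a vlan ifc bound to the physical trunk,
# and move the vlan ifc into the vimage. DRYRUN=echo prints the commands
# instead of executing them; set DRYRUN= to run them for real.
DRYRUN=echo

PHYS=fxp0    # physical 802.1Q trunk interface, as in the example above

setup_vimage() {
    name=$1 host=$2 vif=$3 tag=$4
    $DRYRUN vimage -c "$name"                       # create virtual image
    $DRYRUN vimage "$name" hostname "$host"         # optional hostname
    $DRYRUN ifconfig "$vif" create                  # clone vlan interface
    $DRYRUN ifconfig "$vif" vlan "$tag" vlandev "$PHYS"
    $DRYRUN vimage -i "$name" "$vif"                # move ifc into vimage
}

setup_vimage my_virtual_node1 node1 vlan0 1001
setup_vimage my_virtual_node2 node2 vlan1 1002
```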