Date: Mon, 18 Mar 2013 13:20:30 +0100
From: "Alexander V. Chernikov" <melifaro@ipfw.ru>
To: Andre Oppermann <andre@freebsd.org>
Cc: Sami Halabi <sodynet1@gmail.com>, "Alexander V. Chernikov" <melifaro@FreeBSD.org>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject: Re: MPLS
Message-ID: <3659B942-7C37-431F-8945-C8A5BCD8DC67@ipfw.ru>
In-Reply-To: <514649A5.4090200@freebsd.org>
References: <CAEW%2Bogb_b6fYLvcEJdhzRnoyjr0ORto9iNyJ-iiNfniBRnPxmA@mail.gmail.com> <CAEW%2BogZTE4Uw-0ROEoSex=VtC%2B0tChupE2RAW5RFOn=OQEuLLw@mail.gmail.com> <CAEW%2BogYbCkCfbFHT0t2v-VmqUkXLGVHgAHPET3X5c2DnsT=Enw@mail.gmail.com> <5146121B.5080608@FreeBSD.org> <514649A5.4090200@freebsd.org>
On 17.03.2013, at 23:54, Andre Oppermann <andre@freebsd.org> wrote:

> On 17.03.2013 19:57, Alexander V. Chernikov wrote:
>> On 17.03.2013 13:20, Sami Halabi wrote:
>>>> OTOH OpenBSD has a complete implementation of MPLS out of the box, maybe
>> Their control plane code is mostly useless due to its design approach (routing daemons talk via the kernel).
>
> What's your approach?

It is actually not mine. We discussed this a bit in the radix-related thread. Generally quagga/bird (and other high-performance hardware-accelerated and software routers) have a feature-rich RIB from which the best routes (possibly multipath) are installed into the kernel FIB. The kernel's main task should be to do efficient lookups, while every other advanced feature should be implemented in userland.

>> Their data plane code, well.. Yes, we can use some defines from their headers, but that's all :)
>>>> porting it would be short and more straightforward than porting the Linux LDP
>>>> implementation of BIRD.
>>
>> It is not a 'Linux' implementation. LDP itself is cross-platform.
>> The most tricky place here is the control plane.
>> However, making _fast_ MPLS switching is tricky too, since it requires changes in our netisr/ethernet
>> handling code.
>
> Can you explain what changes you think are necessary and why?

We definitely need the ability to dispatch a chain of mbufs - this was already discussed in the Intel RX ring lock thread on -net.

Currently a significant number of drivers support interrupt moderation, permitting several/tens/hundreds of packets to be received per interrupt. For each packet we have to run some basic checks, PFIL hooks, netisr code and L3 code, resulting in many locks being acquired/released for each packet. Typically we rely on the NIC to put a packet into a given queue (direct isr), which works badly for non-hashable types of traffic like GRE, PPPoE, MPLS. Additionally, the hashing function is either standard (from M$ NDIS) or documented, permitting someone malicious to generate 'special' traffic matching a single queue. Currently, even if we add an m2flowid/m2cpu function able to hash, say, GRE or MPLS, it is inefficient since we have to lock/unlock netisr queues for every packet.

I'm thinking of:
* utilizing the m_nextpkt field in the mbuf header
* adding some nh_chain flag to netisr. If a given netisr does not support the flag and nextpkt is not NULL, we simply call that netisr in a loop.
* a netisr hash function that accepts an mbuf 'chain' and a pointer to an array (of size N * sizeof(ptr)), sorts the mbufs into N netisr queues, and saves the list heads to the supplied array. After that we put the resulting lists onto the appropriate queues (a rough sketch of this sorting step is appended below).
* teaching the ethersubr RX code to deal with mbuf chains (not an easy one)
* adding some partial support for handling chains to the fastfwd code

> --
> Andre
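Roughly, the chain-sorting step I have in mind could look like the following minimal user-space sketch. This is not kernel code: struct pkt stands in for struct mbuf and its m_nextpkt link, the flow id stands in for whatever hash m2flowid/m2cpu would compute, and NQUEUES, chain_sort() and the array layout are made-up illustrations, not existing netisr API.

#include <stddef.h>

#define NQUEUES 4	/* hypothetical number of netisr queues */

/* Stand-in for struct mbuf: only the m_nextpkt-style link and a flow id. */
struct pkt {
	struct pkt	*p_nextpkt;	/* models mbuf m_nextpkt chaining */
	unsigned int	 p_flowid;	/* models a precomputed flow hash */
};

/*
 * Sort a singly-linked chain of packets into NQUEUES per-queue lists,
 * saving the list heads (and tails, for cheap appends) into the arrays
 * supplied by the caller.  This mirrors the idea of hashing a whole RX
 * chain first and then enqueueing each sub-list with a single
 * lock/unlock, instead of locking the netisr queue per packet.
 */
static void
chain_sort(struct pkt *chain, struct pkt *heads[], struct pkt *tails[])
{
	struct pkt *p, *next;
	unsigned int q;

	for (q = 0; q < NQUEUES; q++)
		heads[q] = tails[q] = NULL;

	for (p = chain; p != NULL; p = next) {
		next = p->p_nextpkt;
		p->p_nextpkt = NULL;
		q = p->p_flowid % NQUEUES;	/* hypothetical hash step */
		if (heads[q] == NULL)
			heads[q] = p;
		else
			tails[q]->p_nextpkt = p;
		tails[q] = p;
	}
	/*
	 * A real implementation would now take each queue lock once and
	 * append heads[q]..tails[q] in one operation per non-empty list.
	 */
}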
