Date: Fri, 18 Dec 2015 21:07:32 -0500
From: Patrick Kelsey <kelsey@ieee.org>
To: "Jonathan T. Looney" <jtl@freebsd.org>
Cc: Ryan Stone <rysto32@gmail.com>, "freebsd-transport@freebsd.org" <freebsd-transport@freebsd.org>, Gleb Smirnoff <glebius@freebsd.org>
Subject: Re: Extending FIBs to support multi-tenancy
Message-ID: <CAD44qMWbLMO+D0qsFs6=zb64TW_VQTOKVSTHC3QrND8_+UDTzA@mail.gmail.com>
In-Reply-To: <D29A146A.4DCC0%jlooney@juniper.net>
References: <CAFMmRNxVUDNQ-H=r24iOQOAbnvXi17s77HC-ap+4_K1AHEbSvA@mail.gmail.com> <D29A146A.4DCC0%jlooney@juniper.net>
On Fri, Dec 18, 2015 at 8:32 PM, Jonathan T. Looney <jtl@freebsd.org> wrote:
> On 12/18/15, 5:26 PM, "owner-freebsd-transport@freebsd.org on behalf of
> Ryan Stone" <owner-freebsd-transport@freebsd.org on behalf of
> rysto32@gmail.com> wrote:
>
> >- they may use independent routing tables
> [...]
> >- traffic from different tenant networks is not guaranteed to be
> >segregated in any way -- it might all come in the same network
> >interface, without any vlan tagging or any other encapsulation that
> >might differentiate tenant networks
>
> The combination of these two requirements seems slightly odd to me.
> Usually, you need separate routing tables because you have separate
> interfaces. When you have shared interfaces, you can usually use the
> same routing table.
>
> I think it might help to have more information about the reasoning for
> these requirements, as it seems that this combination is what is leading
> you towards making the FIB assignment be an address property.
>
> >1)
> >We don't really want to change all of our services to instantiate one
> >listening socket for every tenant network. Instead we're looking at
> >implementing (and upstreaming) a kernel extension that allows a
> >listening socket to be wildcarded across all FIBs (note: yesterday I
> >described this feature as allowing us to pick-and-choose FIBs, but
> >people internally have convinced me that a wildcard match would make
> >their lives significantly easier). When a new connection attempt to a
> >listening socket in this mode is accepted, the socket would not inherit
> >its FIB from the listening socket. Instead, it would be set based on
> >the local IP address of the connection.
>
> Makes sense. My employer does something similar in their stack: listen
> sockets can be assigned to a particular FIB or be wildcard entries that
> listen in all FIBs. We haven't noticed any scaling problems, but we
> typically don't have high connection setup rates, either.
>
> In any case, I think this makes sense.

I did have an earlier concern that the worst-case wildcard search time for
an inpcb lookup might be doubled, depending on the desired properties of
the FIB wildcarding. That would only be true if the FIB number were made
part of the hash key in order to support the desired behavior (in that
case, twice as many buckets might need to be searched), but I don't see
that as necessary to achieve what's being described here. With the FIB
remaining outside the hash key, the only impact to lookup would be that if
a wildcard-FIB inpcb is encountered during a bucket walk, the remainder of
the bucket would have to be walked to rule out a match with a specific
FIB, which is a relatively small cost that would only be incurred by
applications using the wildcard-FIB feature.
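To make that bucket-walk idea concrete, here is a toy sketch. The
structure and field names are stand-ins, not the actual in_pcblookup
code, and FIB_WILDCARD stands for the hypothetical "listen on all FIBs"
marker being proposed:

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-ins for the kernel structures; names are illustrative only. */
    #define FIB_WILDCARD ((uint32_t)-1)  /* hypothetical "all FIBs" marker */

    struct toy_inpcb {
        uint32_t laddr;          /* local address (host order, for brevity) */
        uint16_t lport;          /* local port */
        uint32_t fibnum;         /* owning FIB, or FIB_WILDCARD */
        struct toy_inpcb *next;  /* next entry in the same hash bucket */
    };

    /*
     * Walk one hash bucket.  Because the FIB is not part of the hash key,
     * a specific-FIB entry and a wildcard-FIB entry for the same
     * address/port land in the same bucket.  An exact-FIB match returns
     * immediately; a wildcard match is remembered and only returned once
     * the rest of the bucket has been walked to rule out an exact match.
     */
    static struct toy_inpcb *
    bucket_lookup(struct toy_inpcb *bucket, uint32_t laddr, uint16_t lport,
        uint32_t fibnum)
    {
        struct toy_inpcb *inp, *wild = NULL;

        for (inp = bucket; inp != NULL; inp = inp->next) {
            if (inp->laddr != laddr || inp->lport != lport)
                continue;
            if (inp->fibnum == fibnum)
                return (inp);   /* exact-FIB match wins */
            if (inp->fibnum == FIB_WILDCARD)
                wild = inp;     /* remember it, keep walking */
        }
        return (wild);
    }

Remembering the wildcard candidate keeps it to a single pass over the
bucket; the only extra cost is continuing the walk past a wildcard entry,
and only lookups that encounter one pay it.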
> >
> >2)
> >Currently, FIBs are a property of an interface (struct ifnet). We
> >aren't very enthusiastic about the prospect of having to create
> >thousands of interfaces to support thousands of tenant networks. We
> >would instead like to make the FIB a property of the interface address.
>
> I don't understand the motivation for this. It would help if you would
> provide more context for the use case. (See my earlier comments.)
>
> At minimum, before proceeding, you should connect with the folks who had
> talked about wanting to make changes to ifnet. (Among other things, I
> think they had considered creating separate physical interface, logical
> interface, and interface address constructs.) I'm not sure what happened
> to that project, but I think it is still ongoing. I think Gleb (cc'd)
> was involved in that, so you might want to check with him.
>
> >3)
> >The idea of a per-thread FIB has gotten the most pushback so far, and I
> >understand the objection. I'll explain the problem that we're trying to
> >solve with this. When a new request comes in, we may need to perform
> >authentication through LDAP or Kerberos. The problem is that the
> >existing open-source implementations that we are using manage sockets
> >directly. We really don't want to have to go through them and make
> >their APIs entirely FIB-aware -- that is far too much churn. By moving
> >awareness of the current FIB into the kernel, existing calls to
> >socket() can do the right thing transparently.
> >
> >We're not entirely happy with the solution, but the "right" way to
> >solve the problem involves rototilling a number of libraries. Even if
> >we could convince the upstream projects to take patches, it's far more
> >work than we're willing to take on.
>
> Thanks for sharing more details on the use case. It certainly helps
> clarify the reasoning.
>
> However, I wonder if this really solves all of your problems. For
> example, you talk about needing to perform LDAP or Kerberos
> authentication. You are already going to need to make your application
> smart enough to figure out which servers to use based on the source of
> the incoming request. That may or may not require adding intelligence
> to your libraries to give you enough information to identify the
> incoming connection.

I believe what Ryan is saying is that he would be using an INADDR_ANY,
FIB_ANY listen for a given service, and for any incoming connection, the
FIB would be chosen based on the local address used in that connection.
That is what drives the constraint he gave that a given service lives at
a unique IP address across all tenant networks.

> Further, per-thread FIBs may not solve your scaling problem. You
> initially stated that your objection to VNET was that you would need a
> minimum of "A * B * C threads to ensure that any given service on any
> single tenant network could fully utilize the system's resources to
> process requests". If you assign threads to a particular FIB, then you
> are back in the A * B * C scaling model that you didn't want.

I think it would be reduced to A * C threads, where A is the number of
services and C the number of CPUs -- what you would drop is the B
dimension (replication of service connections across all tenant networks).

> However, on the other hand, if you maintain a smaller pool of threads
> and continually reassign their FIB, you could hit interesting problems
> if any of your libraries implement their own thread pools or
> event-driven libraries (e.g. libisc2). In those cases, they may try to
> switch contexts between connections as events occur. How will you
> ensure the thread's FIB is always assigned correctly? It seems like
> this could become quite complicated, depending on the exact situation.
>
> Per-thread FIBs have a lot of potential concerns, ESPECIALLY when
> implemented by programs or libraries that aren't expecting to work this
> way. The biggest concerns I see are complexity and troubleshooting: you
> need to make sure that every thread knows which FIB it is using and
> only handles connections for that FIB. If you make one mistake, your
> connection can suddenly go to the wrong place.
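For what it's worth, a per-socket FIB can already be selected explicitly
today with the SO_SETFIB socket option (in addition to the process-wide
setfib(1)/setfib(2) default); a minimal userland sketch, with the
socket_in_fib() helper made up for illustration:

    #include <sys/types.h>
    #include <sys/socket.h>

    #include <err.h>

    /*
     * Create a socket and attach it to a specific FIB with the existing
     * SO_SETFIB socket option, instead of relying on the process-wide
     * (or a proposed per-thread) default FIB.
     */
    static int
    socket_in_fib(int domain, int type, int protocol, int fib)
    {
        int s;

        s = socket(domain, type, protocol);
        if (s == -1)
            err(1, "socket");
        if (setsockopt(s, SOL_SOCKET, SO_SETFIB, &fib, sizeof(fib)) == -1)
            err(1, "setsockopt(SO_SETFIB)");
        return (s);
    }

The difficulty Ryan describes is exactly that the libraries in question
call socket() themselves, so there is no convenient place to drop this
call in -- which is what motivates either a per-thread default or the
wrapper approach below.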
There's an earlier message of mine that got sent off for moderation (due
to source address and my subscription config) that may yet surface, in
which I suggest leaving FIB selection policy in the application by using
wrapper functions around the desired set of socket library calls (see
ld(1) --wrap).

-Patrick
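A minimal sketch of that wrapper idea, assuming GNU ld's --wrap option and
the existing SO_SETFIB socket option; the app_set_fib() helper and the
thread-local policy variable are made up for illustration:

    #include <sys/types.h>
    #include <sys/socket.h>

    /*
     * Link with "-Wl,--wrap=socket" so that unmodified library code
     * calling socket() is routed through __wrap_socket(), which applies
     * whatever FIB the application last selected for this thread via
     * app_set_fib().
     */

    int __real_socket(int, int, int);  /* resolved by the linker to the real socket() */

    static __thread int current_fib;   /* FIB policy stays in the application */

    void
    app_set_fib(int fib)
    {
        current_fib = fib;
    }

    int
    __wrap_socket(int domain, int type, int protocol)
    {
        int s;

        s = __real_socket(domain, type, protocol);
        if (s != -1 && (domain == AF_INET || domain == AF_INET6))
            (void)setsockopt(s, SOL_SOCKET, SO_SETFIB, &current_fib,
                sizeof(current_fib));
        return (s);
    }

The application calls app_set_fib() before handing work to, say, the LDAP
or Kerberos code, e.g. after building with something like
"cc -o service service.c fibwrap.c -Wl,--wrap=socket", so all of the FIB
selection policy stays in code you control; other calls could be wrapped
the same way if needed.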