Date: Mon, 31 Mar 2014 22:41:13 +0300
From: Mikolaj Golub
To: Martin Matuska
Cc: freebsd-pf@FreeBSD.org
Subject: Re: CFR projects/pf: vnet awareness for pf_overloadqueue
Message-ID: <20140331194109.GA17582@gmail.com>
In-Reply-To: <5337D55A.6030607@FreeBSD.org>

On Sun, Mar 30, 2014 at 10:27:06AM +0200, Martin Matuska wrote:
> Hi,
>
> With the pf_mtag_z patch applied, the second patch that fixes the
> panics I experience is the overload queue patch.
>
> I have looked into solving this via context (adding a "struct vnet"
> member to pf_overload_entry). This leaves unsolved problems: first,
> we now have vnet information per-entry instead of per-queue.
>
> There are two places in pf_overload_task() where we are not
> processing an entry but still need vnet information:
>
> 1. V_pf_idhash[i] in pf_overload_task():
>
>     for (int i = 0; i <= pf_hashmask; i++) {
>             struct pf_idhash *ih = &V_pf_idhash[i];
>             struct pf_state_key *sk;
>             struct pf_state *s;
>
>             PF_HASHROW_LOCK(ih);
>             LIST_FOREACH(s, &ih->states, entry) {
>
> 2. The end of pf_overload_task(), but that is only the debug tunable
> V_pf_status.debug:
>
>     if (V_pf_status.debug >= PF_DEBUG_MISC)
>             printf("%s: %u states killed", __func__, killed);
>
> On the other hand, if we want to keep per-vnet overload queues, then
> it makes sense to store the vnet information at the queue level.
> If we pack vnet information into each entry and the overload queue
> has global locks anyway, why not keep a single global queue with
> entries from different vnets?
>
> In its current state the code panics whenever pf_overload_task() is
> fired, because the vnet context is missing. It needs to be fixed one
> way or another. A patch for adding per-queue vnet information is
> attached.

Martin, I think you missed my point in the message to src@. You don't
need to embed the vnet into pf_overloadqueue. What you need is a way
to access it in the right vnet context, i.e. by passing the vnet as
an argument to pf_overload_task, as in the diff below.
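For reference, a shortened view of why the handler panics without
this: every V_ variable in the snippets above is per-vnet storage
reached through the curvnet pointer (macros simplified from
sys/net/vnet.h and pfvar.h):

	#define	VNET(n)		VNET_VNET(curvnet, n)	/* vnet.h */
	#define	V_pf_idhash	VNET(pf_idhash)		/* pfvar.h */

	/*
	 * A taskqueue thread runs with curvnet unset, so evaluating
	 * V_pf_idhash there dereferences a NULL curvnet and panics.
	 */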
With that, the change becomes:

pf_vnet_initialize():

-	TASK_INIT(&V_pf_overloadtask, 0, pf_overload_task, &V_pf_overloadqueue);
+	TASK_INIT(&V_pf_overloadtask, 0, pf_overload_task, curvnet);

pf_overload_task(void *c, int pending):

 {
+	struct vnet *vnet;
 	struct pf_overload_head queue;
 	struct pfr_addr p;
 	struct pf_overload_entry *pfoe, *pfoe1;
 	uint32_t killed = 0;

+	vnet = (struct vnet *)c;
+	CURVNET_SET(vnet);
 	PF_OVERLOADQ_LOCK();
-	queue = *(struct pf_overload_head *)c;
-	SLIST_INIT((struct pf_overload_head *)c);
+	queue = V_pf_overloadqueue;
+	SLIST_INIT(&V_pf_overloadqueue);
 	PF_OVERLOADQ_UNLOCK();

--
Mikolaj Golub
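P.S. With the change applied, the whole handler would look roughly
like this (a sketch, untested; note the CURVNET_RESTORE() before
returning, which the hunks above stop short of):

	static void
	pf_overload_task(void *c, int pending)
	{
		struct pf_overload_head queue;
		struct pfr_addr p;
		struct pf_overload_entry *pfoe, *pfoe1;
		uint32_t killed = 0;

		/* Enter the vnet this task was queued for. */
		CURVNET_SET((struct vnet *)c);

		/* Steal the pending entries while holding the lock. */
		PF_OVERLOADQ_LOCK();
		queue = V_pf_overloadqueue;
		SLIST_INIT(&V_pf_overloadqueue);
		PF_OVERLOADQ_UNLOCK();

		/* ... process entries and flush states as the current code does ... */

		if (V_pf_status.debug >= PF_DEBUG_MISC)
			printf("%s: %u states killed", __func__, killed);

		/* Drop the vnet context before returning to the taskqueue. */
		CURVNET_RESTORE();
	}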