From: Sami Halabi <sodynet1@gmail.com>
Date: Thu, 5 Jan 2012 15:43:45 +0200
To: Gleb Smirnoff, Alexander Motin
Cc: freebsd-net@freebsd.org
Subject: Re: ng_mppc_decompress: too many (4094) packets dropped, disabling node

Hmm..
Something strange: I set

net.graph.recvspace=8388608
net.graph.maxdgram=8388608

and suddenly got disconnections and logs like:

Jan 5 16:10:01 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:11 mpd2 mpd: PPTP: NgMkSockNode: No buffer space available

The mpd log is as follows:

Jan 5 16:10:01 mpd2 mpd: Incoming L2TP packet from 172.25.229.3 1701
Jan 5 16:10:01 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:01 mpd2 mpd: Incoming L2TP packet from 172.27.173.112 1701
Jan 5 16:10:01 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:03 mpd2 mpd: Incoming L2TP packet from 172.19.246.206 1701
Jan 5 16:10:03 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:06 mpd2 mpd: Incoming L2TP packet from 172.27.173.112 1701
Jan 5 16:10:06 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:11 mpd2 mpd: [L-14] Accepting PPTP connection
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: OPEN event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: Open event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: state change Initial --> Starting
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: LayerStart
Jan 5 16:10:11 mpd2 mpd: [L-14] PPTP: attaching to peer's outgoing call
Jan 5 16:10:11 mpd2 mpd: PPTP: NgMkSockNode: No buffer space available
Jan 5 16:10:11 mpd2 mpd: [L-14] PPTP call cancelled in state CONNECTING
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: DOWN event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: Close event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: state change Starting --> Initial
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: LayerFinish
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: Down event
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: SHUTDOWN event
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: Shutdown
Jan 5 16:10:11 mpd2 mpd: [L-14] Accepting PPTP connection
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: OPEN event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: Open event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: state change Initial --> Starting
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: LayerStart
Jan 5 16:10:11 mpd2 mpd: [L-14] PPTP: attaching to peer's outgoing call
Jan 5 16:10:11 mpd2 mpd: PPTP: NgMkSockNode: No buffer space available
Jan 5 16:10:11 mpd2 mpd: [L-14] PPTP call cancelled in state CONNECTING
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: DOWN event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: Close event
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: state change Starting --> Initial
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: LayerFinish
Jan 5 16:10:11 mpd2 mpd: [L-14] LCP: Down event
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: SHUTDOWN event
Jan 5 16:10:11 mpd2 mpd: [L-14] Link: Shutdown
Jan 5 16:10:16 mpd2 mpd: Incoming L2TP packet from 172.27.173.112 1701
Jan 5 16:10:16 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:21 mpd2 mpd: Incoming L2TP packet from 172.25.229.3 1701
Jan 5 16:10:21 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available
Jan 5 16:10:23 mpd2 mpd: Incoming L2TP packet from 172.19.246.206 1701
Jan 5 16:10:23 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space available

Now I have returned to my original sysctls:

net.graph.recvspace=40960
net.graph.maxdgram=40960

and everything seems fine. Any ideas?
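For reference, a minimal sketch (an illustration, not code from this thread) of reading and setting one of these limits from C with sysctlbyname(3) instead of sysctl(8). It assumes the sysctl is a u_long, as declared in ng_socket.c, and needs root to write:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	/* 40960 is the poster's original value, not a recommendation. */
	u_long oldval, newval = 40960;
	size_t oldlen = sizeof(oldval);

	/* Fetch the current limit and install the new one in a single call. */
	if (sysctlbyname("net.graph.recvspace", &oldval, &oldlen,
	    &newval, sizeof(newval)) == -1)
		err(1, "sysctlbyname(net.graph.recvspace)");
	printf("net.graph.recvspace: %lu -> %lu\n", oldval, newval);
	return (0);
}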
Sami

2012/1/5 Gleb Smirnoff:

> On Thu, Jan 05, 2012 at 01:21:12PM +0200, Sami Halabi wrote:
> S> Hi
> S>
> S> after I upgraded the recvspace, here are the results:
> S> # ./a
> S> Rec'd response "getsessconfig" (4) from "[22995]:":
> S> Args: { session_id=0xcf4 peer_id=0x1bdc control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[228bd]:":
> S> Args: { session_id=0xee79 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22883]:":
> S> Args: { session_id=0x1aa2 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[227f3]:":
> S> Args: { session_id=0x1414 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22769]:":
> S> Args: { session_id=0x913f peer_id=0x4c44 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2272f]:":
> S> Args: { session_id=0x4038 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[225df]:":
> S> Args: { session_id=0xc460 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[225c5]:":
> S> Args: { session_id=0xe2b1 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[224ef]:":
> S> Args: { session_id=0xf21d peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[223e5]:":
> S> Args: { session_id=0x6d95 peer_id=0xf423 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2228c]:":
> S> Args: { session_id=0xd06c peer_id=0x8288 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22274]:":
> S> Args: { session_id=0x8425 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22218]:":
> S> Args: { session_id=0xedc7 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[221fc]:":
> S> Args: { session_id=0x4474 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[221ef]:":
> S> Args: { session_id=0xd2bb peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[221d5]:":
> S> Args: { session_id=0x9980 peer_id=0xa9e6 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2210d]:":
> S> Args: { session_id=0x97f peer_id=0xe8e control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[220c5]:":
> S> Args: { session_id=0x456 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2201a]:":
> S> Args: { session_id=0x1c38 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21d9c]:":
> S> Args: { session_id=0x21e5 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21c73]:":
> S> Args: { session_id=0xe657 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[219c1]:":
> S> Args: { session_id=0xc517 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2199f]:":
> S> Args: { session_id=0x1417 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21913]:":
> S> Args: { session_id=0x2eef peer_id=0x83f4 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21737]:":
> S> Args: { session_id=0xdbaa peer_id=0xb21b control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[216ce]:":
> S> Args: { session_id=0x60 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21560]:":
> S> Args: { session_id=0x4390 peer_id=0x6baa control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2142c]:":
> S> Args: { session_id=0xbcb5 peer_id=0x8ef8 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21231]:":
> S> Args: { session_id=0x8335 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21200]:":
> S> Args: { session_id=0x2b16 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[211f2]:":
> S> Args: { session_id=0x8022 peer_id=0x4095 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[211d7]:":
> S> Args: { session_id=0x51b7 peer_id=0xf716 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21115]:":
> S> Args: { session_id=0x98a1 peer_id=0xd453 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[20699]:":
> S> Args: { session_id=0xb179 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[205a1]:":
> S> Args: { session_id=0x3328 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2052f]:":
> S> Args: { session_id=0x55f peer_id=0x2a4b control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[20160]:":
> S> Args: { session_id=0xe4a5 peer_id=0x5b6 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1ff54]:":
> S> Args: { session_id=0xaa4d peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1fd8e]:":
> S> Args: { session_id=0xd9d8 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1e9bf]:":
> S> Args: { session_id=0xac50 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1dc3e]:":
> S> Args: { session_id=0x5124 peer_id=0xd652 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1d8b4]:":
> S> Args: { session_id=0xf5b9 peer_id=0xcd control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1d79e]:":
> S> Args: { session_id=0x9a87 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1d216]:":
> S> Args: { session_id=0xe89d peer_id=0xd74a control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1c78f]:":
> S> Args: { session_id=0xe3e5 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[19344]:":
> S> Args: { session_id=0xf452 peer_id=0xbf7e control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[18fb3]:":
> S> Args: { session_id=0x11b peer_id=0x4296 control_dseq=1 enable_dseq=1 }
>
> Hmm, looks like enable_dseq=1 everywhere. Then I have no idea yet under
> which circumstances ng_mppc can receive an out-of-order datagram.
>
> --
> Totus tuus, Glebius.

--
Sami Halabi
Information Systems Engineer
NMS Projects Expert
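For reference, a minimal sketch of what a "getsessconfig" query along the lines of the "./a" program quoted above might look like with libnetgraph. This is an assumption about what that program does, not its actual source; the node path "[22995]:" and session id 0xcf4 are placeholders taken from the output above. Compile with "cc query.c -lnetgraph":

#include <sys/types.h>
#include <stdint.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <netgraph.h>
#include <netgraph/ng_l2tp.h>

int
main(void)
{
	struct ng_mesg *resp;
	struct ng_l2tp_sess_config *conf;
	uint16_t session_id = 0xcf4;	/* placeholder from the output above */
	int cs, ds;

	/* Create a transient ng_socket node to talk to the graph. */
	if (NgMkSockNode(NULL, &cs, &ds) == -1)
		err(1, "NgMkSockNode");

	/* Ask the ng_l2tp(4) node for this session's configuration. */
	if (NgSendMsg(cs, "[22995]:", NGM_L2TP_COOKIE,
	    NGM_L2TP_GET_SESS_CONFIG, &session_id, sizeof(session_id)) == -1)
		err(1, "NgSendMsg");
	if (NgAllocRecvMsg(cs, &resp, NULL) == -1)
		err(1, "NgAllocRecvMsg");

	/* Print the same fields the output above shows. */
	conf = (struct ng_l2tp_sess_config *)resp->data;
	printf("session_id=0x%x peer_id=0x%x control_dseq=%d enable_dseq=%d\n",
	    conf->session_id, conf->peer_id,
	    conf->control_dseq, conf->enable_dseq);
	free(resp);
	return (0);
}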