Date:      Thu, 5 Jan 2012 15:43:45 +0200
From:      Sami Halabi <sodynet1@gmail.com>
To:        Gleb Smirnoff <glebius@freebsd.org>, Alexander Motin <mav@freebsd.org>
Cc:        freebsd-net@freebsd.org
Subject:   Re: ng_mppc_decompress: too many (4094) packets dropped, disabling node
Message-ID:  <CAEW+ogaXqHss41LCMCp5nXz3rBTpw4mEijy6h_XSgyCwcSBNUQ@mail.gmail.com>
In-Reply-To: <20120105113427.GL34721@glebius.int.ru>
References:  <CAEW+ogbn6jizawgLCHCcTLMSmdjCKFvPkJa33jrJ5AAnjww=fg@mail.gmail.com> <20111227044754.GK8035@FreeBSD.org> <CAEW+ogY_iHUb=n=G45d5U_r5XfD39YDwgNkowu1QN+eWL5K5Fw@mail.gmail.com> <20111227083503.GP8035@glebius.int.ru> <CAEW+ogYHtvPFqMAM17_fHWzvKAiSqEOyQ3dtQRwmD6DJSHsrEA@mail.gmail.com> <20120105095855.GI34721@glebius.int.ru> <CAEW+ogYLToV4dtW=Y-yQvY1C_YdbqQfEnr87F8dt+F57FxZgkw@mail.gmail.com> <20120105110116.GK34721@glebius.int.ru> <CAEW+ogYQXRbWCwx41rwUMSoh5hDM6eAZcG=-BjzqZj6rBzWGRg@mail.gmail.com> <20120105113427.GL34721@glebius.int.ru>

Hmm..

Something strange happened. I set:
net.graph.recvspace=8388608
net.graph.maxdgram=8388608


and I suddenly got disconnections and logs like:
Jan  5 16:10:01 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:11 mpd2 mpd: PPTP: NgMkSockNode: No buffer space available

The mpd log is as follows:

Jan  5 16:10:01 mpd2 mpd: Incoming L2TP packet from 172.25.229.3 1701
Jan  5 16:10:01 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:01 mpd2 mpd: Incoming L2TP packet from 172.27.173.112 1701
Jan  5 16:10:01 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:03 mpd2 mpd: Incoming L2TP packet from 172.19.246.206 1701
Jan  5 16:10:03 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:06 mpd2 mpd: Incoming L2TP packet from 172.27.173.112 1701
Jan  5 16:10:06 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:11 mpd2 mpd: [L-14] Accepting PPTP connection
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: OPEN event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: Open event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: state change Initial --> Starting
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: LayerStart
Jan  5 16:10:11 mpd2 mpd: [L-14] PPTP: attaching to peer's outgoing call
Jan  5 16:10:11 mpd2 mpd: PPTP: NgMkSockNode: No buffer space available
Jan  5 16:10:11 mpd2 mpd: [L-14] PPTP call cancelled in state CONNECTING
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: DOWN event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: Close event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: state change Starting --> Initial
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: LayerFinish
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: Down event
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: SHUTDOWN event
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: Shutdown
Jan  5 16:10:11 mpd2 mpd: [L-14] Accepting PPTP connection
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: OPEN event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: Open event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: state change Initial --> Starting
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: LayerStart
Jan  5 16:10:11 mpd2 mpd: [L-14] PPTP: attaching to peer's outgoing call
Jan  5 16:10:11 mpd2 mpd: PPTP: NgMkSockNode: No buffer space available
Jan  5 16:10:11 mpd2 mpd: [L-14] PPTP call cancelled in state CONNECTING
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: DOWN event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: Close event
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: state change Starting --> Initial
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: LayerFinish
Jan  5 16:10:11 mpd2 mpd: [L-14] LCP: Down event
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: SHUTDOWN event
Jan  5 16:10:11 mpd2 mpd: [L-14] Link: Shutdown
Jan  5 16:10:16 mpd2 mpd: Incoming L2TP packet from 172.27.173.112 1701
Jan  5 16:10:16 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:21 mpd2 mpd: Incoming L2TP packet from 172.25.229.3 1701
Jan  5 16:10:21 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available
Jan  5 16:10:23 mpd2 mpd: Incoming L2TP packet from 172.19.246.206 1701
Jan  5 16:10:23 mpd2 mpd: L2TP: ppp_l2tp_ctrl_create: No buffer space
available


Now I have just returned to my original sysctls:
net.graph.recvspace=40960
net.graph.maxdgram=40960

and everything seems fine again.

Any ideas?
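For the record, here is how I compare those netgraph sysctls against the kernel-wide socket buffer cap. My guess (unconfirmed) is that 8388608 exceeds kern.ipc.maxsockbuf on this box, so the netgraph socket reservation fails with ENOBUFS, which mpd reports as "No buffer space available":

```shell
# Show the kernel-wide socket buffer cap next to the netgraph buffer sizes.
# Guess (unconfirmed): if net.graph.recvspace / net.graph.maxdgram exceed
# kern.ipc.maxsockbuf, creating a netgraph socket fails with ENOBUFS.
sysctl kern.ipc.maxsockbuf net.graph.recvspace net.graph.maxdgram

# If larger netgraph buffers are really needed, the cap would have to be
# raised first (example value only, untested here):
# sysctl kern.ipc.maxsockbuf=16777216
# sysctl net.graph.recvspace=8388608
# sysctl net.graph.maxdgram=8388608
```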

Sami

2012/1/5 Gleb Smirnoff <glebius@freebsd.org>

> On Thu, Jan 05, 2012 at 01:21:12PM +0200, Sami Halabi wrote:
> S> Hi
> S>
> S> after i upgraded the recvspace here are the results:
> S> # ./a
> S> Rec'd response "getsessconfig" (4) from "[22995]:":
> S> Args:   { session_id=0xcf4 peer_id=0x1bdc control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[228bd]:":
> S> Args:   { session_id=0xee79 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22883]:":
> S> Args:   { session_id=0x1aa2 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[227f3]:":
> S> Args:   { session_id=0x1414 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22769]:":
> S> Args:   { session_id=0x913f peer_id=0x4c44 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[2272f]:":
> S> Args:   { session_id=0x4038 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[225df]:":
> S> Args:   { session_id=0xc460 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[225c5]:":
> S> Args:   { session_id=0xe2b1 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[224ef]:":
> S> Args:   { session_id=0xf21d peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[223e5]:":
> S> Args:   { session_id=0x6d95 peer_id=0xf423 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[2228c]:":
> S> Args:   { session_id=0xd06c peer_id=0x8288 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[22274]:":
> S> Args:   { session_id=0x8425 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[22218]:":
> S> Args:   { session_id=0xedc7 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[221fc]:":
> S> Args:   { session_id=0x4474 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[221ef]:":
> S> Args:   { session_id=0xd2bb peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[221d5]:":
> S> Args:   { session_id=0x9980 peer_id=0xa9e6 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[2210d]:":
> S> Args:   { session_id=0x97f peer_id=0xe8e control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[220c5]:":
> S> Args:   { session_id=0x456 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2201a]:":
> S> Args:   { session_id=0x1c38 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21d9c]:":
> S> Args:   { session_id=0x21e5 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21c73]:":
> S> Args:   { session_id=0xe657 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[219c1]:":
> S> Args:   { session_id=0xc517 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2199f]:":
> S> Args:   { session_id=0x1417 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21913]:":
> S> Args:   { session_id=0x2eef peer_id=0x83f4 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[21737]:":
> S> Args:   { session_id=0xdbaa peer_id=0xb21b control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[216ce]:":
> S> Args:   { session_id=0x60 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21560]:":
> S> Args:   { session_id=0x4390 peer_id=0x6baa control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[2142c]:":
> S> Args:   { session_id=0xbcb5 peer_id=0x8ef8 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[21231]:":
> S> Args:   { session_id=0x8335 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[21200]:":
> S> Args:   { session_id=0x2b16 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[211f2]:":
> S> Args:   { session_id=0x8022 peer_id=0x4095 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[211d7]:":
> S> Args:   { session_id=0x51b7 peer_id=0xf716 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[21115]:":
> S> Args:   { session_id=0x98a1 peer_id=0xd453 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[20699]:":
> S> Args:   { session_id=0xb179 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[205a1]:":
> S> Args:   { session_id=0x3328 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[2052f]:":
> S> Args:   { session_id=0x55f peer_id=0x2a4b control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[20160]:":
> S> Args:   { session_id=0xe4a5 peer_id=0x5b6 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1ff54]:":
> S> Args:   { session_id=0xaa4d peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1fd8e]:":
> S> Args:   { session_id=0xd9d8 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1e9bf]:":
> S> Args:   { session_id=0xac50 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1dc3e]:":
> S> Args:   { session_id=0x5124 peer_id=0xd652 control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[1d8b4]:":
> S> Args:   { session_id=0xf5b9 peer_id=0xcd control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1d79e]:":
> S> Args:   { session_id=0x9a87 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[1d216]:":
> S> Args:   { session_id=0xe89d peer_id=0xd74a control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[1c78f]:":
> S> Args:   { session_id=0xe3e5 peer_id=0x1 control_dseq=1 enable_dseq=1 }
> S> Rec'd response "getsessconfig" (4) from "[19344]:":
> S> Args:   { session_id=0xf452 peer_id=0xbf7e control_dseq=1 enable_dseq=1
> }
> S> Rec'd response "getsessconfig" (4) from "[18fb3]:":
> S> Args:   { session_id=0x11b peer_id=0x4296 control_dseq=1 enable_dseq=1 }
>
> Hmm, looks like enable_dseq=1 everywhere. Then I have no idea yet
> under which circumstances ng_mppc can receive an out-of-order datagram.
>
> --
> Totus tuus, Glebius.
>



-- 
Sami Halabi
Information Systems Engineer
NMS Projects Expert


