Date: Fri, 02 Feb 2001 23:23:32 +0000
From: Brian Somers <brian@Awfulhak.org>
To: Mike Nowlin <mike@argos.org>
Cc: freebsd-net@FreeBSD.ORG, brian@Awfulhak.org
Subject: Re: PPP - CHAP failure after CHAP success???
Message-ID: <200102022323.f12NNW606872@hak.lan.Awfulhak.org>
In-Reply-To: Message from Mike Nowlin <mike@argos.org> of "Fri, 02 Feb 2001 17:41:06 EST." <Pine.LNX.4.21.0102021709580.24513-100000@jason.argos.org>
Hmm,

I can't see how this can happen without any previous log lines saying
that a CHAP packet has been received.

If this is repeatable, can you try doing a ``show timer'' right after
the SUCCESS response has been sent? If the RADIUS timer wasn't cleared
properly, this might result, but I can't see how that could happen...
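(For reference, one way to get at that: ``show timer'' has to be typed
at the ppp prompt, so if the server config carries a diagnostic socket,
e.g. something along the lines of

  set server /var/run/ppp-tun1 MySecret 0177

in the relevant ppp.conf section - the socket path and password are
made up for illustration - then

  pppctl -p MySecret /var/run/ppp-tun1 show timer

run from another terminal should dump the active timers while the
session is still up.)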
> On a recently cvsup'd machine (4.2-STABLE as of two days ago), incoming
> PPP w/CHAP via RADIUS has suddenly broken. Basically, RADIUS OKs the
> connection, address info is transferred and approved, and everything
> looks normal until after the log line listing myaddr and hisaddr - why
> is it doing CHAP again, and what happened to my RADIUS server? The
> README.changes diffs only mention MSCHAPv2 and MPPE changes - I disabled
> both of those, but it doesn't make any difference.
>
> --mike
>
>
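(Incidentally, the MSCHAPv2 and MPPE knobs mentioned above correspond
to ppp's CHAP81 and MPPE negotiation options, so turning them off would
look something like

  disable CHAP81
  deny CHAP81
  disable MPPE
  deny MPPE

in the relevant ppp.conf section - a sketch based on the option names
in ppp(8), not a tested recipe.)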
> Feb 2 16:06:56 rimmer ppp[320]: tun1: Phase: bundle: Authenticate
> Feb 2 16:06:56 rimmer ppp[320]: tun1: Phase: deflink: his = none, mine = CHAP 0x05
> Feb 2 16:06:56 rimmer ppp[320]: tun1: Phase: Chap Output: CHALLENGE
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: Chap Input: RESPONSE (16 bytes from argos)
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: Radius: Request sent
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: Radius: ACCEPT received
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: MTU 1500
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: VJ enabled
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: IP 10.99.1.6
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: Netmask 255.255.255.252
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: Chap Output: SUCCESS
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Warning: 10.99.1.6: Cannot determine ethernet address for proxy ARP
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: deflink: lcp -> open
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: bundle: Network
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: FSM: Using "deflink" as a transport
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: State change Initial --> Closed
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: LayerStart.
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: SendConfigReq(1) state = Closed
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: IPADDR[6] 10.129.1.2
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: State change Closed --> Req-Sent
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: RecvConfigReq(3) state = Req-Sent
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: IPADDR[6] 10.99.1.6
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: SendConfigAck(3) state = Req-Sent
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: IPADDR[6] 10.99.1.6
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: State change Req-Sent --> Ack-Sent
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: RecvConfigAck(1) state = Ack-Sent
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: State change Ack-Sent --> Opened
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: deflink: LayerUp.
> Feb 2 16:06:57 rimmer ppp[320]: tun1: IPCP: myaddr 10.129.1.2 hisaddr = 10.99.1.6
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Warning: 10.99.1.6: Cannot determine ethernet address for proxy ARP
>
> ...new stuff starts here - these lines never showed up before...
>
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: radius: No RADIUS servers specified
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: Chap Output: FAILURE
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: deflink: open -> lcp
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: LayerDown
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: SendTerminateReq(3) state = Opened
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: State change Opened --> Closing
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: RecvTerminateReq(4) state = Closing
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: SendTerminateAck(4) state = Closing
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: RecvTerminateAck(3) state = Closing
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: LayerFinish
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: State change Closing --> Closed
> Feb 2 16:06:57 rimmer ppp[320]: tun1: LCP: deflink: State change Closed --> Initial
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Warning: deflink: Unable to set physical to speed 0
> Feb 2 16:06:57 rimmer ppp[320]: tun1: Phase: deflink: Disconnected!
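For what it's worth, ``No RADIUS servers specified'' is what ppp's
RADIUS code logs when an authentication runs with no ``set radius''
config in effect, so the second CHAP pass above appears to be happening
after the RADIUS setup has been forgotten. For comparison, a minimal
sketch of the usual arrangement (file name, server address and secret
below are illustrative, not taken from Mike's machine):

  # ppp.conf, in the label used for incoming connections:
  set radius /etc/ppp/radius.conf
  enable chap

  # /etc/ppp/radius.conf, in radius.conf(5) format:
  # service host secret [timeout] [retries]
  auth 10.99.1.1 MySharedSecret 5 3

If ``set radius'' only takes effect in the context that was active at
connection setup, anything that re-runs CHAP outside that context would
see an empty server list and send FAILURE, which matches the log.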
--
Brian <brian@Awfulhak.org> <brian@[uk.]FreeBSD.org>
<http://www.Awfulhak.org> <brian@[uk.]OpenBSD.org>
Don't _EVER_ lose your sense of humour !
