Date: Fri, 17 Mar 95 9:28:18 MST
From: terry@cs.weber.edu (Terry Lambert)
To: mycroft@ai.mit.edu (Charles M. Hannum)
Cc: peter@bonkers.taronga.com, hackers@FreeBSD.org, tech-net@NetBSD.ORG
Subject: Re: Batch Telnet (Re: diskless and 3Com 509)
Message-ID: <9503171628.AA28851@cs.weber.edu>
In-Reply-To: <199503171558.KAA27534@duality.gnu.ai.mit.edu> from "Charles M. Hannum" at Mar 17, 95 10:58:46 am
> > > The problem is in the server, not the client.
> >
> > That's incorrect.
> >
> > The close should be queued by the remote server only after the
> > remote server has received confirmation that the packets have
> > arrived at the client.
> >
> > By default, this is not how things work with sockets.
>
> That's analogous to stating that Berkeley TCP does not implement TCP
> correctly.  At least in this case, that's not true.
>
> The problem is that the client is closing the connection.  If the
> telnet client's stdin gets an EOF, one might expect it should at most
> close the write side of the TCP connection (i.e. use the shutdown(2)
> system call).

That's incorrect.  The message returned is "Connection closed by
remote host.", not "Connection closed." as it would be if you were
correct that the local telnet was closing the connection.

Obviously, you missed the post where the same behaviour was observed
when doing "echo LIST; echo QUIT; sleep 60000" as the piped input.
The problem is clearly that the client is getting a close before it
reads the data to be displayed.

The question is whether the problem is that the server is queueing
the close early or whether the client is processing the close early
(or both, although it would take one bug to uncover the other).

The lack of a disconnect capability could easily cause the server to
send the higher priority close before sending its data buffers; since
the server in question uses buffered I/O, the question to answer is
whether or not the server is trying to send more than 4k of data
and/or flushing its output buffer a "reasonably long time" before
closing the socket.

In the same post as the time-delay piped input, it was noted that the
server acted differently when contacted not-by-a-BSD-host.
It's unclear whether this was the same host that the server was
running on, or simply a non-BSD host; the use of an alternate network
interface (loopback) could corrupt the behaviour sufficiently to
invalidate the claims of "it works here but not in BSD".

> With a more reasonable protocol, you'd expect the half-close to be
> propagated to the server, and the server would notice it and shut down
> if appropriate.  However, TCP doesn't have a half-close mechanism.

The implementation of "half close" is a client issue, not a TCP
issue.  If two streams were acquired instead of one in the telnet
protocol implementation, a half close could be implemented.

Lachman got into big trouble with the half close when they freed the
streams context on the client as a result of a remote t_disconnect
instead of waiting for the client's close to do so.  Thus the first
read returned the EOF, but the second read referenced an illegal
kernel address and caused the client process to simply exit with no
warning.  Clearly, if there were two streams, one disconnectable by
the client and one by the server, this would not have happened.

> The telnet client should at least have an option to not shut down if
> it gets EOF from stdin.  It's not clear to me whether or not that
> should be the default.

Agreed, but it would not resolve the topic at hand.

Peter should be using "expect" to run his telnet.  He is already
making the invalid assumption that the remote NNTP server is capable
of handling type-ahead.

					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.