From owner-freebsd-hackers Sun Jun 29 10:58:16 1997
Date: Sun, 29 Jun 1997 10:42:01 -0700 (PDT)
From: Simon Shapiro
Organization: Atlas Telecom
To: Bruce Evans
Cc: hackers@FreeBSD.ORG, bmcgover@cisco.com
Subject: Re: Clists limited to 1024 bytes?
In-Reply-To: <199706291005.UAA29898@godzilla.zeta.org.au>

Hi Bruce Evans;

On 29-Jun-97 you wrote:
> >> Anyway, 19200 bps is not a heavy load unless there are a lot of active
> >> ports.  With 32 active 16550 ports it would be fairly heavy, but still
> >> gives less than 6% of the throughput of a single 10Mb/s ethernet.
> >
> > I was thinking more (on a 16550) about what happens at 115,200, 230,400,
> > and more.  These are speeds we already see today with ISDN lines.
> > The option of an external TA (such as a Motorola BitSURFR) is very
> > appealing, but behavior at these speeds needs careful consideration.
> >
> > How would you adjust the drivers to accommodate these speeds?
>
> 115200 was fast 10 years ago, but 230400 is currently not well supported
> (if you change the hardware clock to get it, then the buffer sizes
> are too small).  How many ports do you need?

Actually, about 15 years ago we used 384 Kbps on terminals hooked to a
Unix box (the Tahoe was capable of supporting 256 of these, if I
remember correctly), so even 10 years ago...  :-)

Yes, we use a doubled hardware clock.  Two ports are quite enough.

> > We experienced a lot of complex problems with SCSI transactions until
> > we bumped the sio interrupt buffer to double its size.  While
> > performance (on the sio ports - we use them only for PPP) did not drop
> > visibly, the strange incidence of dropped biodone() calls virtually
> > stopped.
>
> This probably just made a race less common.

It would be interesting to actually solve this mystery: how does a
buffer overflow in sio (under PPP) cause biodone() to lose a
completion?  We know, with a very high degree of certainty, that we do
not lose interrupts, nor miss a call to scsi_done() (which, in turn,
calls biodone()).  It appears that, in this case, things get dropped
somewhere above scsi_done().  Not every time.  Nasty...

Simon
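
P.S.  To put rough numbers on why the default clist size hurts at these
speeds, here is a back-of-the-envelope sketch.  It is plain C with
nothing FreeBSD-specific in it; the 1024-byte figure is just the clist
limit from the subject line, and 8N1 framing (10 bit times per byte on
the wire) is assumed:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed line rates of interest and the 1024-byte clist limit. */
        long rates[] = { 19200, 115200, 230400, 460800 };
        long clist = 1024;
        int i;

        for (i = 0; i < 4; i++) {
            /* 8N1 framing: 10 bit times per byte on the wire. */
            double bytes_per_sec = rates[i] / 10.0;
            double fill_ms = clist / bytes_per_sec * 1000.0;
            printf("%6ld bps: %6.0f bytes/s, %ld-byte buffer fills in %6.1f ms\n",
                   rates[i], bytes_per_sec, clist, fill_ms);
        }
        return 0;
    }

At 230400 bps that works out to about 23 KB/s, so a 1024-byte buffer
gives you roughly 44 ms of slack before input overruns - consistent
with your point that doubling the hardware clock without also growing
the buffers leaves them too small.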