From owner-freebsd-isp Tue Feb 29 18:10:35 2000
Delivered-To: freebsd-isp@freebsd.org
Received: from copernicus.acol.com (copernicus.acol.com [216.204.50.170])
	by hub.freebsd.org (Postfix) with ESMTP id 70EFE37B843
	for ; Tue, 29 Feb 2000 18:10:27 -0800 (PST)
	(envelope-from viper@2ghz.net)
Received: from 2ghz.net (ppp106.net-resource.com [216.204.46.106])
	by copernicus.acol.com (8.9.3/8.9.3) with ESMTP id VAA30980;
	Tue, 29 Feb 2000 21:10:21 -0500 (EST)
Message-ID: <38BC7C9B.D6CD19AB@2ghz.net>
Date: Tue, 29 Feb 2000 21:12:43 -0500
From: Adam Rheaume
X-Mailer: Mozilla 4.7 [en] (Win98; I)
X-Accept-Language: en
MIME-Version: 1.0
To: Mark Holloway
Cc: freebsd-isp@FreeBSD.ORG
Subject: Re: OC3 versus T1 Circuits
References: <200002291838.NAA29138@etinc.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-isp@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

If they are all going to be pure ANSI, with only 60 concurrent connections
running at, let's say, 38400 bps, a full T1 would be fine. With OE and maybe
some web/file traffic, to be safe I would say two bonded T1s would be perfect,
though maybe overkill too. The real questions are how many kbps each user will
pull, and whether they will all use it at the same time. Think like an ISP:
when there are dial-in users, there is never enough capacity for the whole
customer base to dial in at once; it is sized on a percentage.

-=>Adam<=-
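To put rough numbers on that, here is a quick back-of-the-envelope sketch
(Python, purely illustrative). The 60 users per site and the 38400 bps
per-user figure come from the thread; the 1.544 Mbps T1 rate and the
"fraction active" ratios are assumptions added for the example, not
measurements from either site.

    # Rough capacity sketch for the numbers in this thread (illustrative only).
    # Assumed, not from the posts: a T1 carries 1.544 Mbps (24 x 64 kbps DS0s
    # plus 8 kbps framing), and the fraction-active ratios are guesses.

    T1_BPS = 1_544_000        # one T1
    USERS_PER_SITE = 60       # average site size from Mark's post
    PER_USER_BPS = 38_400     # Adam's "let's say 38400" telnet/ANSI figure

    def demand_bps(users, per_user_bps, fraction_active):
        """Aggregate demand if only `fraction_active` of users are busy at once."""
        return users * per_user_bps * fraction_active

    for fraction in (1.00, 0.75, 0.50, 0.25):
        need = demand_bps(USERS_PER_SITE, PER_USER_BPS, fraction)
        print(f"{fraction:4.0%} active: {need / 1e6:4.2f} Mbps "
              f"(~{need / T1_BPS:.2f} T1s)")

    # 100% active -> ~2.30 Mbps (~1.49 T1s): two bonded T1s (~3.09 Mbps) cover it.
    #  50% active -> ~1.15 Mbps: a single T1 is already enough.

The same arithmetic is the point of the ISP analogy above: size the circuit
for the fraction of users you expect to be active at once, not for every user
running at full rate simultaneously.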
Dennis wrote:
>
> Why would you use bonded T1s rather than a HSSI frac T3, which would allow
> you to set any speed up to T3?
>
> You could build a FreeBSD box with HSSI for under $4K and have the maximum
> flexibility.
>
> Dennis
>
> At 08:53 AM 2/29/00 -0800, Mark Holloway wrote:
> >
> > I have a situation and maybe some of you can please advise:
> >
> > I have a core LAN/MAN/WAN campus with approximately 80 servers. I have
> > about ten different remote sites throughout the city (the MAN) where
> > clients log into a Windows NT domain and then access certain
> > applications. Until late 1999 they were running these applications in a
> > client/server fashion. The ten sites are all on a shared FDDI ring, but
> > each location is a 10MB, shared, half-duplex connection. The original
> > strategy was to have a full OC3 from the main campus going to a Sprint
> > Central Office, then have 10MB fractional OC3 going to each site (almost
> > like Frame Relay in the MAN). However, we have since set up many Windows
> > Terminal Servers (25 servers @ 200 clients per server) and the clients
> > are using Citrix on their local desktops. This solution works well. But
> > now I am wondering if the fractional OC3 is overkill? I was thinking
> > maybe either a T1 line or two T1 lines bonded for EACH SITE, rather than
> > a 10MB OC3 for each site, would be more realistic? Is a T1 really .15 MB?
> > Or 1.5MB? I think the slowness that most people experience is due to the
> > nature of the FDDI. Each site averages about 60 clients, but a couple
> > have up to 150 clients. When using Citrix everything runs fine. The only
> > apps they would run locally are Outlook and some telnet sessions (pure
> > ANSI, little overhead).
> >
> > I apologize if this is too off topic, but I've always tried to contribute
> > to this list whenever possible. One thing to keep in mind is that for
> > each OC3 remote connection we were going to buy a 3Com Pathbuilder 330
> > (designed for fractional OC3). This is approximately $12,000 + the
> > Pathbuilder 700 Ethernet blade for the WAN switch at the main campus
> > (another several thousand dollars). A Cisco 2500 or 2600 with bonded T1s
> > is under $2,000.
> >
> > PLEASE, if anyone has any insight, feedback, or comments, I'd really
> > appreciate it.
> >
> > Regards,
> > Mark
> >
> To Unsubscribe: send mail to majordomo@FreeBSD.org
> with "unsubscribe freebsd-isp" in the body of the message

--
-----------------
System Tech
www.acol.com
508-865-8561
-----------------


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-isp" in the body of the message