Date: Wed, 30 Jun 2004 08:25:57 -0400
From: Bill Moran <wmoran@potentialtech.com>
To: Joe Schmoe <non_secure@yahoo.com>
Cc: freebsd-questions@freebsd.org
Subject: Re: max concurrent scp sessions - and testing methodology for them...
Message-ID: <20040630082557.50a8e09a.wmoran@potentialtech.com>
In-Reply-To: <20040630073904.89140.qmail@web53302.mail.yahoo.com>
References: <20040630073904.89140.qmail@web53302.mail.yahoo.com>
Joe Schmoe <non_secure@yahoo.com> wrote:

> I have read several documents on the number of concurrent https
> sessions a FreeBSD system is capable of.
>
> However, I wonder how well this relates to how many ssh sessions
> (scp file transfers, specifically) a FreeBSD server can handle.
> Can anyone throw out some basic numbers for this?  Assume a 1 GHz
> P3 and 2 GB of RAM, and assume that everyone is transferring a
> totally different file (so there are no cache hits - everything
> comes straight off the drives).

I doubt that will pan out in reality.  Depending on the number of
files and how much RAM is available, there's always some % chance
that a file will be in cache.  Overall, though, it's not a bad
testing scheme, since you're trying to get the worst-case scenario.

> I would think the major bottleneck would be disk - you would start
> chugging the disks far before you used up all the CPU on a 1 GHz
> P3 ... but what is the second bottleneck?  Is it CPU, or is it RAM
> (or mbufs, etc.)?

I would suspect that as well, but with fast disks it may not be the
case (there are a lot of different classes of disks out there).
Keep in mind, also, that scp carries heavy CPU overhead because it
encrypts everything, so you may find that the CPU bottlenecks the
throughput first.

> Would it be a reasonable test to just start up scp sessions from
> the machine to itself and then divide the number of sessions you
> can acceptably create by two?  Or is this somehow a flawed test?

That dodges the (remote) possibility that the NIC might be the
bottleneck, since it's using the loopback.  I would start the
parallel scps from a different machine, and have them write the
downloaded files to /dev/null so the client machine's disk doesn't
become the bottleneck.  You can then monitor the "server" using
top/netstat/whatever and figure out what causes the first
bottleneck.  The difficult part is that different hardware will
bottleneck in different places.
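The test described above could be scripted roughly like this.  The
host name "testhost", the login "loadtest", and the file names
file1.dat, file2.dat, ... are placeholders for whatever you set up,
not anything from a real configuration:

```shell
#!/bin/sh
# Sketch of a concurrent-scp load test.  Assumes a reachable test
# server ("testhost"), a login ("loadtest"), and one pre-made test
# file per session on the server -- all hypothetical names.
N=${1:-8}                   # number of concurrent sessions
DRY_RUN=${DRY_RUN:-yes}     # set DRY_RUN=no to actually transfer

i=1
while [ "$i" -le "$N" ]; do
    # A different file per session, so the server's buffer cache
    # can't satisfy repeat reads; writing to /dev/null keeps the
    # client machine's disk out of the picture.
    cmd="scp loadtest@testhost:file${i}.dat /dev/null"
    if [ "$DRY_RUN" = "yes" ]; then
        echo "$cmd"         # dry run: just show the commands
    else
        $cmd &              # launch the session in the background
    fi
    i=$((i + 1))
done
wait    # block until every background scp has finished
```

With DRY_RUN=yes (the default) it only prints the commands it would
run; set DRY_RUN=no to launch the transfers for real, then watch the
server with top, iostat, and netstat while they run.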
You might even find that different brands of the same-speed CPU
bottleneck differently.  I'm not aware of any published tests of
this kind of thing, so your results would probably be pretty
interesting to the community.

If I were to guess ... I would expect that your prediction that the
disks would be the first bottleneck is probably right.  If you
upgraded to fast enough disks, I would expect the CPU to become the
next bottleneck.

-- 
Bill Moran
Potential Technologies
http://www.potentialtech.com