From owner-freebsd-stable Thu Feb 17 12:35:27 2000
Delivered-To: freebsd-stable@freebsd.org
Received: from malasada.lava.net (malasada.lava.net [199.222.42.2])
	by hub.freebsd.org (Postfix) with ESMTP id 0D87437B823
	for ; Thu, 17 Feb 2000 12:35:18 -0800 (PST)
	(envelope-from cliftonr@lava.net)
Received: from localhost (3671 bytes) by malasada.lava.net
	via sendmail with P:stdio/R:inet_hosts/T:smtp (sender: )
	(ident using unix) id for ; Thu, 17 Feb 2000 10:35:17 -1000 (HST)
	(Smail-3.2.0.106 1999-Mar-31 #3 built 1999-Dec-7)
Date: Thu, 17 Feb 2000 10:35:17 -1000
From: Clifton Royston
To: Brad Knowles
Cc: Tom , freebsd-stable@FreeBSD.ORG
Subject: Re: Initial performance testing w/ postmark & softupdates...
Message-ID: <20000217103516.C19043@lava.net>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Mailer: Mutt 1.0pre2i
In-Reply-To:
Sender: owner-freebsd-stable@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Thu, Feb 17, 2000 at 08:33:28PM +0100, Brad Knowles wrote:
> At 10:37 AM -0800 2000/2/17, Tom wrote:
> >   Uhhh... the paper says that an Ultra 1/170 running Solaris 2.5. That is
> > an old system.
>
> 	I've run tmpfs tests with postmark on Ultra 2 and Ultra 5 systems
> with faster CPUs and newer versions of the OS, and they didn't run
> anywhere *NEAR* that fast (I've got an Ultra 5 I'm testing right
> now).  Heck, one person told me he had an older laptop running Linux
> with ReiserFS and he was getting better throughput going to disk than
> Sun did with tmpfs!
>
> 	Again, I have to seriously wonder what they were really testing on.

I also found that a bit weird, because from the squid list I had some
correspondence with a guy who'd gotten absolutely *horrible* results
benchmarking Squid on the Solaris Ultra 1 tmpfs.  (Down at the level
of what newer machines saw for throughput to disk file systems.)

[other comments excerpted]
...
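For anyone wanting to repeat this sort of comparison, PostMark reads its
commands from stdin, so it can be driven non-interactively.  A minimal
sketch follows; the directory path and the file/transaction counts are
arbitrary examples, not the parameters anyone in this thread used --
point "set location" at the filesystem under test (tmpfs or MFS mount
vs. a plain UFS directory) and compare the reported transaction rates:

```shell
# Create a scratch directory on the filesystem under test
# (example path; substitute your tmpfs/MFS or UFS mount point).
mkdir -p /var/tmp/pmtest

# Feed PostMark a canned command script via a heredoc.
postmark <<'EOF'
set location /var/tmp/pmtest
set number 10000
set transactions 20000
run
quit
EOF
```

Running the identical script against each candidate filesystem is what
makes the throughput numbers comparable in the first place.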
> 	Let me get rawio working on this machine, and we can compare the
> low-level hardware performance of this disk device to the previous
> four-way 10kRPM vinum striped device I used for my previous
> benchmarking, and then we can extrapolate as to what might happen on
> that system if we enabled softupdates on it.  I'd be willing to bet
> that it would beat the crap out of the F630, and if I was allowed to
> do that on a nine-way striped 10kRPM volume, I could do some pretty
> good damage against the current crop of NFS servers.

I guess the real question, as to whether that is a fair comparison, is
whether softupdates has reached the level of predictability and
absolutely 100% recoverability after unexpected shutdowns or crashes
that is expected of the current crop of NFS servers.

I lost around 30GB of Usenet last year while trying out softupdates.
It wasn't valuable data - "just Usenet" - but it did cost us some
significant downtime failing to fsck it and eventually having to newfs
and reinstall the configuration.  I know people are saying it's come a
long way recently, but I haven't heard anybody yet ready to swear that
you can pull the plug on the server, disks and all, plug it back in,
and have it restart and fsck without errors or manual intervention in
a reasonable period of time.  That's the kind of performance that a
Netapp or an EMC Celerra is currently promising.

> 	Take a look at what Joe Greco is talking about doing for his
> next-generation USENET news spool server (see message-id
> <38ac286b$0$86644@news.execpc.com> in news.software.nntp), and tell
> me that this wouldn't beat the crap out of NetApp as an NFS server.

I'll take a look.  Joe always has interesting things to say, but I
don't know that a news spool server necessarily has the same design
priorities as an NFS server.  Again, I'm not saying the idea is
unworkable, just urging a little caution.
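For the record, toggling softupdates on an existing FFS filesystem and
then testing its crash recovery is only a couple of commands.  A sketch,
assuming a FreeBSD 4.x-era system; the mount point and device name here
are made-up examples, not anyone's actual news spool:

```shell
# Soft updates are enabled per-filesystem with tunefs; the
# filesystem must be unmounted while the flag is changed.
# (/news/spool and /dev/da0s1e are example names only.)
umount /news/spool
tunefs -n enable /dev/da0s1e
mount /news/spool

# After an unclean shutdown, preen the filesystem and see whether
# it really comes back clean with no manual intervention:
fsck -p /dev/da0s1e
```

The pull-the-plug test I described above amounts to whether that final
`fsck -p` completes on its own, every time, in acceptable time.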
  -- Clifton

--
 Clifton Royston  --  LavaNet Systems Architect --  cliftonr@lava.net
        The named which can be named is not the Eternal named.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-stable" in the body of the message