Date: Thu, 3 Feb 2005 17:59:30 -0500 (EST)
From: Jeff Roberson <jroberson@chesapeake.net>
To: Nick Pavlica <linicks@gmail.com>
Cc: freebsd-performance@freebsd.org
Subject: Re: My disk I/O testing methods for FreeBSD 5.3 ...
Message-ID: <20050203175519.K18864@mail.chesapeake.net>
In-Reply-To: <dc9ba044050203143647cee0c2@mail.gmail.com>
References: <dc9ba044050203143647cee0c2@mail.gmail.com>
On Thu, 3 Feb 2005, Nick Pavlica wrote:

> All,
>   I would like to share the methods that I have been using in my disk
> I/O testing.  The detailed results of these tests have been posted to
> the performance and questions mailing lists under the title "FreeBSD
> 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion".  I
> originally started this testing as due diligence for an upcoming
> project.  As a result of this testing I discovered an elegant
> operating system that I enjoy working with.

Nick, first, I'd like to thank you for your efforts so far.  I think
your tests have been very informative.  I'd like to see what we can do
to get to the bottom of the differences.

Can you perform one test which varied greatly between 5.x and 4.x and
collect some data for us?  To start with, the output of vmstat 1 piped
to a file would be informative.  Do you have any indication that 5.x is
actually cpu bound in a case where 4.x is not?  I'm wondering whether
this is a latency issue or a cpu utilization issue.

I intend to backport some code that lets me graph system activity into
RELENG_5.  Are you set up to cvsup to this tag?  Would it be convenient
for you to do so?

Thanks,
Jeff

>
> Intent Of This Testing:
> 1) To measure the disk I/O performance of various operating systems
>    for use as a production database server.
> 2) To help improve the disk I/O performance of FreeBSD 5.x and greater
>    by assisting the FreeBSD development team in identifying possible
>    performance issues, and to provide them with data to measure the
>    success of various changes to the operating system.
>
> Operating Systems Tested:
> Fedora Core 3 with EXT3 and XFS (tested with and without patches)
> SUSE Enterprise Server 9 with ReiserFS
> FreeBSD 4.11R
> FreeBSD 5.3R, RELENG_5_3, RELENG_5
> NetBSD 2.0R
> OpenBSD 3.6R
>
> Test Hardware:
> Compaq DeskPro, PIII 800, 384MB RAM, 10GB IDE HD.
> Dell PE 2400, dual PIII 550, 512MB RAM, (2) 10K LVD SCSI drives in
> RAID 1 on a PERC 2/Si controller with 64MB RAM.
> Dell PE SC400, 2.4GHz P4, 256MB RAM, 40GB IDE HD.
> Dell 4600, 2.8GHz P4 with HT, 512MB RAM, 80GB IDE HD.
>
> Installation Notes:
> It's my intention to test these operating systems using as many of the
> default installation options as possible, with no special tuning.  The
> only deviations in my previous testing were as follows: the "linux
> xfs" boot option was used when installing Fedora so that I could use
> XFS, and in one special test I installed 5.3R with UFS instead of UFS2
> (I didn't see any improvement when using UFS).  I installed FreeBSD
> using the standard install option, and used the auto-allocate features
> for partitioning and slicing.  I installed Fedora with the stock
> server packages and created a 100MB /boot, a 512MB swap, and allocated
> the remaining space to /.  I tested FreeBSD 5.3R and FC3 with and
> without updates.  I used cvsup to update FreeBSD and yum update to
> update Fedora.  I didn't do any updating of FreeBSD 4.11R, NetBSD 2.0,
> or OpenBSD 3.6.
>
> I used the following utilities/tools in my testing:
> DD
> CP
> IOSTAT (iostat -d 2)
> Bonnie++
> TOP
> SQL, PL, PSQL
> PostgreSQL 8.0
>
> DD Example Tests:
> - #time dd bs=1024 if=/dev/zero of=tstfile count=1M
> - #time dd bs=1024 if=/dev/zero of=tstfile count=2M
> - #time dd bs=1024 if=/dev/zero of=tstfile count=3M
>
> Bonnie++ Example Tests:
> #bonnie++ -u root -s 1024 -r 512 -n 5
> #bonnie++ -u root -s 2048 -r 512 -n 5
> #bonnie++ -u root -s 3072 -r 512 -n 5
>
> CP Example Tests:
> #time cp tstfile tstfile2
>
> SQL, PL, PSQL Example Tests:
>
> CREATE TABLE test1 (
>     thedate TIMESTAMP,
>     astring VARCHAR(200),
>     anumber INTEGER
> );
>
> CREATE FUNCTION build_data() RETURNS integer AS '
> DECLARE
>     i INTEGER DEFAULT 0;
>     curtime TIMESTAMP;
> BEGIN
>     FOR i IN 1..1000000 LOOP
>         curtime := ''now'';
>         INSERT INTO test1 VALUES (curtime, ''test string'', i);
>     END LOOP;
>     RETURN 1;
> END;
> ' LANGUAGE 'plpgsql';
>
> SELECT build_data();
>
> Then the following script is run under the time program to ascertain
> how long it takes to run:
>
> CREATE TABLE test2 (
>     thedate TIMESTAMP,
>     astring VARCHAR(200),
>     anumber INTEGER
> );
> CREATE TABLE test3 AS SELECT * FROM test1;
> INSERT INTO test2 SELECT * FROM test1 WHERE ((anumber % 2) = 0);
> DELETE FROM test3 WHERE ((anumber % 2) = 0);
> DELETE FROM test3 WHERE ((anumber % 13) = 0);
> CREATE TABLE test4 AS
>     SELECT test1.thedate AS t1date,
>            test2.thedate AS t2date,
>            test1.astring AS t1string,
>            test2.astring AS t2string,
>            test1.anumber AS t1number,
>            test2.anumber AS t2number
>     FROM test1 JOIN test2 ON test1.anumber = test2.anumber;
> UPDATE test3 SET thedate = 'now' WHERE ((anumber % 5) = 0);
> DROP TABLE test4;
> CREATE TABLE test4 AS SELECT * FROM test1;
> DELETE FROM test4 WHERE ((anumber % 27) = 0);
> VACUUM ANALYZE;
> VACUUM FULL;
> DROP TABLE test4;
> DROP TABLE test3;
> DROP TABLE test2;
> VACUUM FULL;
>
> Example /etc/fstab:
>
> minime# cat /etc/fstab
> # Device        Mountpoint  FStype  Options    Dump  Pass#
> /dev/ad0s1b     none        swap    sw         0     0
> /dev/ad0s1a     /           ufs     rw         1     1
> /dev/ad0s1e     /tmp        ufs     rw         2     2
> /dev/ad0s1f     /usr        ufs     rw         2     2
> /dev/ad0s1d     /var        ufs     rw         2     2
> /dev/acd0       /cdrom      cd9660  ro,noauto  0     0
>
> Verification Of Tests:
> I have been able to get consistent results in all of my testing.
> However, I think the best verification would be to have as many people
> as possible test the disk I/O performance on a range of hardware,
> testing methods, and configurations.
>
> Summary Of Results:
> The results of my testing have consistently demonstrated that
> FreeBSD 5.3+ has dramatically slower disk I/O performance than all of
> the other operating systems that were tested.  FreeBSD 4.11R was the
> performance leader, followed by Fedora Core 3 with XFS.  All of the
> BSD distributions, with the exception of 5.3+, were able to
> consistently demonstrate a sustained throughput of 56-58MB/s, while
> 5.3+ consistently demonstrated only 12-15MB/s (a gap of roughly
> 43MB/s).
>
> Please let me know if you need any additional details.
>
> Thanks!
> --Nick Pavlica
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to
> "freebsd-performance-unsubscribe@freebsd.org"
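
[Editor's note] For the vmstat capture Jeff asks for above, one minimal
way to collect it alongside one of the quoted dd runs is sketched below.
The log file name and the use of a backgrounded vmstat are my own
choices, not part of the original test plan:

#vmstat 1 > vmstat-releng5-dd.log &
#time dd bs=1024 if=/dev/zero of=tstfile count=1M
#kill %1

Comparing the cpu columns (us/sy/id) and the per-disk transfer columns
in the logs from a 4.11R run and a 5.3R run should indicate whether 5.x
is actually cpu bound or simply spending more time waiting on the disk.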
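
[Editor's note] For tracking the RELENG_5 tag that Jeff mentions, a
sketch of a minimal supfile saved as, for example, /root/releng5-supfile
follows; the file name is arbitrary and the host should be replaced with
a nearby cvsup mirror (both are assumptions, not something specified in
the thread):

*default host=cvsup1.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_5
*default delete use-rel-suffix
src-all

#cvsup -g -L 2 /root/releng5-supfile

followed by the usual buildworld/buildkernel cycle before re-running the
tests.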
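
[Editor's note] The quoted message says the second SQL block is run
under the time program but does not show the invocation.  A minimal
sketch, assuming the statements are saved in a file called iotest.sql
and run against a database named testdb (both names are mine, as is the
pgsql user, the default superuser created by the FreeBSD PostgreSQL
port):

#time psql -U pgsql -d testdb -f iotest.sql

The build_data() population step can be timed the same way:

#time psql -U pgsql -d testdb -c "SELECT build_data();"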