Date: Wed, 2 Feb 2005 13:58:59 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Mike Tancsa <mike@sentex.net>
Cc: freebsd-performance@freebsd.org
Subject: Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 and dragonfly
Message-ID: <200502022158.j12LwxGn002992@apollo.backplane.com>
References: <20050130120437.93214.qmail@web26810.mail.ukl.yahoo.com> <6.2.1.2.0.20050201193210.0489e6f8@64.7.153.2>
Urmmm.  How about a bit more information... what are the machine configurations?  The disk topology?  The networking?  The graphs are almost completely unannotated; it's hard to figure out what the numbers actually mean.

I can figure some things out.  Clearly the BSD write numbers are dropping at a block size of 2048 due to vfs.write_behind being set to 1.  Just as clearly, Linux is not bothering to write out ANY data, and is then able to take advantage of the fact that the test file is destroyed by iozone (so it can throw the data away rather than write it out).  This skews the numbers to the point where the benchmark doesn't even come close to reflecting reality, though I do believe it points to an issue with the BSDs... the write_behind heuristic is completely out of date now and needs to be reworked.

The read tests are less clear.  iozone runs its read tests just after it runs its write tests, so filesystem syncing and write flushing are going to have a huge effect on the read numbers.  I suspect that this is skewing the results across the spectrum.  In particular, I don't see anywhere near the difference in cache-read performance between FreeBSD-5 and DragonFly.  But I guess I'll have to load up a few test boxes myself and do my own comparisons to figure out what is going on.

						-Matt
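[A sketch of how the two effects above could be probed on a FreeBSD box, assuming iozone is installed; the vfs.write_behind sysctl and the iozone -e/-w flags are real, but the file size, record size, and test selection are arbitrary example values:]

```shell
# Inspect the write-behind heuristic discussed above (1 = enabled).
sysctl vfs.write_behind

# Temporarily disable clustered write-behind to see its effect on
# sequential write throughput at larger block sizes.
sysctl vfs.write_behind=0

# Re-run the benchmark with -e (include fsync/flush time in the
# measurement, so a kernel can't "win" by never writing the data) and
# -w (keep the temporary file instead of unlinking it, so dirty pages
# for a destroyed file can't simply be thrown away).
# -s and -r are example file and record sizes; -i 0 -i 1 selects the
# sequential write and read tests.
iozone -e -w -s 512m -r 64k -i 0 -i 1
```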