Message-ID: <480548CF.5080104@umich.edu>
Date: Tue, 15 Apr 2008 20:31:11 -0400
From: "Benjeman J. Meekhof" <bmeekhof@umich.edu>
To: freebsd-performance@freebsd.org
Subject: ZFS, Dell PE2950

Hi,

I posted earlier about some results with this same system using UFS2.
Now I am trying to test ZFS.  This is a Dell PE2950 with two Perc6
controllers and 4 MD1000 disk shelves with 750GB drives, 16GB RAM, and
dual quad-core Xeons.  I recompiled our kernel to use the ULE scheduler
instead of the default.

I could not get through an entire run of iozone without a system
reboot/crash.  ZFS is clearly labeled experimental, of course.  It seems
to die for sure around 10 processes, sometimes fewer (this is the end of
my output from iozone):

        Children see throughput for 10 readers  =  135931.72 KB/sec
        Parent sees throughput for 10 readers   =  135927.24 KB/sec
        Min throughput per process              =   13351.26 KB/sec
        Max throughput per process              =   14172.05 KB/sec
        Avg throughput per process              =   13593.17 KB/sec
        Min xfer                                = 31586816.00 KB

Some zpool info below - each volume below is a hardware RAID6 of 30
physical disks on one controller.  I may try different hardware volume
configs for fun.

zpool create test mfid0 mfid2
# pool is automatically mounted at /test
#  pool: test
# state: ONLINE
# scrub: none requested
# config:
#
#         NAME        STATE     READ WRITE CKSUM
#         test        ONLINE       0     0     0
#           mfid0     ONLINE       0     0     0
#           mfid2     ONLINE       0     0     0
#
# errors: No known data errors

-Ben
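
P.S. The scheduler switch was the usual custom-kernel route.  A rough
sketch of that procedure (the config name here is made up, and it
assumes a copy of GENERIC on amd64 - not necessarily our exact config):

cd /usr/src/sys/amd64/conf
cp GENERIC ZFSTEST                     # hypothetical config name
# in ZFSTEST, swap the scheduler option:
#   options SCHED_4BSD   ->   options SCHED_ULE
cd /usr/src
make buildkernel KERNCONF=ZFSTEST
make installkernel KERNCONF=ZFSTEST
# reboot onto the new kernel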
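
The exact iozone invocation isn't shown above; it was a multi-process
throughput run, so something in this spirit (sizes and flags here are
illustrative, not the actual command):

cd /test
iozone -t 10 -s 32g -r 128k -i 0 -i 1
#  -t 10      ten child processes (matches the "10 readers" lines above)
#  -s 32g     ~32GB file per process, well past the 16GB of RAM
#  -r 128k    128KB record size
#  -i 0 -i 1  sequential write test, then read test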