From: Kenneth Vestergaard Schmidt <kvs@binarysolutions.dk>
To: freebsd-fs@freebsd.org
Date: Tue, 01 May 2007 10:33:10 +0200
Subject: Sun Fire X4500, FreeBSD and ZFS
List-Id: Filesystems <freebsd-fs@freebsd.org>

Mjello.

Just thought I'd say that we've got a Sun Fire X4500 running ZFS on
-CURRENT as of yesterday. It works beautifully, after we disabled MSI
and increased VM_KMEM_SIZE_MAX. Without the increased
VM_KMEM_SIZE_MAX, we got the usual panic ("kmem_map too small"). I
haven't tried adjusting maxvnodes - that might also have helped.
However, the machine has 16 GB of RAM, so it might as well be used for
something.

I'm not quite sure how to tweak the box efficiently, but for now the
bottleneck is our network, so we're going to upgrade some pieces and
try again.
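For anyone wanting to try the same, a sketch of the loader tunables
involved - the exact sizes here are assumptions for a 16 GB box, not
necessarily what we used (VM_KMEM_SIZE_MAX itself is a kernel config
option, but the vm.kmem_size_max loader tunable overrides it at boot):

```shell
# /boot/loader.conf - sketch only; values are illustrative assumptions
hw.pci.enable_msi=0          # disable MSI (worked around our hangs)
hw.pci.enable_msix=0
vm.kmem_size="1536M"         # enlarge kernel VM for the ZFS ARC
vm.kmem_size_max="1536M"     # avoids the "kmem_map too small" panic
```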
We configured the 48 drives as follows:

- ad52 and ad60 are magic - the BIOS is hardcoded to boot from them,
  so we put them in a gmirror
- 5 RAIDZ2s, each with 9 disks, for a usable total of 7 per array
- one global hot spare

# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
void   20.3T  62.1G  20.3T   0%  ONLINE  -

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
void  48.2G  15.5T  41.9K  /void

All in all, a fun little toy :)

--
Best Regards
Kenneth Schmidt
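Spelled out, that layout could be built roughly like this - sketch
only, since apart from ad52/ad60 the device names below are
hypothetical placeholders, not our actual disk numbering:

```shell
# Mirror the two BIOS boot disks (these two names are real)
gmirror label -v boot /dev/ad52 /dev/ad60

# One pool of five 9-disk raidz2 vdevs plus a global spare;
# every device name here is a placeholder
zpool create void \
    raidz2 ad4  ad5  ad6  ad7  ad8  ad10 ad11 ad12 ad13 \
    raidz2 ad14 ad15 ad16 ad17 ad18 ad20 ad21 ad22 ad23 \
    raidz2 ad24 ad25 ad26 ad27 ad28 ad30 ad31 ad32 ad33 \
    raidz2 ad34 ad35 ad36 ad37 ad38 ad40 ad41 ad42 ad43 \
    raidz2 ad44 ad45 ad46 ad47 ad48 ad50 ad53 ad54 ad55 \
    spare  ad56
```

Each raidz2 vdev loses two disks to parity, which is where the
"usable total of 7 per array" comes from.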