From owner-freebsd-isp@FreeBSD.ORG Mon Jul 6 21:24:33 2009
Date: Mon, 6 Jul 2009 13:52:10 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: "Tonix (Antonio Nati)"
Cc: freebsd-isp@freebsd.org
In-Reply-To: <4A5209CA.4030304@interazioni.it>
Subject: Re: ZFS in productions 64 bit

On Mon, Jul 6, 2009 at 7:27 AM, Tonix (Antonio Nati) wrote:
> Is anyone using ZFS in a heavy production environment on AMD 64-bit?

We're using FreeBSD 7.2 on our backup servers.  The primary backup server
does remote backups of over 105 servers every night, and then pushes the
changes to the secondary backup server every day.

Both servers are:

  5U Chenbro case, with 24 hot-swappable SATA drive bays
  1350 watt, 4-way redundant PSU (yes, it's overkill)
  Tyan h2000M motherboard
  2x AMD Opteron 2220 CPUs @ 2.8 GHz (dual-core)
  3Ware 9550SXU-12ML PCI-X RAID controller
  3Ware 9650SE-12ML PCIe RAID controller
  Intel PRO/1000MT PCI-X quad-port gigabit NIC
  24x 500 GB SATA harddrives
  2x CompactFlash drives in CF-to-IDE or CF-to-SATA adapters

The CompactFlash drives are mirrored with gmirror and hold / and /usr.
(/usr is there because we originally had some issues booting into
single-user mode and getting the zpool up and running.)

The zpool is configured with 3 raidz2 vdevs of 8 harddrives each, which
gives us ~10 TB of usable space in the pool.  Everything other than / and
/usr is on ZFS, including /usr/src, /usr/obj, /usr/ports, /var, /tmp,
/usr/local, /home, and so on.

Over the course of a backup run we average 80 MBytes/sec of writes, which
is limited by the horrible upload performance of the remote ADSL sites.
We've benchmarked the system maxing out at 550 MBytes/sec write and
5.5 GBytes/sec read.
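To make the layout easier to picture, the commands below sketch what it
looks like.  The gmirror name, pool name, and device names are made up
for illustration (our drives sit behind the two 3Ware controllers), so
adjust them for your own hardware:

  # CompactFlash mirror holding / and /usr (hypothetical device names)
  gmirror label -v -b round-robin gm0 ad0 ad2

  # One pool built from three 8-disk raidz2 vdevs
  # (hypothetical pool and device names)
  zpool create storage \
      raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
      raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
      raidz2 da16 da17 da18 da19 da20 da21 da22 da23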
We had to do a lot of manual tuning when we started out, limiting
vm.kmem_size_max and vfs.zfs.arc_max and disabling prefetch
(vfs.zfs.prefetch_disable=1), as we started with 7-STABLE shortly after
7.0 was released.  (There's a rough sketch of the loader.conf entries in
question at the end of this message.)  With FreeBSD 7.2 we've removed
that tuning, but we've left prefetch disabled: with prefetch enabled, the
system would lock up after about 5 hours of heavy rsync usage, once there
was no swap space left.

Our backups are done using rsync.  We serialise the backups of the
systems at each remote site, but run the backups for multiple sites in
parallel.  The only "non-standard" change we made was to switch to
openssh-portable from ports and enable the HPN patches; after tuning the
network sysctls for HPN, our rsync throughput went up by 30%.

Other than the original attempt to use USB sticks instead of
CompactFlash, the initial tuning phase, and the experiments with
prefetch, the system has been rock solid.

Our next big ZFS project will use similar hardware to create our own SAN
setup, using iSCSI exports, for a virtualisation setup (Linux+KVM on the
processing nodes, FreeBSD+ZFS on the storage nodes).

--
Freddie Cash
fjwcash@gmail.com
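P.S.  For anyone who wants to reproduce the tuning described above, the
loader.conf knobs in question look something like this.  The values are
placeholders rather than the exact numbers we ran with (size them to the
RAM in your box); on 7.2 we only keep the prefetch line:

  # /boot/loader.conf  (7.x-era ZFS tuning; illustrative values)
  vm.kmem_size_max="1536M"        # placeholder value; dropped once we moved to 7.2
  vfs.zfs.arc_max="512M"          # placeholder value; dropped once we moved to 7.2
  vfs.zfs.prefetch_disable="1"    # still set; prefetch plus heavy rsync locked us up

On the HPN side I won't swear these are the exact sysctls we touched, but
the usual candidates are the socket buffer limits, e.g.:

  # /etc/sysctl.conf  (illustrative values only)
  kern.ipc.maxsockbuf=2097152
  net.inet.tcp.sendbuf_max=2097152
  net.inet.tcp.recvbuf_max=2097152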