From: Thiago Damas <tdamas@gmail.com>
Date: Sat, 12 Jun 2010 12:36:22 -0300
To: freebsd-hackers@freebsd.org
Subject: strange zfs behavior

  Hi,
  I'm testing a configuration using ZFS with 4 Seagate disks:

ad4: 953869MB at ata2-master UDMA100 SATA 3Gb/s
ad6: 953869MB at ata3-master UDMA100 SATA 3Gb/s
ad8: 953869MB at ata4-master UDMA100 SATA 3Gb/s
ad10: 953869MB at ata5-master UDMA100 SATA 3Gb/s

The system is amd64 8.1-BETA1 (also tested on 8.0-p3). My only tuning,
in /boot/loader.conf, is:

vm.kmem_size_scale="2"
vfs.zfs.txg.timeout=5

The machine has 4GB of RAM, and the SATA controller is an LSI53C1020/1030
(adaptec 1020).

At first I used the following:

zpool create -f -m /storage tank mirror /dev/ad4 /dev/ad6 mirror /dev/ad8 /dev/ad10

and I noticed ad10 was slower than the others:

svc_t:
http://i48.tinypic.com/34s1ndd.gif
http://i45.tinypic.com/m9x6ra.gif

wait:
http://i47.tinypic.com/2uqksv5.gif
http://i49.tinypic.com/200qza9.gif

Then I swapped the configuration:

zpool create -f -m /storage tank mirror /dev/ad10 /dev/ad8 mirror /dev/ad6 /dev/ad4

and now ad4 is slower than the others:

svc_t:
http://i49.tinypic.com/2uxtqww.gif
http://i50.tinypic.com/10dbcix.gif

wait:
http://i46.tinypic.com/331f5lf.gif
http://i46.tinypic.com/2lc7c5k.gif

Will the last disk in the ZFS configuration always perform like that?
Any comments?
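
For reference, the svc_t and wait numbers in the graphs are the extended
device statistics columns; to watch them live while the pool is under load,
something like this works (a sketch: the device list, the gstat filter, and
the 5-second interval are just my choices):

iostat -x -w 5 ad4 ad6 ad8 ad10
gstat -f 'ad(4|6|8|10)$'
zpool iostat -v tank 5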
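One way to rule out the drives themselves would be a raw sequential read of
each disk outside ZFS (a sketch; it is read-only, so it cannot hurt the
pool, but it will add load while it runs):

dd if=/dev/ad4 of=/dev/null bs=1m count=10000
dd if=/dev/ad6 of=/dev/null bs=1m count=10000
dd if=/dev/ad8 of=/dev/null bs=1m count=10000
dd if=/dev/ad10 of=/dev/null bs=1m count=10000

If all four read at about the same rate, the slowness is more likely in how
ZFS distributes I/O across the vdevs than in the hardware; checking SMART
data on the slow disk (smartctl -a /dev/ad10, from sysutils/smartmontools,
assuming the port is installed) would also show whether it is remapping
sectors.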