From: Oleksii Tsvietnov <me@vorakl.name>
Date: Tue, 22 Jan 2013 16:01:25 +0200
To: freebsd-stable@freebsd.org
Subject: busy on all disks that are a part of the ZFS pools without any load

Hello.

I have a problem with all disks that are part of ZFS pools.
iostat shows a busy state (6-7%) on all of them, but only there:
there is no load on the disks at all, and the other system
utilities, such as gstat, 'zpool iostat', and 'systat -iostat',
all report zero busy status.

24 disks -> 8 ZFS pools
All 24 disks are attached to a 3ware controller in JBOD mode (each
disk is exported as a single drive, without any hardware RAID).
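
In case the exact controller/unit layout matters, the per-unit
configuration can be dumped with tw_cli (from the sysutils/tw_cli
port); /c0 is an assumption for the controller number here:

# tw_cli /c0 show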

twa0: <3ware 9000 series Storage Controller> port 0xd800-0xd8ff mem 
0xf6000000-0xf7ffffff,0xfaedf000-0xfaedffff irq 16 at device 0.0 on pci2
twa0: INFO: (0x15: 0x1300): Controller details:: Model 9650SE-24M8, 24 
ports, Firmware FE9X 4.08.00.006, BIOS BE9X 4.08.00.001

# uname -a
FreeBSD gfs521 9.1-STABLE FreeBSD 9.1-STABLE #5 r245163:

# iostat -xzt da,scsi
                         extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
da0       12.6   2.5  1248.5   173.2    0  29.0   7
da1       12.6   2.6  1227.7   173.2    0  22.6   6
da2       12.5   2.5  1233.3   173.2    0  29.3   7
da3       10.2   2.4   994.7   165.5    0  28.1   6
da4       10.5   2.4  1035.5   165.5    0  28.0   6
da5       10.7   2.4  1049.8   165.5    0  28.6   6
da6       14.6   2.5  1418.9   165.2    0  28.4   8
da7       14.4   2.5  1387.2   165.2    0  28.6   8
da8       14.3   2.5  1376.7   165.2    0  27.8   8
da9       10.8   2.5  1065.2   161.7    0  27.0   6
da10      11.0   2.5  1100.9   161.7    0  27.5   6
da11      10.4   2.5  1015.1   161.7    0  27.6   6
da12      13.5   2.4  1365.8   168.7    0  28.9   7
da13      13.9   2.4  1364.2   168.7    0  26.9   7
da14      13.9   2.4  1373.9   168.7    0  27.1   7
da15      13.6   2.6  1308.5   165.3    0  24.5   7
da16      14.3   2.5  1417.0   165.3    0  24.9   7
da17      14.0   2.5  1376.6   165.3    0  25.1   7
da18      17.0   2.4  1697.2   164.4    0  19.8   6
da19      16.0   2.4  1578.0   164.4    0  20.2   6
da20      16.5   2.4  1635.6   164.4    0  23.5   7
da21       8.7   2.5   802.8   186.3    0  27.2   6
da22       8.7   2.5   800.1   186.3    0  26.9   6
da23       8.6   2.5   797.0   186.3    0  27.1   6
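
Note that the report above is from a single iostat invocation, so
unless I'm misreading iostat(8), these figures are averaged since
boot rather than showing current activity. A repeating sample (the
5-second interval is arbitrary) would show the instantaneous
picture instead:

# iostat -xzt da,scsi -w 5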

# gstat
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
     0      0      0      0    0.0      0      0    0.0    0.0| da0
     0      0      0      0    0.0      0      0    0.0    0.0| da1
     0      0      0      0    0.0      0      0    0.0    0.0| da2
     0      0      0      0    0.0      0      0    0.0    0.0| da3
     0      0      0      0    0.0      0      0    0.0    0.0| da4
     0      0      0      0    0.0      0      0    0.0    0.0| da5
     0      0      0      0    0.0      0      0    0.0    0.0| da6
     0      0      0      0    0.0      0      0    0.0    0.0| da7
     0      0      0      0    0.0      0      0    0.0    0.0| da8
     0      0      0      0    0.0      0      0    0.0    0.0| da9
     0      0      0      0    0.0      0      0    0.0    0.0| da10
     0      0      0      0    0.0      0      0    0.0    0.0| da11
     0      0      0      0    0.0      0      0    0.0    0.0| da12
     0      0      0      0    0.0      0      0    0.0    0.0| da13
     0      0      0      0    0.0      0      0    0.0    0.0| da14
     0      0      0      0    0.0      0      0    0.0    0.0| da15
     0      0      0      0    0.0      0      0    0.0    0.0| da16
     0      0      0      0    0.0      0      0    0.0    0.0| da17
     0      0      0      0    0.0      0      0    0.0    0.0| da18
     0      0      0      0    0.0      0      0    0.0    0.0| da19
     0      0      0      0    0.0      0      0    0.0    0.0| da20
     0      0      0      0    0.0      0      0    0.0    0.0| da21
     0      0      0      0    0.0      0      0    0.0    0.0| da22
     0      0      0      0    0.0      0      0    0.0    0.0| da23
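
gstat, by contrast, samples GEOM statistics over its refresh
interval, so it reflects only current activity. A one-shot batch
sample over a longer window (the interval is arbitrary) can be
taken with:

# gstat -b -I 5s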

# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data1       11.4G  5.43T      0      0      0      0
data2       9.05G  5.43T      0      0      0      0
data3       10.1G  5.43T      0      0      0      0
data4       4.15G  5.43T      0      0      0      0
data5       11.9G  5.43T      0      0      0      0
data6       10.1G  5.43T      0      0      0      0
data7       76.1G  5.36T      0      0      0      0
data8       5.38M  5.44T      0      0      0      0
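
The same caveat applies here, as I understand it: without an
interval argument, 'zpool iostat' prints averages since boot. A
live view (5-second interval, again arbitrary) can be had with:

# zpool iostat 5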

# zpool status -xv
all pools are healthy


Thanks for any ideas!