From: Daniel Kalchev <daniel@digsys.bg>
To: Stefan Esser
Cc: FreeBSD Current <freebsd-current@freebsd.org>
Date: Mon, 19 Dec 2011 23:07:36 +0200
Subject: Re: Uneven load on drives in ZFS RAIDZ1
In-Reply-To: <4EEFA5E4.9070803@freebsd.org>
References: <4EEF488E.1030904@freebsd.org> <83648C73-E45F-4ABA-8E83-4C8903A683AB@digsys.bg> <4EEFA5E4.9070803@freebsd.org>
List-Id: Discussions about the use of FreeBSD-current

On Dec 19, 2011, at 11:00 PM, Stefan Esser wrote:

> On 19.12.2011 19:03, Daniel Kalchev wrote:
>> I have observed similar behavior, even more extreme, on a pool with
>> dedup enabled. Is dedup enabled on this pool?
>
> Thank you for the report!
>
> Well, I had dedup enabled for a few short tests.
> But since I have got "only" 8GB of RAM, and dedup seems to require an
> order of magnitude more to work well, I switched dedup off again after
> a few hours.

You will need to get rid of the DDT, because the dedup tables are still read even after dedup has been disabled -- they refer to the data that was already deduplicated.

In my case, I had about 2-3TB of deduped data, with 24GB RAM. There was no shortage of RAM and I could not confirm that the ARC was full... but somehow the pool was placing heavy reads on only one or two disks (all the others nearly idle) -- apparently many small-sized reads.

I resolved my issue by copying the data to a newly created filesystem in the same pool -- luckily there was enough space available -- and then removing the 'deduped' filesystems. That last operation was particularly slow, and at one point I had a spontaneous reboot: the pool was impossible to mount, and, as weird as it sounds, 'out of swap space' killed the 'zpool list' process. I let it sit for a few hours, until it had cleared itself. I/O on that pool is back to normal now.

There is something terribly wrong with the dedup code.

Well, if your test data is not valuable, you can just delete it. :)

Daniel
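P.S. For anyone who wants to check whether leftover dedup tables are still weighing on a pool, the DDT statistics can be inspected roughly like this. This is only a sketch -- 'tank' is a placeholder pool name, and the commands need root:

```shell
# Per-pool dedup table summary; the "dedup: DDT entries ..." line
# shows how many entries remain on disk, even after dedup=off.
zpool status -D tank

# More detailed DDT histogram (allocated vs. referenced blocks).
zdb -DD tank

# Confirm the current dedup property on every filesystem in the pool.
zfs get -r dedup tank
```

A pool whose filesystems all show dedup=off but which still reports a large number of DDT entries is in exactly the situation described above: the tables stay until the deduped blocks themselves are freed.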
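P.P.S. The copy-then-destroy workaround I described would look roughly like the following. The dataset names 'tank/old' and 'tank/new' are purely illustrative, and a verified backup beforehand is prudent:

```shell
# Stop any new blocks from being deduplicated.
zfs set dedup=off tank/old

# Snapshot the deduped filesystem and copy it into a fresh dataset
# in the same pool (zfs send/recv preserves snapshots and properties;
# a plain rsync or cp of the data would work as well).
zfs snapshot tank/old@migrate
zfs send tank/old@migrate | zfs receive tank/new

# Once the copy has been verified, destroy the deduped filesystem.
# This is the slow step: every freed block must be looked up in the DDT.
zfs destroy -r tank/old
```

Only after the destroy completes (and the freed blocks drop out of the DDT) does the small-read load on individual disks go away.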