Subject: Re: High CPU Interrupt using ZFS
From: Paul Kraus <paul@kraus-haus.org>
Date: Sun, 19 Jun 2016 16:45:48 -0400
To: Kaya Saman
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>

> On Jun 19, 2016, at 3:38 PM, Kaya Saman wrote:
>
> # zpool list
> NAME         SIZE   ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> ZPOOL_2     27.2T  26.3T    884G         -   41%  96%  1.00x  ONLINE  -
> ZPOOL_3      298G   248G   50.2G         -   34%  83%  1.00x  ONLINE  -
> ZPOOL_4     1.81T  1.75T   66.4G         -   25%  96%  1.00x  ONLINE  -
> ZPOOL_5      186G   171G   14.9G         -   62%  92%  1.00x  ONLINE  -
> workspaces   119G  77.7G   41.3G         -   56%  65%  1.00x  ONLINE  -
> zroot        111G  88.9G   22.1G         -   70%  80%  1.00x  ONLINE  -

Are you aware that ZFS performance drops substantially once a pool exceeds a certain percentage full? The exact threshold varies with pool type and workload, but it is generally considered a bad idea to run a pool more than 80% full with any configuration or workload. ZFS is designed first and foremost for data integrity, not performance, and running pools too full causes _huge_ write performance penalties. Does your system hang correspond to a write request to one of the pools that are more than 80% full? The pool that is at 92% capacity and 62% fragmented is especially at risk for this behavior.

The underlying reason for this behavior is that as a pool gets closer and closer to full, it takes longer and longer to find an appropriate free slab to write new data to. Since _all_ writes are treated as new data (that is the whole point of the copy-on-write design), _any_ write to a nearly full pool incurs this huge performance penalty.
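You can see the copy-on-write behavior for yourself with a quick experiment (illustrative only; the ZPOOL_2/data dataset and the file name below are made up, and I am assuming the dataset is mounted at the default /ZPOOL_2/data):

# dd if=/dev/random of=/ZPOOL_2/data/bigfile bs=1m count=100
# zfs snapshot ZPOOL_2/data@before
# dd if=/dev/random of=/ZPOOL_2/data/bigfile bs=1m count=100 conv=notrunc
# zfs list -t snapshot -o name,used ZPOOL_2/data@before

The snapshot's USED grows by roughly the 100 MB you just overwrote: the old blocks are never modified in place, the rewritten data goes to freshly allocated space, and finding that space is exactly the work that gets slow on a nearly full pool.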
This means that if you write your data once and _never_ modify it, and you can stand the write penalty as you add data to the mostly full pools, then you may be able to keep using ZFS like this; otherwise, just don't.

On my virtual hosts, running FreeBSD 10.x and VirtualBox, a pool more than 80% full makes the VMs unacceptably unresponsive, so I strive to keep the pools at less than 60% capacity. Disk storage is (relatively) cheap these days.
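Since capacity and fragmentation are ordinary pool properties (the CAP and FRAG columns in the listing above), a few lines of /bin/sh are enough to warn when a pool crosses the line. A minimal sketch, assuming only the standard zpool list flags; the 80 threshold is just the rule of thumb above, so adjust to taste:

#!/bin/sh
# Warn about any pool that is fuller than THRESHOLD percent.
# "zpool list -H" prints header-free, script-friendly output.
THRESHOLD=80
zpool list -H -o name,capacity,fragmentation | while read name cap frag; do
        pct=${cap%\%}   # capacity prints as e.g. "96%"; strip the % sign
        if [ "$pct" -gt "$THRESHOLD" ]; then
                echo "WARNING: pool $name is $cap full (fragmentation $frag)"
        fi
done

Run something like that from cron and you get a heads-up before a pool reaches the point where writes start to crawl.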