From: Freddie Cash <fjwcash@gmail.com>
To: Andriy Gapon
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Date: Thu, 25 Apr 2013 08:51:44 -0700
Subject: Re: Strange slowdown when cache devices enabled in ZFS
In-Reply-To: <51487CE1.5090703@FreeBSD.org>
References: <51430744.6020004@FreeBSD.org> <51487CE1.5090703@FreeBSD.org>

I haven't had a chance to run any of the DTrace scripts on any of my ZFS
systems, but I have narrowed down the issue a bit.

If I set primarycache=all and secondarycache=all, then adding an L2ARC
device to the pool leads to zfskern{l2arc_feed_thread} taking up 100% of
one CPU core and stalling I/O to the pool.

If I set primarycache=all and secondarycache=metadata, then adding an
L2ARC device to the pool speeds things up: zfs send/recv saturates a
1 Gbps link, and the nightly rsync backup run finishes 4 hours earlier.

I haven't tested the other two combinations (metadata/metadata;
metadata/all) yet.

This is consistent across two ZFS systems so far:
- 8-core Opteron 6100-series CPU with 48 GB of RAM; 44 GB ARC,
  40 GB metadata limit; 3x raidz2
- 2x 8-core Opteron 6100-series CPUs with 128 GB of RAM; 64 GB ARC,
  60 GB metadata limit; 5x raidz2

Still reading up on dtrace/hwpmc as time permits. Just wanted to pass
along the above to show I haven't forgotten about this. :) $JOB/$LIFE
slows things down sometimes. :)

--
Freddie Cash
fjwcash@gmail.com
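[For anyone wanting to reproduce the comparison above, a rough sketch of the
property changes and cache-device attachment follows. The pool name "storage"
and cache device "da0" are placeholders, not taken from the original report;
run as root against the actual pool and device. These commands change live
pool behavior, so treat this as an illustration, not a recipe.]

```shell
# Working configuration from the report: keep all data in the ARC,
# but cache only metadata on the L2ARC device.
# ("storage" and "da0" are placeholder names.)
zfs set primarycache=all storage
zfs set secondarycache=metadata storage

# Attach the cache (L2ARC) device to the pool.
zpool add storage cache da0

# Verify the resulting settings and pool layout.
zfs get primarycache,secondarycache storage
zpool status storage
```

With secondarycache=all instead of metadata, the report observed the
l2arc_feed_thread kernel thread pegging a CPU core and stalling pool I/O.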