From owner-freebsd-fs@FreeBSD.ORG Sat Feb 15 20:22:10 2014
Date: Sat, 15 Feb 2014 12:22:03 -0800
Subject: Re: l2arc_feed_thread cpu utilization
From: Brendan Gregg
To: Andriy Gapon
Cc: freebsd-fs
In-Reply-To: <52FF566D.3060601@FreeBSD.org>
References: <52B2D8D6.8090306@FreeBSD.org> <52FE0378.7070608@FreeBSD.org> <52FF566D.3060601@FreeBSD.org>
List-Id: Filesystems

G'Day Andriy,

On Sat, Feb 15, 2014 at 3:58 AM, Andriy Gapon wrote:
> on 14/02/2014 22:23 Brendan Gregg said the following:
> > G'Day Andriy,
> >
> > Thanks for the patch. If most of the data is in one list (does anyone
> > have statistics to confirm such a likelihood? I know this happened a
> > lot pre-list-split), then I think this means we only scan that at
> > 1/32nd of the previous rate. It should solve the CPU issue, but could
> > make warmup very slow.
>
> Brendan,
>
> I do not have any stats, but I think that the data should be spread more
> or less evenly between the lists. I mean the 16 sub-lists for data and
> 16 sub-lists for metadata. First, a list is picked based on a hash, and
> that _should_ produce a more or less even distribution. Second, if the
> hash function is not good enough, then the whole list splitting is
> pointless.
> In either case this was just a quick hack on my part.

Ah, I'm sorry, I should have read more of the code earlier; I had assumed
the split algorithm was something else, and I'm wrong. It should be even.

So, based on get_buf_info(), I think we can DTrace how buf_hash() is
mapped to the lists to get an idea of the distribution.
Eg (on illumos, which has the same buf_hash() code):

# dtrace -n 'fbt::buf_hash:return { @ = lquantize(arg1 & (32 - 1), 0, 32, 1); } tick-30s { exit(0); }'
dtrace: description 'fbt::buf_hash:return ' matched 2 probes
CPU     ID                    FUNCTION:NAME
  7     30                        :tick-30s

           value  ------------- Distribution ------------- count
             < 0 |                                         0
               0 |@@                                       20581
               1 |@                                        12578
               2 |@                                        6004
               3 |@@                                       15215
               4 |@                                        4660
               5 |                                         2952
               6 |                                         3678
               7 |@@                                       14091
               8 |                                         2402
               9 |@@                                       20998
              10 |@                                        5805
              11 |@                                        6564
              12 |@@@@                                     35560
              13 |@                                        13021
              14 |                                         4348
              15 |@@                                       17035
              16 |@@                                       15406
              17 |@                                        5512
              18 |@                                        13222
              19 |@                                        5488
              20 |@                                        5404
              21 |@@                                       13583
              22 |@                                        7453
              23 |@                                        4794
              24 |                                         3738
              25 |@@@                                      24918
              26 |@@                                       15566
              27 |@                                        5324
              28 |@                                        12112
              29 |@@                                       13966
              30 |@@                                       16668
              31 |@                                        9904
           >= 32 |                                         0

So that looks reasonably even - every bucket is in use. I think your patch
should be good.

Brendan

--
Brendan Gregg, Joyent
http://dtrace.org/blogs/brendan