From: Ivan Voras
To: freebsd-geom@freebsd.org
Date: Sat, 27 Nov 2010 17:04:42 +0100
Subject: Re: GEOM profiling - how to?
In-Reply-To: <1299537757.20101127012903@serebryakov.spb.ru>

On 11/26/10 23:29, Lev Serebryakov wrote:
> Hello, Freebsd-geom.
>
> I'm doing some simple benchmarking of geom_raid5 in preparation for
> putting it into ports, and I noticed strange results.
>
> It is an array of 5 disks, stripesize=128k. All disks are SATA2 disks
> on ICH10R, AHCI driver (8.1-STABLE).
>
> Reading from the device itself (dd with bs=512k) gives exactly the
> speed of one HDD. gstat shows 100% load on the RAID geom and 1/5 of
> that speed (and 18-22% load) on each of the disk GEOMs.

The "100% load" gstat reports for the RAID geom is an approximation of
device load, not CPU load. I don't know how the graid5 module works
internally, but if it is like most GEOM modules, you will probably need
a much smaller stripe size: roughly 128 KiB divided by the number of
data disks, so that a single request can span multiple drives. In your
case, try a 32 KiB or 16 KiB stripe size.

> Reading a big file from the FS (dd with bs=512k, FS block size 32K,
> vfs.read_max=32) gives about twice the speed, and every disk GEOM is
> loaded 38-42%. CPU time is about 8% system, 0.5% interrupt, so the
> CPU is not a bottleneck.

With a large readahead (by the way, try larger vfs.read_max values,
like 128) you get parallelism at the drive hardware level rather than
in GEOM; this is why it works better.

> How can I profile I/O and GEOM?

There is no single answer to this question. Basically, you can use
gstat to observe the performance of every GEOM device individually,
and "top" and similar tools to observe CPU usage. If you turn on GEOM
logging, your logs will be swamped by a huge number of messages which
you could, in theory, write a tool to analyze.
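
A few concrete commands and numbers to go with the above. First, the
stripe size arithmetic (a back-of-the-envelope sketch, assuming the
usual 128 KiB MAXPHYS ceiling on a single request and that graid5
splits requests across stripes like a generic RAID5 class would):

  # 5 disks in RAID5 leave 4 data disks per stripe; a single 128 KiB
  # request covers a full stripe row only if stripesize <= 128/4 KiB
  echo $((128 / (5 - 1)))    # prints 32

So 32 KiB is the largest stripe size that lets one request hit all
four data disks at once; 16 KiB gives you a factor of two headroom.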
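
The readahead knob is a live sysctl, so it is easy to experiment with
(the file path below is just a placeholder):

  # check the current value, then raise it:
  sysctl vfs.read_max
  sysctl vfs.read_max=128
  # and repeat the file-level read test:
  dd if=/some/big/file of=/dev/null bs=512k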
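
For the observation part, something like this is what I have in mind
(the gstat filter regex is only an example; adjust it to your actual
provider names):

  # per-GEOM load, limited to the array and its member disks:
  gstat -f 'raid5|ada'
  # CPU usage with kernel threads visible, so the GEOM worker
  # threads (g_up, g_down, g_event) show up:
  top -SH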
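
As for GEOM logging: if memory serves, this means the trace bits in
the kern.geom.debugflags sysctl (G_T_TOPOLOGY = 1, G_T_BIO = 2,
G_T_ACCESS = 4 in sys/geom/geom.h). Per-bio tracing is the one that
floods the log, so only leave it on for a short test:

  # log every bio passing through GEOM to the kernel message buffer:
  sysctl kern.geom.debugflags=2
  # ... run a short workload ...
  sysctl kern.geom.debugflags=0
  dmesg | tail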