From owner-freebsd-performance@FreeBSD.ORG Sat Feb 7 01:39:41 2004
Return-Path: 
Delivered-To: freebsd-performance@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 2F15716A4CE
	for ; Sat, 7 Feb 2004 01:39:41 -0800 (PST)
Received: from geminix.org (gen129.n001.c02.escapebox.net [213.73.91.129])
	by mx1.FreeBSD.org (Postfix) with ESMTP id E568843D1D
	for ; Sat, 7 Feb 2004 01:39:40 -0800 (PST)
	(envelope-from gemini@geminix.org)
Message-ID: <4024B259.8010005@geminix.org>
Date: Sat, 07 Feb 2004 10:39:37 +0100
From: Uwe Doering
Organization: Private UNIX Site
User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.6) Gecko/20040119
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: freebsd-performance@freebsd.org
References: <4025B3BE.3020009@qdsdirect.com>
In-Reply-To: <4025B3BE.3020009@qdsdirect.com>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Received: from gemini by geminix.org with asmtp (TLSv1:AES256-SHA:256)
	(Exim 3.36 #1) id 1ApOw7-000Jua-00; Sat, 07 Feb 2004 10:39:39 +0100
Subject: Re: Raid 5 performance
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Performance/tuning
X-List-Received-Date: Sat, 07 Feb 2004 09:39:41 -0000

Todd Lewis wrote:
> I am using FreeBSD 4.9 with a 3ware RAID 5 controller,
> 1 GB of memory, and a 2.8 GHz P4.
>
> Three questions.
>
> 1. FreeBSD has a 16k block size. The RAID card is set at a 64k
>    block size (its sweet spot). My logic tells me that
>    increasing the block size to 64k would increase disk
>    read and write performance. But everything I read suggests that
>    going above 64k is dangerous. Are there any recommendations
>    on performance and stability concerns when increasing the
>    block size to 64k when using a RAID controller?

A RAID controller normally has nothing to do with the file system's 
block size.
Are you sure that you're not mixing this up with the stripe size? 
Which stripe size to use with a RAID controller depends on your 
performance priorities.  If there are a lot of concurrent disk 
operations, a larger stripe size is better, because then a single disk 
operation tends to be confined to just one disk drive, leaving the 
remaining drives free to perform other, possibly unrelated disk 
operations at the same time.  On the other hand, if sequential I/O 
throughput is important, a smaller stripe size is better, because a 
single large request then gets spread across several drives that can 
transfer in parallel.

> 2. The vfs.hirunningspace variable defaults to 1 MB. From what I've
>    read this looks like a buffer. I'm guessing that it's set to
>    1 MB because most drives have 1-2 MB of cache. So, following
>    that logic and with safety in mind: for drives with 4 MB of
>    cache I would set vfs.hirunningspace to 2 MB; with 8 MB of
>    cache, 4 MB for vfs.hirunningspace. So my RAID controller with
>    64 MB would get a vfs.hirunningspace of 32 MB.

It is my experience, too, that this variable is too low by default for 
"intelligent" disk controllers with large buffers.  However, the 
buffer space for outstanding disk operations is taken from the 
kernel's disk I/O buffer space, which is normally auto-sized at boot 
time based on the amount of memory you have, though you can override 
it.  You may want to check 'vfs.maxbufspace' and make 
'vfs.hirunningspace' only a fraction of it -- not more than 1/4, for 
instance.  Adapting 'vfs.lorunningspace' accordingly is also a good 
idea (the two form a hysteresis).

    Uwe
-- 
Uwe Doering         |  EscapeBox - Managed On-Demand UNIX Servers
gemini@geminix.org  |  http://www.escapebox.net
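
[The tuning arithmetic above can be sketched as a small shell snippet. 
This is only an illustration: the 64 MB value for 'vfs.maxbufspace' is 
an assumed example (on a live FreeBSD system you would read the real 
value with 'sysctl -n vfs.maxbufspace'), and the 1/2 ratio for the 
low-water mark is likewise just one reasonable choice, not something 
prescribed in the thread.]

```shell
#!/bin/sh
# Assumed example value; on FreeBSD read the real one with:
#   maxbufspace=$(sysctl -n vfs.maxbufspace)
maxbufspace=$((64 * 1024 * 1024))

# Keep hirunningspace at no more than 1/4 of maxbufspace, and put
# lorunningspace (the low-water mark) below it so the two act as a
# hysteresis pair.
hirunningspace=$((maxbufspace / 4))
lorunningspace=$((hirunningspace / 2))

echo "vfs.hirunningspace=$hirunningspace"
echo "vfs.lorunningspace=$lorunningspace"

# To apply on FreeBSD (as root):
#   sysctl vfs.hirunningspace=$hirunningspace
#   sysctl vfs.lorunningspace=$lorunningspace
```

[With the assumed 64 MB of maxbufspace this suggests 16 MB for the 
high-water mark and 8 MB for the low-water mark.]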