Date: Thu, 03 Sep 2009 21:13:46 +0300
From: Alexander Motin <mav@FreeBSD.org>
To: Emil Mikulic <emikulic@gmail.com>
Cc: "Derek (freebsd lists)" <482254ac@razorfever.net>, FreeBSD-Current <freebsd-current@freebsd.org>
Subject: gmirror 'load' algorithm (Was: Re: siis/atacam/ata/gmirror 8.0-BETA3 disk performance)
Message-ID: <4AA0075A.5010109@FreeBSD.org>
In-Reply-To: <20090903002106.GB17538@dmr.ath.cx>
References: <h7lmvl$ebq$1@FreeBSD.cs.nctu.edu.tw> <4A9E8677.1020208@FreeBSD.org> <20090903002106.GB17538@dmr.ath.cx>
Emil Mikulic wrote:
> On Wed, Sep 02, 2009 at 05:51:35PM +0300, Alexander Motin wrote:
>> To completely load gmirror on read operations, you may need to run
>> two dd's at the same time. Also make sure that your gmirror runs in
>> round-robin mode. The default split mode, which should help with
>> linear reads, is IMHO ineffective, at least with the default MAXPHYS
>> and slice values.
>
> On that note, there is an excellent patch in this PR which improves
> the way gmirror schedules read requests to different disks:
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/113885
>
> Could someone please commit this?
>
> With this patch and a two-way mirror, I can run two linear scans of
> different files in parallel and get almost perfect scaling. (Result:
> this approximately halves the wall-clock time it takes to do a backup
> of some fat VM images.)
>
> IIRC, without the patch it's faster to run them sequentially. :(

I have played a bit with this patch on a 4-disk mirror. It works better
than the original algorithm, but it is still not perfect.

1. I ran into a situation with 4 read streams where 3 drives were busy
while the fourth one sat completely idle. gmirror preferred to keep
seeking one of the busy drives over short distances rather than use the
idle drive, because its heads were a few gigabytes away from that point.
IMHO the request locality priority should be made almost equal for any
nonzero distance. As we can see with split mode, even small gaps between
requests can significantly reduce drive performance, so it hardly matters
whether the data are 100MB or 500GB away from the current head position.
The perfect case is when requests are completely sequential, but anything
beyond a few megabytes from the current position just won't fit the drive
cache.

2. IMHO it would be much better to use the averaged request queue depth
as the load measure, instead of the last request submit time. Request
submit time works fine only for equal requests, equal drives, and a
serialized load, but that is exactly the case where complicated load
balancing is not needed. The fact that some drive just got a request
does not mean anything if another one got 50 requests a second ago and
is still processing them.

-- 
Alexander Motin
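
[Editor's note: for illustration, here is a minimal user-space sketch of
the two suggestions above. This is not the geom_mirror code; the names
(struct mdisk, choose_disk(), SEQ_WINDOW, SEQ_BONUS) and the constant
values are hypothetical. Load is measured as the disk's current queue
depth, and locality is reduced to a flat bonus for requests that land
within a small window after the disk's previous request.]

#include <sys/types.h>   /* off_t */
#include <stddef.h>      /* size_t */

#define SEQ_WINDOW  (8 * 1024 * 1024)  /* "close enough" window in bytes (assumed) */
#define SEQ_BONUS   4                  /* weight of locality vs. queue depth (assumed) */

struct mdisk {
	off_t	next_offset;	/* offset right after the disk's last request */
	int	queue_len;	/* requests currently outstanding on this disk */
};

/*
 * Pick a disk for a read starting at req_offset with length req_len.
 * Score = -(queue depth) plus a flat bonus for near-sequential requests.
 */
static struct mdisk *
choose_disk(struct mdisk *disks, int ndisks, off_t req_offset, size_t req_len)
{
	struct mdisk *best = NULL;
	int best_score = 0, score, i;

	for (i = 0; i < ndisks; i++) {
		/* Fewer queued requests means a higher score. */
		score = -disks[i].queue_len;

		/*
		 * Flat locality bonus: only a request that (nearly) continues
		 * the previous one gets it; being 100MB or 500GB away from the
		 * head is treated the same, i.e. no bonus at all.
		 */
		if (req_offset >= disks[i].next_offset &&
		    req_offset - disks[i].next_offset < SEQ_WINDOW)
			score += SEQ_BONUS;

		if (best == NULL || score > best_score) {
			best = &disks[i];
			best_score = score;
		}
	}
	best->queue_len++;		/* decremented again when the request completes */
	best->next_offset = req_offset + (off_t)req_len;
	return (best);
}

With this scoring, an idle drive (queue_len == 0) beats a busy drive even
when the busy drive is "closer", unless the request is genuinely
sequential with that drive's previous one; that directly addresses both
points above.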