Date:      Fri, 20 Jan 2012 12:51:54 +0200
From:      Alexander Motin <mav@FreeBSD.org>
To:        Nikolay Denev <ndenev@gmail.com>
Cc:        Gary Palmer <gpalmer@freebsd.org>, FreeBSD-Current <freebsd-current@freebsd.org>, Dennis Kögel <dk@neveragain.de>, freebsd-geom@freebsd.org
Subject:   Re: RFC: GEOM MULTIPATH rewrite
Message-ID:  <4F19474A.9020600@FreeBSD.org>
In-Reply-To: <059C17DB-3A7B-41AA-BF91-2F8EBAF17D01@gmail.com>
References:  <4EAF00A6.5060903@FreeBSD.org> <05E0E64F-5EC4-425A-81E4-B6C35320608B@neveragain.de> <4EB05566.3060700@FreeBSD.org> <20111114210957.GA68559@in-addr.com> <059C17DB-3A7B-41AA-BF91-2F8EBAF17D01@gmail.com>

On 01/20/12 10:09, Nikolay Denev wrote:
> Another thing I've observed is that active/active probably only makes sense if you are accessing a single LUN.
> In my tests, where I have 24 LUNs that form 4 vdevs in a single zpool, the highest performance was achieved
> when I split the active paths among the controllers installed in the server importing the pool (basically "gmultipath rotate $LUN" in rc.local for half of the paths).
> Using active/active in this situation resulted in fluctuating performance.
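
[A minimal sketch of what that rc.local approach might look like; the device
names LUN00..LUN23 and the even/odd split are assumptions, not taken from this
thread:

  # /etc/rc.local: rotate the active path for every other multipath device,
  # so roughly half of the devices prefer the second controller.
  # Adjust the names to match your actual gmultipath labels.
  for i in 0 2 4 6 8 10 12 14 16 18 20 22; do
      gmultipath rotate LUN$(printf "%02d" $i)
  done

"gmultipath status" can then be used to verify which path each device is
currently using.]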

How big was the fluctuation? Between the speed of one path and the speed of all paths?

Several active/active devices that know nothing about each other will, with 
some probability, send part of their requests over the same links, while 
ZFS itself already does some balancing between vdevs.
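
As a rough back-of-the-envelope illustration (assuming requests are spread 
independently and uniformly over the links, which is a simplification): with 
2 links and 4 unrelated active/active devices each issuing a request, all 4 
requests land on the same link with probability 2 * (1/2)^4 = 1/8, so the 
instantaneous load per link fluctuates even though the long-run average is 
balanced.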

-- 
Alexander Motin


