Date: Fri, 20 Jan 2012 13:08:29 +0200
From: Nikolay Denev <ndenev@gmail.com>
To: Alexander Motin <mav@freebsd.org>
Cc: Gary Palmer <gpalmer@freebsd.org>, FreeBSD-Current <freebsd-current@freebsd.org>, Dennis Kögel <dk@neveragain.de>, "freebsd-geom@freebsd.org" <freebsd-geom@freebsd.org>
Subject: Re: RFC: GEOM MULTIPATH rewrite
Message-ID: <-2439788735531654851@unknownmsgid>
In-Reply-To: <4F19474A.9020600@FreeBSD.org>
References: <4EAF00A6.5060903@FreeBSD.org> <05E0E64F-5EC4-425A-81E4-B6C35320608B@neveragain.de> <4EB05566.3060700@FreeBSD.org> <20111114210957.GA68559@in-addr.com> <059C17DB-3A7B-41AA-BF91-2F8EBAF17D01@gmail.com> <4F19474A.9020600@FreeBSD.org>
On 20.01.2012, at 12:51, Alexander Motin <mav@freebsd.org> wrote:

> On 01/20/12 10:09, Nikolay Denev wrote:
>> Another thing I've observed is that active/active probably only makes
>> sense if you are accessing a single LUN. In my tests, where I have 24
>> LUNs that form 4 vdevs in a single zpool, the highest performance was
>> achieved when I split the active paths among the controllers installed
>> in the server importing the pool (basically "gmultipath rotate $LUN" in
>> rc.local for half of the paths). Using active/active in this situation
>> resulted in fluctuating performance.
>
> How big was the fluctuation? Between the speed of one path and that of
> all paths?
>
> Several active/active devices that have no knowledge of each other will,
> with some probability, send part of their requests over the same links,
> while ZFS itself already does some balancing between vdevs.
>
> --
> Alexander Motin

I will test in a bit and post results.

P.S.: Is there a way to enable/disable active-active on the fly? I'm
currently re-labeling to achieve that.
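[For reference, a minimal sketch of the rc.local approach described above,
i.e. rotating the active path for half of the multipath devices so the
active paths end up split between the two controllers. The label names
(LUN_0 .. LUN_23) and the even/odd split are hypothetical; use the names
chosen at "gmultipath label" time and whatever split suits the actual
controller layout.]

    #!/bin/sh
    # /etc/rc.local -- rotate the active path for every second multipath
    # LUN so the active paths are spread across both controllers.
    lun=0
    while [ "$lun" -lt 24 ]; do
        if [ $((lun % 2)) -eq 1 ]; then
            # gmultipath rotate switches the active provider to the next
            # available path for the named multipath device.
            gmultipath rotate "LUN_${lun}"
        fi
        lun=$((lun + 1))
    done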