From owner-freebsd-geom@freebsd.org Wed Apr 19 03:47:13 2017
From: Steven Hartland <steven@multiplay.co.uk>
Date: Wed, 19 Apr 2017 03:47:01 +0000
Subject: Re: The geom_raid(8) is not load-balancing reads across all available subdisks
To: Alexander Motin, "M. Warner Losh", Maxim Sobolev, freebsd-geom@freebsd.org

In ZFS we look at the rotational property of the disk as well when
calculating which device to read from, which gave a significant
performance increase. See:

https://svnweb.freebsd.org/base?view=revision&revision=256956
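As a rough illustration of that approach (a simplified sketch only; the
names and constants below are invented, not the actual vdev_mirror.c
code from r256956): weight each mirror child by its queued I/O load,
and apply the head-locality bonus only to rotational devices, since an
SSD has no head to keep in position.

/*
 * Illustrative sketch -- invented names, not the real ZFS code.
 */
#include <sys/types.h>
#include <limits.h>

#define	MC_LOAD_SCALE		256	/* invented weighting constant */
#define	MC_ROTATING_PENALTY	(MC_LOAD_SCALE / 2)

struct mirror_child {
	int	mc_load;	/* outstanding I/Os on this child */
	int	mc_rotational;	/* 1 = spinning disk, 0 = SSD */
	off_t	mc_last_offset;	/* offset following the previous read */
};

static int
mirror_child_select(struct mirror_child *mc, int nchildren, off_t offset)
{
	int i, best, prio, bestprio;

	best = 0;
	bestprio = INT_MAX;
	for (i = 0; i < nchildren; i++) {
		/* Base priority: how busy this child already is. */
		prio = mc[i].mc_load * MC_LOAD_SCALE;
		if (mc[i].mc_rotational) {
			/* Prefer SSDs over spinning disks outright. */
			prio += MC_ROTATING_PENALTY;
			/* Head already in position - highly prefer it. */
			if (mc[i].mc_last_offset == offset)
				prio -= 2 * MC_LOAD_SCALE;
		}
		if (prio < bestprio) {
			bestprio = prio;
			best = i;
		}
	}
	return (best);	/* index of the child to read from */
}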
On Tue, 18 Apr 2017 at 23:17, Maxim Sobolev wrote:

> Hi, I got curious as to why running the build on my machine on top of
> the RAID1 volume seems to prefer loading one of the drives for
> reading. Digging into the code I found this:
>
>         prio += (G_RAID_SUBDISK_S_ACTIVE - sd->sd_state) << 16;
>         /* If disk head is precisely in position - highly prefer it. */
>         if (G_RAID_SUBDISK_POS(sd) == bp->bio_offset)
>                 prio -= 2 * G_RAID_SUBDISK_LOAD_SCALE;
>         else
>         /* If disk head is close to position - prefer it. */
>         if (ABS(G_RAID_SUBDISK_POS(sd) - bp->bio_offset) <
>             G_RAID_SUBDISK_TRACK_SIZE)
>                 prio -= 1 * G_RAID_SUBDISK_LOAD_SCALE;
>         if (prio < bestprio) {
>                 best = sd;
>                 bestprio = prio;
>         }
>
> Both my drives in the RAID are SSDs, so I am wondering if this might
> be the cause. On one hand, SSDs can still have some internal buffer to
> cache the nearby data blocks; on the other hand, it's really difficult
> to define how far that buffer might extend now and a few years from
> now. On top of that, a single SATA link is likely to be the bottleneck
> in today's systems (esp. with Intel XPoint) for getting the data into
> RAM, so perhaps ripping out this optimization for good and just
> round-robining requests between all available subdisks would be a
> better strategy going forward?
>
> -Max
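For concreteness, the round-robin strategy Maxim proposes might look
something like the following sketch (illustrative only; the structure
and names are invented, not the real g_raid ones): each read simply
goes to the next active subdisk in turn, with no attempt to model head
position.

/*
 * Illustrative sketch -- invented names, not real geom_raid code.
 */
#include <stddef.h>

#define	SD_ACTIVE	1

struct rr_subdisk {
	int	sd_state;	/* SD_ACTIVE when usable */
};

static int rr_cursor;	/* would live in the per-volume softc */

static struct rr_subdisk *
rr_subdisk_select(struct rr_subdisk *sd, int nsubdisks)
{
	int i, idx;

	for (i = 0; i < nsubdisks; i++) {
		idx = (rr_cursor + i) % nsubdisks;
		if (sd[idx].sd_state == SD_ACTIVE) {
			/* Start the next search after this subdisk. */
			rr_cursor = (idx + 1) % nsubdisks;
			return (&sd[idx]);
		}
	}
	return (NULL);	/* no active subdisk available */
}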