From owner-freebsd-fs@freebsd.org Tue Oct 3 14:58:24 2017
Subject: Re: ZFS prefers iSCSI disks over local ones ?
From: Steven Hartland
To: Ben RUBSON, Freebsd fs, FreeBSD-scsi
Cc: Andriy Gapon
Date: Tue, 3 Oct 2017 15:58:22 +0100
List-Id: Filesystems

On 03/10/2017 15:40, Ben RUBSON wrote:
> Hi,
>
> I'm starting a new thread to avoid confusion in the main one
> (ZFS stalled after some mirror disks were lost).
>
>> On 03 Oct 2017, at 09:39, Steven Hartland wrote:
>>
>>> On 03/10/2017 08:31, Ben RUBSON wrote:
>>>
>>>> On 03 Oct 2017, at 09:25, Steven Hartland wrote:
>>>>
>>>>> On 03/10/2017 07:12, Andriy Gapon wrote:
>>>>>
>>>>>> On 02/10/2017 21:12, Ben RUBSON wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On a FreeBSD 11 server, the following online/healthy zpool:
>>>>>>
>>>>>>   home
>>>>>>     mirror-0
>>>>>>       label/local1
>>>>>>       label/local2
>>>>>>       label/iscsi1
>>>>>>       label/iscsi2
>>>>>>     mirror-1
>>>>>>       label/local3
>>>>>>       label/local4
>>>>>>       label/iscsi3
>>>>>>       label/iscsi4
>>>>>>   cache
>>>>>>     label/local5
>>>>>>     label/local6
>>>>>>
>>>>>> Sustained read throughput is 180 MB/s, 45 MB/s on each iSCSI disk
>>>>>> according to "zpool iostat", and nothing on the local disks (strange,
>>>>>> but I have noticed that I/Os always prefer the iSCSI disks over the
>>>>>> local ones).
>>>>> Are your local disks SSD or HDD?
>>>>> Could it be that the iSCSI disks appear faster than the local disks
>>>>> to the smart ZFS mirror code?
>>>>>
>>>>> Steve, what do you think?
>>>> Yes, that's quite possible. The mirror balancing uses the queue depth
>>>> plus a rotating bias to determine the load of each disk, so if your
>>>> iSCSI host is keeping up well and/or is reporting non-rotating (versus
>>>> rotating for the local disks), it could well be that the mirror is
>>>> preferring reads from the less loaded iSCSI devices.
>>> Note that the local & iSCSI disks are _exactly_ the same HDDs (same
>>> model number, same SAS adapter...). So the iSCSI ones should be a
>>> little bit slower due to network latency (even if it's very low in my
>>> case).
>> The output from gstat -dp on a loaded machine would be interesting to
>> see too.
> So here is the gstat -dp output:
>
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy Name
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da0
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da1
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da2
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da3
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da4
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da5
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da6
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da7
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da8
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da9
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da10
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da11
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da12
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da13
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da14
>     1    370    370  47326    0.7      0      0    0.0      0      0    0.0   23.2| da15
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da16
>     0    357    357  45698    1.4      0      0    0.0      0      0    0.0   39.3| da17
>     0    348    348  44572    0.7      0      0    0.0      0      0    0.0   22.5| da18
>     0    432    432  55339    0.7      0      0    0.0      0      0    0.0   27.5| da19
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da20
>     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| da21
>
> The 4 active drives are the iSCSI targets of the pool quoted above.
>
> A local disk:
>
> Geom name: da7
> Providers:
> 1. Name: da7
>    Mediasize: 4000787030016 (3.6T)
>    Sectorsize: 512
>    Mode: r0w0e0
>    descr: HGSTxxx
>    lunid: 5000xxx
>    ident: NHGDxxx
>    rotationrate: 7200
>    fwsectors: 63
>    fwheads: 255
>
> An iSCSI disk:
>
> Geom name: da19
> Providers:
> 1. Name: da19
>    Mediasize: 3999688294912 (3.6T)
>    Sectorsize: 512
>    Mode: r1w1e2
>    descr: FREEBSD CTLDISK
>    lunname: FREEBSD MYDEVID 12
>    lunid: FREEBSD MYDEVID 12
>    ident: iscsi4
>    rotationrate: 0
>    fwsectors: 63
>    fwheads: 255
>
> So it sounds like the culprit is the rotationrate set to 0?

Absolutely, and from the looks of it you're not stressing the iSCSI disks
enough for them to build up high queue depths, hence the preference. As
load increases I would expect the local disks to start seeing activity.

    Regards
    Steve
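
To make the balancing behaviour discussed above concrete, here is a minimal
C sketch of the idea: each mirror child gets a load score built from its
pending queue depth plus a rotating/non-rotating bias, and the read is issued
to the child with the lowest score. This is an illustration only, not the
actual vdev_mirror.c code; the struct, helper names and constant values below
are hypothetical stand-ins (the real bias values are exposed on FreeBSD as the
vfs.zfs.vdev.mirror.* sysctls, if memory serves).

/*
 * Illustrative sketch only -- not the actual vdev_mirror.c code.  All
 * names and constants here are hypothetical stand-ins.
 */
#include <stdbool.h>
#include <stdint.h>

/* Simplified view of one mirror member. */
struct mirror_child {
	int	 pending_ios;	/* current queue depth on this vdev */
	bool	 nonrot;	/* device reports rotationrate == 0 */
	uint64_t last_offset;	/* offset of the last I/O issued to it */
};

/* Illustrative bias values (the real ones are tunable). */
#define	ROTATING_INC		0		/* flat penalty, rotating media */
#define	ROTATING_SEEK_INC	5		/* extra penalty for a long seek */
#define	ROTATING_SEEK_OFFSET	(1ULL << 20)	/* "near" = within 1 MB */
#define	NON_ROTATING_INC	0		/* flat penalty, non-rotating media */

/*
 * Score one child for a read at 'offset': lower is better.  Queue depth
 * dominates; the seek penalty on rotating members breaks ties in favour
 * of members classified as non-rotating.
 */
static int
mirror_child_load(const struct mirror_child *mc, uint64_t offset)
{
	int load = mc->pending_ios;

	if (mc->nonrot)
		return (load + NON_ROTATING_INC);

	load += ROTATING_INC;
	if (offset > mc->last_offset + ROTATING_SEEK_OFFSET ||
	    offset + ROTATING_SEEK_OFFSET < mc->last_offset)
		load += ROTATING_SEEK_INC;	/* long seek on an HDD */

	return (load);
}

/* Issue the read to the child with the lowest load score. */
static int
mirror_pick_child(const struct mirror_child *mc, int nchildren, uint64_t offset)
{
	int best = 0;

	for (int i = 1; i < nchildren; i++)
		if (mirror_child_load(&mc[i], offset) <
		    mirror_child_load(&mc[best], offset))
			best = i;
	return (best);
}

In the situation above, the CTL-backed LUNs advertise rotationrate 0, so they
are scored as non-rotating and never pay a seek penalty, while the identical
local HDDs do; with queue depths near zero everywhere, that bias alone is
enough to steer all reads to the iSCSI members. Having the iSCSI LUNs
advertise the real rotation rate of their backing disks, or adjusting the
mirror bias tunables, should even the reads out.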