From: Warner Losh
Date: Sat, 25 Nov 2017 10:38:21 -0700
Subject: Re: add BIO_NORETRY flag, implement support in ata_da, use in ZFS vdev_geom
To: Andriy Gapon
Cc: Scott Long, FreeBSD FS, freebsd-geom@freebsd.org

On Sat, Nov 25, 2017 at 9:58 AM, Andriy Gapon wrote:
> On 25/11/2017 18:25, Warner Losh wrote:
> > On Fri, Nov 24, 2017 at 10:17 AM, Andriy Gapon wrote:
> >
> > > On 24/11/2017 16:57, Scott Long wrote:
> > >
> > > >> On Nov 24, 2017, at 6:34 AM, Andriy Gapon wrote:
> > > >>
> > > >> On 24/11/2017 15:08, Warner Losh wrote:
> > > >>> On Fri, Nov 24, 2017 at 3:30 AM, Andriy Gapon wrote:
> > > >>>
> > > >>>     https://reviews.freebsd.org/D13224
> > > >>>
> > > >>>     Anyone interested is welcome to join the review.
> > > >>>
> > > >>> I think it's a really bad idea. It introduces a 'one-size-fits-all'
> > > >>> notion of QoS that seems misguided. It conflates a shorter timeout
> > > >>> with don't retry. And why is retrying bad? It seems more a notion
> > > >>> of 'fail fast' or some other concept. There's so many other ways
> > > >>> you'd want to use it. And it uses the same return code (EIO) to
> > > >>> mean something new. It's generally meant 'The lower layers have
> > > >>> retried this, and it failed, do not submit it again as it will not
> > > >>> succeed', rather than 'I gave it a half-assed attempt, and that
> > > >>> failed, but resubmission might work'. This breaks a number of
> > > >>> assumptions in the BUF/BIO layer as well as parts of CAM even more
> > > >>> than they are broken now.
> > > >>>
> > > >>> So let's step back a bit: what problem is it trying to solve?
> > > >>
> > > >> A simple example. I have a mirror, I issue a read to one of its
> > > >> members. Let's assume there is some trouble with that particular
> > > >> block on that particular disk. The disk may spend a lot of time
> > > >> trying to read it and would still fail. With the current defaults I
> > > >> would wait 5x that time to finally get the error back. Then I go to
> > > >> another mirror member and get my data from there.
> > > >
> > > > There are many RAID stacks that already solve this problem by having
> > > > a policy of always reading all disk members for every transaction,
> > > > and throwing away the sub-transactions that arrive late. It’s not a
> > > > policy that is always desired, but it serves a useful purpose for
> > > > low-latency needs.
> > >
> > > That's another possible and useful strategy.
> > >
> > > >> IMO, this is not optimal. I'd rather pass BIO_NORETRY to the first
> > > >> read, get the error back sooner and try the other disk sooner. Only
> > > >> if I know that there are no other copies to try would I use the
> > > >> normal read with all the retrying.
> > > >
> > > > I agree with Warner that what you are proposing is not correct. It
> > > > weakens the contract between the disk layer and the upper layers,
> > > > making it less clear who is responsible for retries and less clear
> > > > what “EIO” means. That contract is already weak due to poor design
> > > > decisions in VFS-BIO and GEOM, and Warner and I are working on a
> > > > plan to fix that.
> > >
> > > Well... I do realize now that there is some problem in this area, both
> > > you and Warner mentioned it. But knowing that it exists is not the
> > > same as knowing what it is :-) I understand that it could be rather
> > > complex and not easy to describe in a short email...
> > >
> > > But then, this flag is optional, it's off by default and no one is
> > > forced to use it. If it's used only by ZFS, then it would not be
> > > horrible.
> > Except that it isn't the same flag as what Solaris has (its B_FAILFAST
> > does something different: it isn't about limiting retries but about
> > failing ALL the queued I/O for a unit, not just trying one retry), and
> > the problems that it solves are quite rare. And if you return a
> > different errno, then the EIO contract is still fulfilled.
>
> Yes, it isn't the same.
> I think that illumos flag does even more.

Since it isn't the same, and there are no other systems that do a similar
thing, that ups the burden of proof that this is a good idea.

> > > Unless it makes things very hard for the infrastructure.
> > > But I am circling back to not knowing what problem(s) you and Warner
> > > are planning to fix.
> >
> > The middle layers of the I/O system are a bit fragile in the face of
> > I/O errors. We're fixing that.
>
> What are the middle layers?

The buffer cache and the lower layers of the UFS code are where the
problems chiefly lie.

> > Of course, you still haven't articulated why this approach would be
> > better
>
> Better than what?

Well, anything?

> > nor show any numbers as to how it makes things better.
>
> By now, I have. See my reply to Scott's email.

I just checked my email, and I've seen no such reply. I checked it before
I replied. Maybe it's just delayed.

Warner
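
To make the strategy under discussion concrete, here is a rough user-space
sketch of the mirror read policy Andriy describes. Every name in it
(mirror_member, member_read(), the fail_fast argument, RETRIES) is a
made-up stand-in, not the code in D13224 and not the real GEOM/CAM/
vdev_geom interfaces; it only illustrates the control flow: read the first
copy with a no-retry hint so a bad block fails after one attempt, and fall
back to the normal, fully retried read only when no other copy is left to
try.

/*
 * Rough sketch only: hypothetical names, not the D13224 patch and not
 * the real GEOM/CAM/vdev_geom interfaces.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

#define NMEMBERS 2
#define BLKSIZE  64
#define RETRIES  5      /* stand-in for the default retry count */

struct mirror_member {
        int     healthy;                /* pretend device state */
        char    data[BLKSIZE];
};

static int total_attempts;              /* crude stand-in for elapsed time */

/*
 * Hypothetical per-member read.  'fail_fast' plays the role of a hint
 * like the proposed BIO_NORETRY: one attempt instead of RETRIES of them.
 */
static int
member_read(struct mirror_member *m, char *buf, int fail_fast)
{
        int attempts = fail_fast ? 1 : RETRIES;

        total_attempts += m->healthy ? 1 : attempts;
        if (!m->healthy)
                return (EIO);
        memcpy(buf, m->data, BLKSIZE);
        return (0);
}

static int
mirror_read(struct mirror_member *members, char *buf)
{
        int error = EIO;

        for (int i = 0; i < NMEMBERS; i++) {
                /* Fail fast on every member except the last copy we can try. */
                error = member_read(&members[i], buf, i < NMEMBERS - 1);
                if (error == 0)
                        break;
        }
        return (error);
}

int
main(void)
{
        struct mirror_member members[NMEMBERS] = {
                { .healthy = 0 },                       /* bad copy */
                { .healthy = 1, .data = "good data" },  /* good copy */
        };
        char buf[BLKSIZE];

        if (mirror_read(members, buf) == 0)
                printf("read ok (%d attempts): %s\n", total_attempts, buf);
        else
                printf("read failed after %d attempts\n", total_attempts);
        return (0);
}

In this toy run the bad copy costs one attempt and the good copy one more;
under the default policy modeled by RETRIES (the "5x" wait Andriy
mentions), the bad copy alone would eat the whole retry cycle before the
second copy was ever tried.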