From: Andriy Gapon
Date: Tue, 12 Dec 2017 18:26:30 +0200
Subject: Re: add BIO_NORETRY flag, implement support in ata_da, use in ZFS vdev_geom
To: Warner Losh
Cc: Scott Long, FreeBSD FS, freebsd-geom@freebsd.org
Message-ID: <8fde1a9e-ea32-af9e-de7f-30e7fe1738cd@FreeBSD.org>

On 25/11/2017 19:57, Warner Losh wrote:
> Let's walk through this. You see that it takes a long time to fail an I/O.
> A perfectly reasonable observation. There are two reasons for this. One is
> that the disks take a while to make an attempt to get the data. The second
> is that the system has a global policy that's biased towards 'recover the
> data' over 'fail fast'. These can be fixed by reducing the timeouts, or by
> lowering the read-retry count for a given drive or globally, as a policy
> decision made by the system administrator.
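
To put rough numbers on how those two knobs compound into the slow failures
I saw, here is a toy model; the io_policy struct, worst_case_failure_s(),
and all the constants are made up for illustration and are not the actual
CAM code or its defaults:

/*
 * Toy model, not the actual CAM code: how per-attempt timeout and
 * retry count combine into the worst-case time to report a failure.
 * All names and numbers here are made up for illustration.
 */
#include <stdio.h>

struct io_policy {
	int	timeout_s;	/* per-attempt command timeout, seconds */
	int	retries;	/* additional attempts after the first */
};

static int
worst_case_failure_s(const struct io_policy *p)
{
	/* The first attempt and every retry can each run to full timeout. */
	return ((1 + p->retries) * p->timeout_s);
}

int
main(void)
{
	struct io_policy recover = { 30, 4 };	/* recovery-biased */
	struct io_policy failfast = { 5, 0 };	/* fail-fast */

	printf("recover-the-data: up to %d s to fail one I/O\n",
	    worst_case_failure_s(&recover));
	printf("fail-fast:        up to %d s to fail one I/O\n",
	    worst_case_failure_s(&failfast));
	return (0);
}

With the recovery-biased values above, a single unreadable block can take two
and a half minutes to come back as an error; the fail-fast values bound that
to five seconds, at the cost of giving up on data the drive might still have
recovered.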
> It may be perfectly reasonable to ask the lower layers to 'fail fast' and
> have either a hard or a soft deadline on the I/O for a subset of I/O. A hard
> deadline would return ETIMEDOUT or something when it's passed and cancel the
> I/O. This gives better determinism in the system, but some systems can't
> cancel just one I/O (like SATA drives), so we have to flush the whole queue.
> If we get a lot of these, performance suffers. However, for some classes of
> drives, you know that if a request doesn't succeed within one second of
> being submitted to the drive, it's unlikely to complete successfully, and
> the performance hit is worth it on a drive that's already acting up.
>
> You could have a soft timeout, which says 'once X time has elapsed, don't
> take any additional recovery action and report back on this I/O'. This is
> similar to the hard timeout, but just stops retrying after the deadline has
> passed. This scenario is better for the other users of the drive, assuming
> that the read-recovery operations aren't starving them. It's also easier to
> implement, but has worse worst-case performance characteristics.
>
> You aren't asking to limit retries. You're really asking the I/O subsystem
> to limit, where it can, the amount of time spent on an I/O so you can try
> another one. Your means of doing this is to tell it not to retry. That's the
> wrong means. It shouldn't be described in the API as a 'NO RETRY' request.
> It should be a QoS request flag: fail fast.

I completely agree. 'NO RETRY' was a bad name, and now I see that with
painful clarity. Just to clarify: I agree not only on the name, but also on
everything else you said above.

> Part of why I'm being so difficult is that you don't understand this and are
> proposing a horrible API. It should have a different name.

I completely agree.

> The other reason is that I absolutely do not want to overload EIO. You must
> return a different error back up the stack. You've shown no interest in this
> in the past, which is also a needless argument. We've given good reasons,
> and you've pooh-poohed them with bad arguments.

I still honestly don't understand this. I think that bio_error and bio_flags
are sufficient to properly interpret the "fail-fast EIO". And I never
intended for that error to be propagated by any means other than in
bio_error.

> Also, this isn't the data I asked for. I know things can fail slowly. I was
> asking how it would improve systems running like this. As in "I implemented
> it, and was able to fail over to this other drive faster" or something like
> that. Actual drive-failure scenarios vary widely, and optimizing for this
> one failure is unwise. It may be the right optimization, but it may not be.
> There are lots of tricky edges in this space.

Well, I implemented my quick hack (as you absolutely correctly characterized
it) in response to something that I observed happening in the past and that
hasn't happened to me since then. But, realistically, I do not expect to be
able to reproduce and test every tricky failure scenario.

-- 
Andriy Gapon
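
P.S. To make the bio_error plus bio_flags point concrete: the sketch below is
purely hypothetical, not my actual patch. BIO_QOS_FAILFAST is a made-up flag
name, and struct bio here is a minimal stand-in for the real one in
sys/bio.h, defined only so the example compiles on its own. It shows how a
consumer could tell a fast-failed EIO from an ordinary one without any new
errno ever travelling up the stack.

/*
 * Purely hypothetical sketch, not the actual patch.  BIO_QOS_FAILFAST is
 * a made-up flag name, and struct bio is a minimal stand-in for the real
 * one in sys/bio.h, defined only so this compiles on its own.
 */
#include <errno.h>
#include <stdio.h>

#define	BIO_ERROR		0x01	/* completion carried an error */
#define	BIO_QOS_FAILFAST	0x02	/* hypothetical QoS flag */

struct bio {
	int	bio_flags;
	int	bio_error;
};

static void
consumer_done(struct bio *bp)
{
	if ((bp->bio_flags & BIO_ERROR) == 0) {
		printf("success\n");
		return;
	}
	if (bp->bio_error == EIO &&
	    (bp->bio_flags & BIO_QOS_FAILFAST) != 0) {
		/*
		 * The lower layers gave up early on this request; retry
		 * on another path (e.g. a different mirror side) instead
		 * of treating the drive as dead.
		 */
		printf("fast-failed EIO: redirect to another vdev\n");
		return;
	}
	printf("hard error %d: genuine failure\n", bp->bio_error);
}

int
main(void)
{
	struct bio bp;

	bp.bio_flags = BIO_ERROR | BIO_QOS_FAILFAST;
	bp.bio_error = EIO;
	consumer_done(&bp);
	return (0);
}

The point being that the flag rides alongside the error in the bio itself,
so nothing above the GEOM consumer ever sees anything but a plain EIO.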