From: Chuck Silvers <chs@FreeBSD.org>
Date: Sat, 6 Jun 2020 00:48:00 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r361855 - head/sys/vm
Message-Id: <202006060048.0560m09x028351@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: chs
X-SVN-Commit-Paths: head/sys/vm
X-SVN-Commit-Revision: 361855
X-SVN-Commit-Repository: base
Author: chs
Date: Sat Jun  6 00:47:59 2020
New Revision: 361855
URL: https://svnweb.freebsd.org/changeset/base/361855

Log:
  Don't mark pages as valid if reading the contents from disk fails.
  Instead, just skip marking pages valid if the read fails.  Future
  attempts to access such pages will notice that they are not marked
  valid and try to read them from disk again.

  Reviewed by:	kib, markj
  Sponsored by:	Netflix
  Differential Revision:	https://reviews.freebsd.org/D25138

Modified:
  head/sys/vm/vnode_pager.c

Modified: head/sys/vm/vnode_pager.c
==============================================================================
--- head/sys/vm/vnode_pager.c	Sat Jun  6 00:40:02 2020	(r361854)
+++ head/sys/vm/vnode_pager.c	Sat Jun  6 00:47:59 2020	(r361855)
@@ -1150,28 +1150,30 @@ vnode_pager_generic_getpages_done(struct buf *bp)
 		if (mt == bogus_page)
 			continue;
-		if (nextoff <= object->un_pager.vnp.vnp_size) {
-			/*
-			 * Read filled up entire page.
-			 */
-			vm_page_valid(mt);
-			KASSERT(mt->dirty == 0,
-			    ("%s: page %p is dirty", __func__, mt));
-			KASSERT(!pmap_page_is_mapped(mt),
-			    ("%s: page %p is mapped", __func__, mt));
-		} else {
-			/*
-			 * Read did not fill up entire page.
-			 *
-			 * Currently we do not set the entire page valid,
-			 * we just try to clear the piece that we couldn't
-			 * read.
-			 */
-			vm_page_set_valid_range(mt, 0,
-			    object->un_pager.vnp.vnp_size - tfoff);
-			KASSERT((mt->dirty & vm_page_bits(0,
-			    object->un_pager.vnp.vnp_size - tfoff)) == 0,
-			    ("%s: page %p is dirty", __func__, mt));
+		if (error == 0) {
+			if (nextoff <= object->un_pager.vnp.vnp_size) {
+				/*
+				 * Read filled up entire page.
+				 */
+				vm_page_valid(mt);
+				KASSERT(mt->dirty == 0,
+				    ("%s: page %p is dirty", __func__, mt));
+				KASSERT(!pmap_page_is_mapped(mt),
+				    ("%s: page %p is mapped", __func__, mt));
+			} else {
+				/*
+				 * Read did not fill up entire page.
+				 *
+				 * Currently we do not set the entire page
+				 * valid, we just try to clear the piece that
+				 * we couldn't read.
+				 */
+				vm_page_set_valid_range(mt, 0,
+				    object->un_pager.vnp.vnp_size - tfoff);
+				KASSERT((mt->dirty & vm_page_bits(0,
+				    object->un_pager.vnp.vnp_size - tfoff)) ==
+				    0, ("%s: page %p is dirty", __func__, mt));
+			}
 		}
 		if (i < bp->b_pgbefore || i >= bp->b_npages - bp->b_pgafter)
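The shape of the fix above can be sketched outside the kernel as a small, self-contained model: a read-completion routine that marks pages valid only when the I/O finished without error, so a failed read leaves the pages invalid and a later access retries the disk read. The struct and function names below are hypothetical stand-ins for the vm_page/buf machinery in vnode_pager.c, not actual kernel APIs.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a vm_page: tracks only validity. */
struct fake_page {
	bool valid;
};

/* Hypothetical stand-in for a struct buf covering several pages. */
struct fake_buf {
	struct fake_page *pages;
	size_t npages;
	int error;	/* 0 on successful I/O, an errno value otherwise */
};

/*
 * Completion handler.  Before r361855 the "mark valid" step ran
 * unconditionally; the fix wraps it in an (error == 0) check so that
 * pages from a failed read stay invalid and are re-read on next access.
 */
static void
fake_getpages_done(struct fake_buf *bp)
{
	for (size_t i = 0; i < bp->npages; i++) {
		if (bp->error == 0)
			bp->pages[i].valid = true;
		/* On error: leave the page invalid; do not mask the failure. */
	}
}
```

The key design point mirrored from the commit is that no "error" state is recorded on the page itself: invalidity alone is enough, because the existing fault path already treats an invalid page as "needs to be read from disk".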