From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 222929] ZFS ARC stats have wrong count
Date: Wed, 15 Nov 2017 06:56:24 +0000
X-Bugzilla-Who: allanjude@FreeBSD.org
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 11.1-RELEASE
List-Id: Filesystems <freebsd-fs.freebsd.org>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222929
--- Comment #7 from Allan Jude ---
I have confirmed the behavior you are seeing, but also confirmed that it is not a major bug. If you run:

zpool iostat 1

you'll see that running your ruby script does not actually result in any reads from the disk.

There is a small issue with the stats accounting in ZFS: if the metadata being read happens to be stored in an "Embedded Block Pointer" (instead of being stored as a separate data block, the data is embedded in the parent block pointer, which saves an entire sector and the I/O required to read it), then the read is incorrectly counted as a miss. This happens because reading an embedded block pointer still creates a read zio and goes through most of the normal read path, but then copies the data out of the block pointer instead of reading it from disk.

Anyway, I am investigating a quick fix to account for this as a cache hit instead of a cache miss.

I am also looking at whether it would be relatively simple to optimize this case and return the data more directly in arc_read() instead of creating a zio and taking the current, more complicated path. That path mostly exists so that other functions do not need to know about the embedded block pointer feature.

-- 
You are receiving this mail because:
You are the assignee for the bug.