From: bugzilla-noreply@freebsd.org
To: fs@FreeBSD.org
Subject: [Bug 237807] ZFS: ZVOL writes fast, ZVOL reads abysmal...
Date: Fri, 09 Aug 2019 12:44:57 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237807

--- Comment #10 from Nils Beyer <nbe@renzel.net> ---
Maybe I'm too stupid, I don't know; I can't get the pool to go fast...

Created the pool from scratch. Updated to the latest 12-STABLE.
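For reference, the pool below was recreated roughly like this (a sketch only -
the raidz1 groupings are copied from the zpool status output further down, the
exact original command line is an assumption):

--------------------------------------------------------------------------------
# sketch: eight 3-disk raidz1 top-level vdevs, grouping taken from zpool status
zpool create veeam-backups \
        raidz1 da0  da1  da2  \
        raidz1 da4  da5  da7  \
        raidz1 da9  da14 da17 \
        raidz1 da18 da21 da22 \
        raidz1 da6  da15 da16 \
        raidz1 da11 da8  da3  \
        raidz1 da23 da20 da19 \
        raidz1 da10 da12 da13
--------------------------------------------------------------------------------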
But reads from that pool are still abysmal. Current pool layout:

--------------------------------------------------------------------------------
        NAME             STATE     READ WRITE CKSUM
        veeam-backups    ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            da0          ONLINE       0     0     0
            da1          ONLINE       0     0     0
            da2          ONLINE       0     0     0
          raidz1-1       ONLINE       0     0     0
            da4          ONLINE       0     0     0
            da5          ONLINE       0     0     0
            da7          ONLINE       0     0     0
          raidz1-2       ONLINE       0     0     0
            da9          ONLINE       0     0     0
            da14         ONLINE       0     0     0
            da17         ONLINE       0     0     0
          raidz1-3       ONLINE       0     0     0
            da18         ONLINE       0     0     0
            da21         ONLINE       0     0     0
            da22         ONLINE       0     0     0
          raidz1-4       ONLINE       0     0     0
            da6          ONLINE       0     0     0
            da15         ONLINE       0     0     0
            da16         ONLINE       0     0     0
          raidz1-5       ONLINE       0     0     0
            da11         ONLINE       0     0     0
            da8          ONLINE       0     0     0
            da3          ONLINE       0     0     0
          raidz1-6       ONLINE       0     0     0
            da23         ONLINE       0     0     0
            da20         ONLINE       0     0     0
            da19         ONLINE       0     0     0
          raidz1-7       ONLINE       0     0     0
            da10         ONLINE       0     0     0
            da12         ONLINE       0     0     0
            da13         ONLINE       0     0     0

errors: No known data errors
--------------------------------------------------------------------------------

Used bonnie++:

--------------------------------------------------------------------------------
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
veeambackups.local 64G  141  99 471829  59 122365  23     5   8 40084   8  1016  19
Latency             61947us     348ms     618ms    1634ms     105ms     190ms
Version  1.97       ------Sequential Create------ --------Random Create--------
veeambackups.local  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 17378  58 +++++ +++ +++++ +++ 32079  99
Latency              2424us      44us     388ms    2295us      36us      91us
1.97,1.97,veeambackups.local,1,1565375578,64G,,141,99,471829,59,122365,23,5,8,40084,8,1016,19,16,,,,,+++++,+++,+++++,+++,17378,58,+++++,+++,+++++,+++,32079,99,61947us,348ms,618ms,1634ms,105ms,190ms,2424us,44us,388ms,2295us,36us,91us
--------------------------------------------------------------------------------

Tested locally; no iSCSI, no NFS. "gstat" tells me that the hard disks are only
15% busy. CPU load averages: 0.51, 0.47, 0.39. ZFS recordsize is the default
128k. Maybe too many top-level VDEVs? Maybe the HBA sucks for ZFS?

A simple parallel dd using:

--------------------------------------------------------------------------------
for NR in `jot 24 0`; do
        dd if=/dev/da${NR} of=/dev/null bs=1M count=1k &
done
--------------------------------------------------------------------------------

delivers 90MB/s for each of the 24 drives during the run, which works out to
90*24 = 2160MB/s total. That should be plenty for the pool.

I'm really out of ideas, apart from trying 13-CURRENT or FreeNAS or Linux or
the like - which I'd like to avoid...

Needless to say, read performance via NFS or iSCSI is still pathetic, which
makes the current setup unusable as an ESXi datastore and makes me afraid of
future restore jobs in the TB range...
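For comparison, a local sequential read straight from the ZVOL device node
(bypassing NFS and iSCSI entirely) would look roughly like this - the volume
name "esx-datastore" is only a placeholder, the real ZVOL name isn't shown in
this report:

--------------------------------------------------------------------------------
# read 10GB sequentially from the ZVOL's device node, no NFS/iSCSI involved
# "esx-datastore" is a placeholder name for the actual ZVOL
dd if=/dev/zvol/veeam-backups/esx-datastore of=/dev/null bs=1M count=10k
--------------------------------------------------------------------------------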