From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 199574] trim load ssd on 100%
Date: Tue, 21 Apr 2015 09:11:41 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199574

            Bug ID: 199574
           Summary: trim load ssd on 100%
           Product: Base System
           Version: 10.1-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: bin
          Assignee: freebsd-bugs@FreeBSD.org
          Reporter: gs3men@gmail.com

FreeBSD 10.1-RELEASE-p8 amd64

Hello. I have found strange behavior in SSD TRIM performance. I have a
ZFS mirror on two SSD drives (ashift=12, 4k alignment). When I migrated
an SQL server to that storage, the SSDs became permanently busy, with an
I/O queue length of about 60, some amount of reads and writes, and 128
BIO_DELETE (TRIM) operations per second in the gstat statistics. After
much testing and googling I found the sysctl variable
vfs.zfs.vdev.trim_max_active (default 64), which limits the number of
active TRIM operations. The problem appears when ZFS continuously
refills the drive's queue with TRIM operations (twice per second). If I
raise vfs.zfs.vdev.trim_max_active to 1000, ZFS sends 2000 TRIM
operations per second to the drive, and the drive's IOPS and busy level
return to normal. If I set vfs.zfs.vdev.trim_max_active to a low value
such as 8, the device gets 16 BIO_DELETE operations per second and its
busy level reaches 100%. I also tried working with another partition on
the same drive (thinking this was a ZFS bug) and found that I/O there
suffers as well, so I concluded that ZFS is not at fault. I then tried
to determine how FreeBSD calculates busy levels and found that they come
from GEOM (geom_stats_open(), geom_stats_snapshot_next(),
geom_stats_snapshot_get(), ...).
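For reference, the busy percentage and BIO_DELETE rate discussed above
can be read from the same GEOM statistics interface that gstat(8) uses.
The following is a minimal illustrative sketch, not taken from gstat: it
assumes that two successive snapshots enumerate the same devstat slots
in the same order (i.e. no devices appeared or disappeared during the
interval), and it identifies providers only by their opaque GEOM id;
resolving provider names would additionally need geom_gettree() and
geom_lookupid(). Compile with "cc -o trimbusy trimbusy.c -lgeom -lm".

/*
 * Illustrative sketch: read the GEOM statistics that gstat(8) reports
 * and derive BIO_DELETE (TRIM) completions per second and %busy for
 * each provider over a one-second interval.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <sys/devicestat.h>

#include <libgeom.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	void *snap0, *snap1;
	struct devstat *d0, *d1;

	if (geom_stats_open() != 0) {
		perror("geom_stats_open");
		return (1);
	}
	snap0 = geom_stats_snapshot_get();
	sleep(1);			/* one-second sampling interval */
	snap1 = geom_stats_snapshot_get();
	if (snap0 == NULL || snap1 == NULL) {
		fprintf(stderr, "geom_stats_snapshot_get failed\n");
		return (1);
	}
	while ((d0 = geom_stats_snapshot_next(snap0)) != NULL &&
	    (d1 = geom_stats_snapshot_next(snap1)) != NULL) {
		if (d1->id == NULL)	/* unused slot */
			continue;
		/* BIO_DELETE completions during the interval. */
		uint64_t trims = d1->operations[DEVSTAT_FREE] -
		    d0->operations[DEVSTAT_FREE];
		/*
		 * busy_time is a struct bintime accumulating the time the
		 * provider had at least one request outstanding; its delta
		 * over the one-second interval is the %busy gstat prints.
		 */
		double busy = (double)(d1->busy_time.sec -
		    d0->busy_time.sec) +
		    ldexp((double)d1->busy_time.frac, -64) -
		    ldexp((double)d0->busy_time.frac, -64);
		printf("id %p: %ju trims/s, %.0f%% busy\n",
		    d1->id, (uintmax_t)trims, busy * 100.0);
	}
	geom_stats_snapshot_free(snap0);
	geom_stats_snapshot_free(snap1);
	geom_stats_close();
	return (0);
}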
So when the device performs 16 TRIMs per second, it is 100% busy,
latency is high, and IOPS are slow; when it performs 2000 or more TRIMs
per second, the device is free, the queue is empty, and latency is
great. So what is this: bug or feature? I also checked the device's TRIM
performance through UFS: my SSD can perform about 7000 TRIM operations
per second with a 64k block size (and more with smaller block sizes). So
ZFS TRIM (with 128k blocks) can probably sustain around 3000 TRIMs per
second, but I cannot generate enough TRIM activity to find the exact
value.
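For reference, TRIM load of the kind measured through UFS above can also
be generated without any filesystem by submitting BIO_DELETE requests
straight to a device. The sketch below is illustrative only and assumes
the DIOCGDELETE ioctl, which takes an {offset, length} pair and is
destructive (it discards the data in that range); /dev/ada9, the 64k
request size, and the 10-second run time are placeholders. Compile with
"cc -o trimbench trimbench.c".

/*
 * Illustrative sketch: issue fixed-size BIO_DELETE (TRIM) requests to a
 * device via DIOCGDELETE and report the achieved rate.
 * WARNING: destroys data on the target device.
 */
#include <sys/types.h>
#include <sys/disk.h>
#include <sys/ioctl.h>

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int
main(void)
{
	off_t arg[2];			/* { offset, length } */
	off_t off = 0, mediasize;
	const off_t len = 64 * 1024;	/* placeholder request size */
	int n = 0;
	time_t start, now;
	int fd = open("/dev/ada9", O_RDWR);	/* placeholder device */

	if (fd < 0) {
		perror("open");
		return (1);
	}
	/* Stay within the device: wrap the offset at the media size. */
	if (ioctl(fd, DIOCGMEDIASIZE, &mediasize) != 0) {
		perror("DIOCGMEDIASIZE");
		return (1);
	}
	start = time(NULL);
	do {
		if (off + len > mediasize)
			off = 0;
		arg[0] = off;
		arg[1] = len;
		if (ioctl(fd, DIOCGDELETE, arg) != 0) {
			perror("DIOCGDELETE");
			return (1);
		}
		off += len;
		n++;
		now = time(NULL);
	} while (now - start < 10);	/* run for ~10 seconds */
	printf("%d deletes in %ld s: ~%ld trims/s\n",
	    n, (long)(now - start), (long)(n / (now - start)));
	close(fd);
	return (0);
}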