From owner-freebsd-bugs@FreeBSD.ORG Mon Apr 20 08:02:39 2015
From: Семён Семикин
Date: Mon, 20 Apr 2015 11:02:11 +0300
To: freebsd-bugs@freebsd.org
Subject: trim load ssd on 100% zfs or geom bug?

Hello.

I have found some strange behavior in SSD TRIM performance. I run a ZFS mirror on two SSD drives (ashift=12, 4k alignment). When I migrated an SQL server onto that storage, the SSDs became constantly busy, with an I/O queue length of about 60, a modest number of reads and writes, and 128 bio_delete (TRIM) operations showing in the gstat statistics.

After many tests and a lot of googling I found the sysctl variable vfs.zfs.vdev.trim_max_active, whose default value of 64 limits the number of active TRIM operations. The problem appears when ZFS continuously fills the drive's queue with TRIM operations (in batches, twice per second). If I raise vfs.zfs.vdev.trim_max_active to 1000, ZFS sends 2000 TRIM operations per second to the drive, and the drive's IOPS and busy level return to normal. If I set vfs.zfs.vdev.trim_max_active to a low value such as 8, the device receives 16 bio_delete operations per second and its busy level goes to 100%.

I then tried the same kind of operation on another partition of the same drive (thinking this was a ZFS bug) and found that it suffers in exactly the same way, so I concluded that ZFS is not at fault. I also tried to work out how FreeBSD calculates the busy level, and found that the data comes from GEOM (geom_stats_open, geom_stats_snapshot_next, geom_stats_snapshot_get, ...).
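As far as I can tell, gstat derives its busy percentage from two such GEOM snapshots and lets libdevstat do the math. Below is a minimal, untested sketch of that approach (my reading of how it works, not code taken from gstat itself). Walking the two snapshots in lock step assumes no providers appeared or disappeared in between (gstat matches entries by the id field instead), and mapping the id to a provider name would additionally need geom_gettree()/geom_lookupid(). Build with -lgeom -ldevstat.

#include <sys/types.h>
#include <sys/devicestat.h>

#include <devstat.h>
#include <libgeom.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	void *snap_prev, *snap_cur;
	struct devstat *cur, *prev;
	struct timespec t0, t1;
	long double etime, busy;

	if (geom_stats_open() != 0) {
		perror("geom_stats_open");
		return (1);
	}

	/* Two snapshots of the kernel's GEOM statistics, one second apart. */
	snap_prev = geom_stats_snapshot_get();
	sleep(1);
	snap_cur = geom_stats_snapshot_get();

	geom_stats_snapshot_timestamp(snap_prev, &t0);
	geom_stats_snapshot_timestamp(snap_cur, &t1);
	etime = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;

	/*
	 * Walk both snapshots in lock step.  This assumes the set of
	 * providers did not change between the two samples.
	 */
	geom_stats_snapshot_reset(snap_prev);
	geom_stats_snapshot_reset(snap_cur);
	while ((cur = geom_stats_snapshot_next(snap_cur)) != NULL &&
	    (prev = geom_stats_snapshot_next(snap_prev)) != NULL) {
		if (cur->id == NULL)	/* unused slot */
			continue;
		devstat_compute_statistics(cur, prev, etime,
		    DSM_BUSY_PCT, &busy, DSM_NONE);
		printf("provider id %p: %.1Lf%% busy\n",
		    (void *)(uintptr_t)cur->id, busy);
	}

	geom_stats_snapshot_free(snap_prev);
	geom_stats_snapshot_free(snap_cur);
	geom_stats_close();
	return (0);
}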
So when the device performs 16 TRIMs per second it is 100% busy, latency is high and IOPS are low; when it performs 2000 or more TRIMs per second, the device shows as free, the queue is empty, and latency is great. So what is this, a bug or a feature?

I also checked the device's TRIM performance through UFS: my SSD can perform about 7000 TRIM operations per second with a 64k block size (and more with smaller blocks). ZFS TRIM (which uses 128k) should therefore probably manage around 3000 TRIMs per second, but I cannot generate enough TRIM activity to find the exact value.

FreeBSD 10.1-RELEASE-p8

Regards,
Semen
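P.S. If anyone wants to reproduce the raw TRIM measurement without going through UFS, something like the following should work (an untested sketch; /dev/ada0p9 is only a placeholder for an unused scratch partition of at least 640 MB, and DIOCGDELETE discards the data in the ranges it touches, so do not point it at anything you care about).

#include <sys/types.h>
#include <sys/disk.h>
#include <sys/ioctl.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	/* Placeholder device: an UNUSED scratch partition of >= 640 MB. */
	const char *dev = argc > 1 ? argv[1] : "/dev/ada0p9";
	const off_t blksz = 64 * 1024;	/* 64k per delete, as in the UFS test */
	const int count = 10000;
	off_t arg[2];
	struct timespec t0, t1;
	double secs;
	int fd, i;

	fd = open(dev, O_RDWR);
	if (fd < 0)
		err(1, "open %s", dev);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < count; i++) {
		arg[0] = (off_t)i * blksz;	/* offset */
		arg[1] = blksz;			/* length */
		/* DIOCGDELETE issues BIO_DELETE (TRIM) on the given range. */
		if (ioctl(fd, DIOCGDELETE, arg) < 0)
			err(1, "DIOCGDELETE at offset %jd", (intmax_t)arg[0]);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%d x 64k deletes in %.2f s = %.0f deletes/s\n",
	    count, secs, count / secs);

	close(fd);
	return (0);
}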