From: Adam McDougall
Date: Thu, 18 Dec 2014 12:46:30 -0500
To: freebsd-stable@freebsd.org
Subject: Re: geom_multipath and zfs doesn't work

On 12/18/2014 12:04, Nagy, Attila wrote:
> Hi,
>
> Running stable/10@r273159 on FC disks (through isp), I can't create a
> zfs pool.
>
> What I have is a simple device, accessible as /dev/multipath/sas0,
> backed by da0 and da4:
>
> # gmultipath status
>           Name   Status   Components
> multipath/sas0   OPTIMAL  da0 (ACTIVE)
>                           da4 (READ)
>
> When I issue a
>
> zpool create data /dev/multipath/sas0
>
> command, zpool starts to eat 100% CPU:
>
> # procstat -k 3924
>   PID    TID COMM             TDNAME           KSTACK
>  3924 100128 zpool            -
>
> gstat shows that there is one uncompleted read I/O on multipath/sas0
> and the queue length constantly grows on da0:
>
> # gstat -b
> dT: 1.030s  w: 1.000s
>    L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>  124402    146      0      0    0.0      0      0    0.0   100.5 da0
>
> I can use these devices fine with dd.
>
> What's going on here?

I have a hunch. Try sysctl vfs.zfs.vdev.trim_on_init=0 before the zpool
create?
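
Untested, and only a sketch based on that hunch (it assumes the stable/10
default of vfs.zfs.vdev.trim_on_init=1, which makes ZFS TRIM the whole
vdev at pool-creation time), but the full sequence would be:

# sysctl vfs.zfs.vdev.trim_on_init=0
# zpool create data /dev/multipath/sas0
# sysctl vfs.zfs.vdev.trim_on_init=1

If the create succeeds with trim_on_init=0, that would point at the
multipath provider mishandling the BIO_DELETE requests TRIM generates,
rather than at ZFS itself.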