From owner-freebsd-fs@FreeBSD.ORG Thu Mar 14 01:20:30 2013
Date: Wed, 13 Mar 2013 21:20:28 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: John Baldwin
Cc: Rick Macklem, fs@freebsd.org
Subject: Re: Deadlock in the NFS client
Message-ID: <1040319431.3883577.1363224028494.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <201303131356.37919.jhb@freebsd.org>
List-Id: Filesystems <freebsd-fs@freebsd.org>

I wrote:
> Does having a large # of nfsiod threads cause any serious problem for
> most systems these days?
> > I'd be tempted to recode nfs_asyncio() as above and then, instead of
> > nfs_iodmin and nfs_iodmax, I'd simply have:
> > - a fixed number of nfsiod threads (this could be a tunable, with the
> >   understanding that it should be large for good performance)

I'm probably getting ahead of myself here, since changing nfs_asyncio()
may or may not fix the deadlock, but I thought I'd comment further on the
above.

It may be possible to add a new nfs_iod_target (the desired # of nfsiod
threads) and adjust it dynamically, based on the ratio of the # of times
nfs_asyncio() returns EIO to the # of times it returns 0:
--> when there are too many EIO returns, increase nfs_iod_target
--> when there are very few EIO returns, decrease nfs_iod_target

- Use nfs_iodmin and nfs_iodmax as the limits for nfs_iod_target, set the
  default nfs_iodmax much larger than it currently is, and set the default
  nfs_iod_target to what nfs_iodmax currently defaults to.

rick
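The adjustment policy described above could be sketched roughly as follows. This is only an illustration of the idea, not actual FreeBSD kernel code: the function name adjust_iod_target, the counter arguments, and the 10%/1% thresholds and step sizes are all hypothetical choices, and a real implementation would live in the nfsiod code and read the counters under the appropriate lock.

```c
/*
 * Hypothetical sketch of the proposed nfs_iod_target feedback loop.
 * Callers would tally, over some sampling interval, how many times
 * nfs_asyncio() returned EIO (no nfsiod available) versus 0 (an
 * nfsiod picked up the request), then call this to recompute the
 * desired number of nfsiod threads.
 */
static int
adjust_iod_target(int target, int iodmin, int iodmax,
    unsigned int eio_cnt, unsigned int ok_cnt)
{
	unsigned int total = eio_cnt + ok_cnt;

	if (total == 0)
		return (target);	/* no samples, leave it alone */

	if (eio_cnt * 10 > total) {
		/* More than ~10% EIO returns: too few threads, grow. */
		target += 4;
	} else if (eio_cnt * 100 < total) {
		/* Fewer than ~1% EIO returns: plenty of threads, shrink. */
		target -= 1;
	}

	/* Clamp to the administrator-set limits. */
	if (target < iodmin)
		target = iodmin;
	if (target > iodmax)
		target = iodmax;
	return (target);
}
```

Under this sketch the counters would be reset after each adjustment, so the target tracks recent demand rather than the lifetime average, and the asymmetric step sizes (+4 vs. -1) grow quickly under load but decay slowly, which matches the suggestion that the thread count should err on the large side for good performance.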