From: Steven Hartland
To: freebsd-gnats-submit@FreeBSD.org
Date: Sat, 22 Oct 2011 13:04:58 GMT
Subject: misc/161897: zfs partition probing causing long delay at BTX loader

>Number:         161897
>Category:       misc
>Synopsis:       zfs partition probing causing long delay at BTX loader
>Confidential:   no
>Severity:       non-critical
>Priority:       medium
>Responsible:    freebsd-bugs
>State:          open
>Quarter:
>Keywords:
>Date-Required:
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Sat Oct 22 13:10:09 UTC 2011
>Closed-Date:
>Last-Modified:
>Originator:     Steven Hartland
>Release:        8.2-RELEASE
>Organization:   Multiplay
>Environment:
FreeBSD loncore0.multiplay.co.uk 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Fri Mar 18 10:58:44 UTC 2011 root@bigcore0.multiplay.co.uk:/usr/obj/usr/src/sys/MULTIPLAY amd64
>Description:
While installing a new machine here which has 10+ disks, we're seeing the BTX loader take 50+ seconds to enumerate the disks.

After doing some digging I found the following thread on the forums, which hinted that r198420 may be the cause:
http://forums.freebsd.org/showthread.php?t=12705

With a quick change to zfs.c reverting the 128-partition probe back to 4, BTX completes instantly like it used to.
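To put the cost in perspective, below is a small standalone illustration (not loader code; the 10-disk count and the loop shape are assumptions mirroring the probe loop in sys/boot/zfs/zfs.c) of how many open() attempts the loader ends up making when probing 4 slices per disk versus 128. On a BIOS machine each failed open() of a nonexistent partition still costs disk I/O, which is presumably where the 50+ seconds go.

/*
 * Standalone illustration only -- not loader code.  The loop shape mirrors
 * the probe loop in sys/boot/zfs/zfs.c; the 10-disk count is just an
 * example matching the machine described above.
 */
#include <stdio.h>

#define NDISKS		10	/* example: machine with 10+ disks */
#define MBR_SLICES	4	/* pre-r198420 limit */
#define GPT_SLICES	128	/* post-r198420 limit */

static int
count_probes(int ndisks, int maxslice)
{
	int unit, slice, probes = 0;

	for (unit = 0; unit < ndisks; unit++) {
		probes++;			/* whole-disk probe, "disk%d:" */
		for (slice = 1; slice <= maxslice; slice++)
			probes++;		/* "disk%dp%d:" open() attempt */
	}
	return (probes);
}

int
main(void)
{
	printf("open() attempts with 4 slices:   %d\n",
	    count_probes(NDISKS, MBR_SLICES));
	printf("open() attempts with 128 slices: %d\n",
	    count_probes(NDISKS, GPT_SLICES));
	return (0);
}

With these example numbers that's 50 probe attempts versus 1290, i.e. roughly 25x more partition lookups for the BIOS to service.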
The svn commit which introduced this delay is:
http://svnweb.freebsd.org/base?view=revision&revision=198420

The specific file in that changeset:
http://svnweb.freebsd.org/base/head/sys/boot/zfs/zfs.c?r1=198420&r2=198419&pathrev=198420

So the questions are:

1. Can this be optimised so it doesn't have to test all of the possible 128 GPT partitions? (A rough sketch of one possible approach is appended after the patch below.)

2. If an optimisation isn't possible or is too complex to achieve, would it be better to have the partition limit defined as an option which can be increased if needed, as the attached patch against 8.2-RELEASE does? I suspect 99.99%, if not 100%, of users won't be making use of more than 4 partitions even with GPT.

>How-To-Repeat:
Boot a machine with a large number of disks attached using zfs.
>Fix:
Reduce the number of probed partitions from the GPT max of 128 back to the MBR max of 4 "by default", as done by the attached patch.

Patch attached with submission follows:

--- sys/boot/zfs/zfs.c.orig	2011-10-20 18:15:29.966685430 +0000
+++ sys/boot/zfs/zfs.c	2011-10-20 18:18:22.291033636 +0000
@@ -45,6 +45,12 @@
 
 #include "zfsimpl.c"
 
+/*
+ * For GPT this should be 128 but leads to 50+ second delay in BTX loader so
+ * we use the original 4 pre r198420 by default for the boot process
+ */
+#define ZFS_MAX_SLICES 4
+
 static int zfs_open(const char *path, struct open_file *f);
 static int zfs_write(struct open_file *f, void *buf, size_t size, size_t *resid);
 static int zfs_close(struct open_file *f);
@@ -415,7 +421,7 @@
 	if (vdev_probe(vdev_read, (void*) (uintptr_t) fd, 0))
 		close(fd);
 
-	for (slice = 1; slice <= 128; slice++) {
+	for (slice = 1; slice <= ZFS_MAX_SLICES; slice++) {
 		sprintf(devname, "disk%dp%d:", unit, slice);
 		fd = open(devname, O_RDONLY);
 		if (fd == -1) {

>Release-Note:
>Audit-Trail:
>Unformatted:
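Regarding question 1, one possible direction, given here only as a rough sketch (it is not written against any real loader API: the readfn() callback, the simplified structure layout, the fixed 512-byte sector size and the little-endian assumption are all mine), would be to read the GPT header and partition entry array directly and then probe only the slices whose entries are actually in use, rather than open()ing all 128 possible partition names per disk:

/*
 * Rough sketch only -- hypothetical helper, not existing loader code.
 * Assumes 512-byte sectors, a little-endian (x86) machine and a caller
 * supplied readfn() that reads one whole sector from the disk being probed.
 */
#include <stdint.h>
#include <string.h>

#define	SECSZ		512
#define	GPT_HDR_SIG	"EFI PART"

struct gpt_hdr {			/* simplified on-disk GPT header layout */
	char		hdr_sig[8];	/* "EFI PART" */
	uint8_t		hdr_skip[64];	/* revision .. disk GUID (not needed here) */
	uint64_t	hdr_lba_table;	/* first LBA of the partition entry array */
	uint32_t	hdr_entries;	/* number of entries in the array */
	uint32_t	hdr_entsz;	/* size of each entry in bytes */
};

/*
 * Return the highest in-use GPT partition number (1-based), or 0 if the
 * disk has no usable GPT.  An entry whose partition type GUID is all
 * zeroes is unused, so the probe loop only needs to run up to the value
 * returned here instead of a fixed 128.
 */
static int
gpt_last_used_slice(int (*readfn)(uint64_t lba, void *buf))
{
	static uint8_t buf[SECSZ];
	static const uint8_t unused_guid[16];	/* all zeroes */
	struct gpt_hdr hdr;
	uint32_t i, per_sec, last = 0;

	if (readfn(1, buf) != 0)		/* primary GPT header is at LBA 1 */
		return (0);
	memcpy(&hdr, buf, sizeof(hdr));
	if (memcmp(hdr.hdr_sig, GPT_HDR_SIG, 8) != 0)
		return (0);
	if (hdr.hdr_entsz == 0 || hdr.hdr_entsz > SECSZ)
		return (0);
	per_sec = SECSZ / hdr.hdr_entsz;	/* normally 512 / 128 = 4 */

	for (i = 0; i < hdr.hdr_entries; i++) {
		if (i % per_sec == 0 &&
		    readfn(hdr.hdr_lba_table + i / per_sec, buf) != 0)
			break;
		/* first 16 bytes of an entry are its partition type GUID */
		if (memcmp(buf + (i % per_sec) * hdr.hdr_entsz,
		    unused_guid, sizeof(unused_guid)) != 0)
			last = i + 1;		/* GPT slices are 1-based */
	}
	return (last);
}

The zfs_probe loop could then run for (slice = 1; slice <= last; slice++), which for a typical layout is only a handful of iterations even though hdr_entries itself is normally still 128. A real implementation would also want to verify the header and entry-array CRCs and sanity-check hdr_entries. Question 2 could be handled independently of this by wrapping the patch's #define ZFS_MAX_SLICES in an #ifndef guard so the limit can be raised at build time without editing the source.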