Date:      Mon, 17 May 2010 19:36:39 +0400
From:      Alex Bakhtin <alex.bakhtin@gmail.com>
To:        pjd@freebsd.org
Cc:        freebsd-fs@freebsd.org, bug-followup@freebsd.org
Subject:   Re: kern/145339: [zfs] deadlock after detaching block device from  raidz pool
Message-ID:  <AANLkTikuzsafif7z7DbNL9PrJu4Gdcc8UucN3U3LSe5i@mail.gmail.com>
In-Reply-To: <201005130934.o4D9YiJL039462@freefall.freebsd.org>
References:  <201005130934.o4D9YiJL039462@freefall.freebsd.org>

Pawel,

   I tested your patch with the following ZFS configurations (all on
five 2 TB WD20EARS drives); a rough sketch of the corresponding
commands follows the list:

1. raidz1 on top of physical disks.
2. raidz1 on top of geli.
3. raidz2 on top of physical disks.
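
   Roughly, the three setups correspond to something like the
following commands (pool and device names are illustrative, not
necessarily the exact ones I used):

    # 1. raidz1 on top of the raw disks
    zpool create tank raidz1 ada1 ada2 ada3 ada4 ada5

    # 2. raidz1 on top of geli providers
    geli init /dev/ada1 && geli attach /dev/ada1   # repeat for ada2..ada5
    zpool create tank raidz1 ada1.eli ada2.eli ada3.eli ada4.eli ada5.eli

    # 3. raidz2 on top of the raw disks
    zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5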

   In all three cases it looks like the problem is fixed - I can no
longer crash ZFS in vdev_geom by unplugging a disk.

   Unfortunately, three times I got a deadlock in ZFS after plugging
the vdevs back in under load. It happens several seconds after the
'zpool online' command. I'm not 100 percent sure that the deadlocks
are related to this patch, but... I'm going to do some additional
testing with patched and unpatched kernels.
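
   For clarity, the sequence that triggers it looks roughly like this
(pool/device names are illustrative, and the load is just an example):

    # keep the pool busy with writes
    dd if=/dev/zero of=/tank/testfile bs=1m &

    # pull one disk, wait until the vdev is reported REMOVED,
    # then reinsert it and bring it back online:
    zpool online tank ada3
    # several seconds after the online, ZFS deadlocks under the
    # ongoing load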

2010/5/13  <pjd@freebsd.org>:
> Synopsis: [zfs] deadlock after detaching block device from raidz pool
>
> State-Changed-From-To: open->feedback
> State-Changed-By: pjd
> State-Changed-When: Thu, 13 May 2010 09:33:20 UTC
> State-Changed-Why:
> Could you try this patch:
>
>        http://people.freebsd.org/~pjd/patches/vdev_geom.c.3.patch
>
> It is against the most recent HEAD. If it is rejected on 8-STABLE, just grab
> the entire vdev_geom.c from HEAD and apply the patch to that.
>
>
> Responsible-Changed-From-To: freebsd-fs->pjd
> Responsible-Changed-By: pjd
> Responsible-Changed-When: Thu, 13 May 2010 09:33:20 UTC
> Responsible-Changed-Why:
> I'll take this one.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=145339
>
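
   FWIW, applying the patch boils down to something like this (the
source path and patch invocation are illustrative; adjust to your
tree):

    cd /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs
    fetch http://people.freebsd.org/~pjd/patches/vdev_geom.c.3.patch
    patch < vdev_geom.c.3.patch
    # if the patch is rejected on 8-STABLE, replace vdev_geom.c with
    # the copy from HEAD first, then apply the patch again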


