From owner-freebsd-geom@FreeBSD.ORG Mon May  4 11:07:50 2009
Date: Mon, 4 May 2009 11:07:49 GMT
Message-Id: <200905041107.n44B7niJ098556@freefall.freebsd.org>
From: FreeBSD bugmaster
To: freebsd-geom@FreeBSD.org
Subject: Current problem reports assigned to freebsd-geom@FreeBSD.org
List-Id: GEOM-specific discussions and implementations

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker      Resp.      Description
--------------------------------------------------------------------------------
o kern/134113  geom       [geli] Problem setting secondary GELI key
o kern/134044  geom       [geom] gmirror(8) overwrites fs with stale data from r
o kern/133931  geom       [geli] [request] intentionally wrong password to destr
o kern/133604  geom       [gvinum] [panic] writing to gjournal on a gvinum volum
o bin/132845   geom       [geom] [patch] ggated(8) does not close files opened a
o kern/132273  geom       glabel(8): [patch] failing on journaled partition
o kern/132242  geom       [gmirror] gmirror.ko fails to fully initialize
o kern/131353  geom       [geom] gjournal(8) kernel lock
o kern/131037  geom       [geli] Unable to create disklabel on .eli-Device
o kern/130528  geom       gjournal fsck during boot
o kern/129674  geom       [geom] gjournal root did not mount on boot
o kern/129645  geom       gjournal(8): GEOM_JOURNAL causes system to fail to boo
o kern/129245  geom       [geom] gcache is more suitable for suffix based provid
o bin/128398   geom       [patch] glabel(8): teach geom_label to recognise gpt l
f kern/128276  geom       [gmirror] machine lock up when gmirror module is used
o kern/126902  geom       [geom] geom_label: kernel panic during install boot
o kern/124973  geom       [gjournal] [patch] boot order affects geom_journal con
o kern/124969  geom       gvinum(8): gvinum raid5 plex does not detect missing s
o kern/124294  geom       [geom] gmirror(8) have inappropriate logic when workin
o kern/124130  geom       [gmirror] [usb] gmirror fails to start usb devices tha
o kern/123962  geom       [panic] [gjournal] gjournal (455Gb data, 8Gb journal),
o kern/123630  geom       [patch] [gmirror] gmirror doesnt allow the original dr
o kern/123122  geom       [geom] GEOM / gjournal kernel lock
f kern/122415  geom       [geom] UFS labels are being constantly created and rem
o kern/122067  geom       [geom] [panic] Geom crashed during boot
o kern/121559  geom       [patch] [geom] geom label class allows to create inacc
o kern/121364  geom       [gmirror] Removing all providers create a "zombie" mir
o kern/120231  geom       [geom] GEOM_CONCAT error adding second drive
o kern/120044  geom       [msdosfs] [geom] incorrect MSDOSFS label fries adminis
o kern/120021  geom       [geom] [panic] net-p2p/qbittorrent crashes system when
o kern/119743  geom       [geom] geom label for cds is keeped after dismount and
o kern/115547  geom       [geom] [patch] [request] let GEOM Eli get password fro
o kern/114532  geom       [geom] GEOM_MIRROR shows up in kldstat even if compile
o kern/113957  geom       [gmirror] gmirror is intermittently reporting a degrad
o kern/113837  geom       [geom] unable to access 1024 sector size storage
o kern/113419  geom       [geom] geom fox multipathing not failing back
p bin/110705   geom       gmirror(8) control utility does not exit with correct
o kern/107707  geom       [geom] [patch] [request] add new class geom_xbox360 to
o kern/104389  geom       [geom] [patch] sys/geom/geom_dump.c doesn't encode XML
o kern/98034   geom       [geom] dereference of NULL pointer in acd_geom_detach
o kern/94632   geom       [geom] Kernel output resets input while GELI asks for
o kern/90582   geom       [geom] [panic] Restore cause panic string (ffs_blkfree
o bin/90093    geom       fdisk(8) incapable of altering in-core geometry
a kern/89660   geom       [vinum] [patch] [panic] due to g_malloc returning null
o kern/89546   geom       [geom] GEOM error
s kern/89102   geom       [geom] [panic] panic when forced unmount FS from unplu
o kern/87544   geom       [gbde] mmaping large files on a gbde filesystem deadlo
o kern/84556   geom       [geom] [panic] GBDE-encrypted swap causes panic at shu
o kern/79251   geom       [2TB] newfs fails on 2.6TB gbde device
o kern/79035   geom       [vinum] gvinum unable to create a striped set of mirro
o bin/78131    geom       gbde(8) "destroy" not working.
s kern/73177   geom       kldload geom_* causes panic due to memory exhaustion

52 problems total.

From owner-freebsd-geom@FreeBSD.ORG Wed May  6 19:40:02 2009
Date: Wed, 6 May 2009 19:40:02 GMT
Message-Id: <200905061940.n46Je2h7047170@freefall.freebsd.org>
From: dfilter@FreeBSD.ORG (dfilter service)
Reply-To: dfilter service
To: freebsd-geom@FreeBSD.org
Subject: Re: kern/133604: commit references a PR

The following reply was made to PR kern/133604; it has been noted by GNATS.

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Subject: Re: kern/133604: commit references a PR
Date: Wed, 6 May 2009 19:34:42 +0000 (UTC)

Author: lulf
Date: Wed May 6 19:34:32 2009
New Revision: 191856
URL: http://svn.freebsd.org/changeset/base/191856

Log:
  - Split up the BIO queue into a queue for new requests and one for
    completed requests. This is necessary for two reasons:
    1) In order to avoid collisions with the use of a BIO's flags set by a
       consumer or a provider.
    2) Because GV_BIO_DONE was used to mark a BIO as done, not enough flags
       were available, so the consumer flags of a BIO had to be misused in
       order to provide enough flags.
    The new queue makes it possible to recycle the GV_BIO_DONE flag into
    GV_BIO_GROW. As a consequence, gvinum will now work with any other GEOM
    class under it or on top of it.
  - Use bio_pflags for storing internal flags on downgoing BIOs, as the
    requests appear to come from a consumer of a gvinum volume. Use
    bio_cflags only for cloned BIOs.
  - Move gv_post_bio to be used internally for maintenance requests.
  - Remove some cases where flags were set without need.
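Before the diff, a minimal userland sketch of the split-queue idea may help:
the worker drains a "down" queue of new requests and an "up" queue of
completed ones, so membership in a queue, rather than a GV_BIO_DONE flag,
says whether a bio is finished. The names below (fake_bio, bioq_put,
bioq_take) are simplified stand-ins, not the GEOM API; the committed code
uses bioq_disksort() and bioq_takefirst() on sc->bqueue_down and
sc->bqueue_up under sc->bqueue_mtx.

/*
 * Minimal userland sketch (not the gvinum code itself) of the split-queue
 * idea from r191856: one queue for new ("down") requests and one for
 * completed ("up") requests, so the worker no longer needs a GV_BIO_DONE
 * flag to tell the two apart.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_bio {
	int		 id;
	struct fake_bio	*next;
};

struct bioq {
	struct fake_bio	*head;
	struct fake_bio	*tail;
};

static void
bioq_put(struct bioq *q, struct fake_bio *bp)
{
	bp->next = NULL;
	if (q->tail != NULL)
		q->tail->next = bp;
	else
		q->head = bp;
	q->tail = bp;
}

static struct fake_bio *
bioq_take(struct bioq *q)
{
	struct fake_bio *bp = q->head;

	if (bp != NULL) {
		q->head = bp->next;
		if (q->head == NULL)
			q->tail = NULL;
	}
	return (bp);
}

int
main(void)
{
	struct bioq down = { NULL, NULL };	/* like sc->bqueue_down */
	struct bioq up = { NULL, NULL };	/* like sc->bqueue_up */
	struct fake_bio *bp;
	int i;

	/* Producer side: gv_start() sorts new requests onto the down queue. */
	for (i = 0; i < 3; i++) {
		bp = calloc(1, sizeof(*bp));
		bp->id = i;
		bioq_put(&down, bp);
	}

	/* Worker: start new requests first... */
	while ((bp = bioq_take(&down)) != NULL) {
		printf("start request %d\n", bp->id);
		/* ...gv_done() later places the finished bio on 'up'. */
		bioq_put(&up, bp);
	}

	/* Then handle completions; queue membership replaces the DONE flag. */
	while ((bp = bioq_take(&up)) != NULL) {
		printf("complete request %d\n", bp->id);
		free(bp);
	}
	return (0);
}

The full commit diff follows.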
PR: kern/133604 Modified: head/sys/geom/vinum/geom_vinum.c head/sys/geom/vinum/geom_vinum.h head/sys/geom/vinum/geom_vinum_plex.c head/sys/geom/vinum/geom_vinum_raid5.c head/sys/geom/vinum/geom_vinum_var.h Modified: head/sys/geom/vinum/geom_vinum.c ============================================================================== --- head/sys/geom/vinum/geom_vinum.c Wed May 6 19:18:19 2009 (r191855) +++ head/sys/geom/vinum/geom_vinum.c Wed May 6 19:34:32 2009 (r191856) @@ -81,18 +81,6 @@ gv_orphan(struct g_consumer *cp) } void -gv_post_bio(struct gv_softc *sc, struct bio *bp) -{ - - KASSERT(sc != NULL, ("NULL sc")); - KASSERT(bp != NULL, ("NULL bp")); - mtx_lock(&sc->bqueue_mtx); - bioq_disksort(sc->bqueue, bp); - wakeup(sc); - mtx_unlock(&sc->bqueue_mtx); -} - -void gv_start(struct bio *bp) { struct g_geom *gp; @@ -111,8 +99,10 @@ gv_start(struct bio *bp) g_io_deliver(bp, EOPNOTSUPP); return; } - - gv_post_bio(sc, bp); + mtx_lock(&sc->bqueue_mtx); + bioq_disksort(sc->bqueue_down, bp); + wakeup(sc); + mtx_unlock(&sc->bqueue_mtx); } void @@ -125,9 +115,11 @@ gv_done(struct bio *bp) gp = bp->bio_from->geom; sc = gp->softc; - bp->bio_cflags |= GV_BIO_DONE; - gv_post_bio(sc, bp); + mtx_lock(&sc->bqueue_mtx); + bioq_disksort(sc->bqueue_up, bp); + wakeup(sc); + mtx_unlock(&sc->bqueue_mtx); } int @@ -179,8 +171,12 @@ gv_init(struct g_class *mp) gp->softc = g_malloc(sizeof(struct gv_softc), M_WAITOK | M_ZERO); sc = gp->softc; sc->geom = gp; - sc->bqueue = g_malloc(sizeof(struct bio_queue_head), M_WAITOK | M_ZERO); - bioq_init(sc->bqueue); + sc->bqueue_down = g_malloc(sizeof(struct bio_queue_head), + M_WAITOK | M_ZERO); + sc->bqueue_up = g_malloc(sizeof(struct bio_queue_head), + M_WAITOK | M_ZERO); + bioq_init(sc->bqueue_down); + bioq_init(sc->bqueue_up); LIST_INIT(&sc->drives); LIST_INIT(&sc->subdisks); LIST_INIT(&sc->plexes); @@ -969,7 +965,8 @@ gv_worker(void *arg) gv_cleanup(sc); mtx_destroy(&sc->bqueue_mtx); mtx_destroy(&sc->equeue_mtx); - g_free(sc->bqueue); + g_free(sc->bqueue_down); + g_free(sc->bqueue_up); g_free(sc); kproc_exit(ENXIO); break; /* not reached */ @@ -984,38 +981,40 @@ gv_worker(void *arg) /* ... then do I/O processing. */ mtx_lock(&sc->bqueue_mtx); - bp = bioq_takefirst(sc->bqueue); + /* First do new requests. */ + bp = bioq_takefirst(sc->bqueue_down); + if (bp != NULL) { + mtx_unlock(&sc->bqueue_mtx); + /* A bio that interfered with another bio. */ + if (bp->bio_pflags & GV_BIO_ONHOLD) { + s = bp->bio_caller1; + p = s->plex_sc; + /* Is it still locked out? */ + if (gv_stripe_active(p, bp)) { + /* Park the bio on the waiting queue. */ + bioq_disksort(p->wqueue, bp); + } else { + bp->bio_pflags &= ~GV_BIO_ONHOLD; + g_io_request(bp, s->drive_sc->consumer); + } + /* A special request requireing special handling. */ + } else if (bp->bio_pflags & GV_BIO_INTERNAL) { + p = bp->bio_caller1; + gv_plex_start(p, bp); + } else { + gv_volume_start(sc, bp); + } + mtx_lock(&sc->bqueue_mtx); + } + /* Then do completed requests. */ + bp = bioq_takefirst(sc->bqueue_up); if (bp == NULL) { msleep(sc, &sc->bqueue_mtx, PRIBIO, "-", hz/10); mtx_unlock(&sc->bqueue_mtx); continue; } mtx_unlock(&sc->bqueue_mtx); - - /* A bio that is coming up from an underlying device. */ - if (bp->bio_cflags & GV_BIO_DONE) { - gv_bio_done(sc, bp); - /* A bio that interfered with another bio. */ - } else if (bp->bio_cflags & GV_BIO_ONHOLD) { - s = bp->bio_caller1; - p = s->plex_sc; - /* Is it still locked out? */ - if (gv_stripe_active(p, bp)) { - /* Park the bio on the waiting queue. 
*/ - bioq_disksort(p->wqueue, bp); - } else { - bp->bio_cflags &= ~GV_BIO_ONHOLD; - g_io_request(bp, s->drive_sc->consumer); - } - /* A special request requireing special handling. */ - } else if (bp->bio_cflags & GV_BIO_INTERNAL || - bp->bio_pflags & GV_BIO_INTERNAL) { - p = bp->bio_caller1; - gv_plex_start(p, bp); - /* A fresh bio, scheduled it down. */ - } else { - gv_volume_start(sc, bp); - } + gv_bio_done(sc, bp); } } Modified: head/sys/geom/vinum/geom_vinum.h ============================================================================== --- head/sys/geom/vinum/geom_vinum.h Wed May 6 19:18:19 2009 (r191855) +++ head/sys/geom/vinum/geom_vinum.h Wed May 6 19:34:32 2009 (r191856) @@ -127,7 +127,6 @@ void gv_remove_event(struct gv_softc *, void gv_drive_tasted(struct gv_softc *, struct g_provider *); void gv_drive_lost(struct gv_softc *, struct gv_drive *); void gv_setup_objects(struct gv_softc *); -void gv_post_bio(struct gv_softc *, struct bio *); void gv_start(struct bio *); int gv_access(struct g_provider *, int, int, int); void gv_cleanup(struct gv_softc *); Modified: head/sys/geom/vinum/geom_vinum_plex.c ============================================================================== --- head/sys/geom/vinum/geom_vinum_plex.c Wed May 6 19:18:19 2009 (r191855) +++ head/sys/geom/vinum/geom_vinum_plex.c Wed May 6 19:34:32 2009 (r191856) @@ -48,6 +48,8 @@ static int gv_plex_offset(struct gv_plex int *, int); static int gv_plex_normal_request(struct gv_plex *, struct bio *, off_t, off_t, caddr_t); +static void gv_post_bio(struct gv_softc *, struct bio *); + void gv_plex_start(struct gv_plex *p, struct bio *bp) { @@ -111,7 +113,7 @@ gv_plex_start(struct gv_plex *p, struct */ if (cbp->bio_caller2 != NULL && gv_stripe_active(p, cbp)) { /* Park the bio on the waiting queue. */ - cbp->bio_cflags |= GV_BIO_ONHOLD; + cbp->bio_pflags |= GV_BIO_ONHOLD; bioq_disksort(p->wqueue, cbp); } else { s = cbp->bio_caller1; @@ -209,7 +211,7 @@ gv_plex_normal_request(struct gv_plex *p goto bad; err = gv_plex_offset(p, boff, bcount, &real_off, - &real_len, &sdno, (bp->bio_pflags & GV_BIO_SYNCREQ)); + &real_len, &sdno, (bp->bio_pflags & GV_BIO_GROW)); /* If the request was blocked, put it into wait. */ if (err == GV_ERR_ISBUSY) { bioq_disksort(p->rqueue, bp); @@ -239,12 +241,12 @@ gv_plex_normal_request(struct gv_plex *p /* If the subdisk is up, just continue. */ break; case GV_SD_DOWN: - if (bp->bio_cflags & GV_BIO_INTERNAL) + if (bp->bio_pflags & GV_BIO_INTERNAL) G_VINUM_DEBUG(0, "subdisk must be in the stale state in" " order to perform administrative requests"); goto bad; case GV_SD_STALE: - if (!(bp->bio_cflags & GV_BIO_SYNCREQ)) { + if (!(bp->bio_pflags & GV_BIO_SYNCREQ)) { G_VINUM_DEBUG(0, "subdisk stale, unable to perform " "regular requests"); goto bad; @@ -273,8 +275,6 @@ gv_plex_normal_request(struct gv_plex *p cbp->bio_data = addr; cbp->bio_done = gv_done; cbp->bio_caller1 = s; - if ((bp->bio_cflags & GV_BIO_SYNCREQ)) - cbp->bio_cflags |= GV_BIO_SYNCREQ; /* Store the sub-requests now and let others issue them. */ bioq_insert_tail(p->bqueue, cbp); @@ -282,8 +282,8 @@ gv_plex_normal_request(struct gv_plex *p bad: G_VINUM_LOGREQ(0, bp, "plex request failed."); /* Building the sub-request failed. If internal BIO, do not deliver. 
*/ - if (bp->bio_cflags & GV_BIO_INTERNAL) { - if (bp->bio_cflags & GV_BIO_MALLOC) + if (bp->bio_pflags & GV_BIO_INTERNAL) { + if (bp->bio_pflags & GV_BIO_MALLOC) g_free(bp->bio_data); g_destroy_bio(bp); p->flags &= ~(GV_PLEX_SYNCING | GV_PLEX_REBUILDING | @@ -311,9 +311,9 @@ gv_plex_normal_done(struct gv_plex *p, s /* Just set it to length since multiple plexes will * screw things up. */ pbp->bio_completed = pbp->bio_length; - if (pbp->bio_cflags & GV_BIO_SYNCREQ) + if (pbp->bio_pflags & GV_BIO_SYNCREQ) gv_sync_complete(p, pbp); - else if (pbp->bio_pflags & GV_BIO_SYNCREQ) + else if (pbp->bio_pflags & GV_BIO_GROW) gv_grow_complete(p, pbp); else g_io_deliver(pbp, pbp->bio_error); @@ -392,7 +392,7 @@ gv_plex_raid5_done(struct gv_plex *p, st /* Handle parity data. */ if (TAILQ_EMPTY(&wp->bits)) { - if (bp->bio_parent->bio_cflags & GV_BIO_CHECK) + if (bp->bio_parent->bio_pflags & GV_BIO_CHECK) i = gv_check_parity(p, bp, wp); else i = gv_normal_parity(p, bp, wp); @@ -424,16 +424,16 @@ gv_plex_raid5_done(struct gv_plex *p, st if (pbp->bio_inbed == pbp->bio_children) { /* Hand it over for checking or delivery. */ if (pbp->bio_cmd == BIO_WRITE && - (pbp->bio_cflags & GV_BIO_CHECK)) { + (pbp->bio_pflags & GV_BIO_CHECK)) { gv_parity_complete(p, pbp); } else if (pbp->bio_cmd == BIO_WRITE && - (pbp->bio_cflags & GV_BIO_REBUILD)) { + (pbp->bio_pflags & GV_BIO_REBUILD)) { gv_rebuild_complete(p, pbp); - } else if (pbp->bio_cflags & GV_BIO_INIT) { + } else if (pbp->bio_pflags & GV_BIO_INIT) { gv_init_complete(p, pbp); - } else if (pbp->bio_cflags & GV_BIO_SYNCREQ) { - gv_sync_complete(p, pbp); } else if (pbp->bio_pflags & GV_BIO_SYNCREQ) { + gv_sync_complete(p, pbp); + } else if (pbp->bio_pflags & GV_BIO_GROW) { gv_grow_complete(p, pbp); } else { g_io_deliver(pbp, pbp->bio_error); @@ -480,7 +480,7 @@ gv_check_parity(struct gv_plex *p, struc bp->bio_parent->bio_error = EAGAIN; /* ... but we rebuild it. */ - if (bp->bio_parent->bio_cflags & GV_BIO_PARITY) { + if (bp->bio_parent->bio_pflags & GV_BIO_PARITY) { s = pbp->bio_caller1; g_io_request(pbp, s->drive_sc->consumer); finished = 0; @@ -546,6 +546,18 @@ gv_plex_flush(struct gv_plex *p) } } +static void +gv_post_bio(struct gv_softc *sc, struct bio *bp) +{ + + KASSERT(sc != NULL, ("NULL sc")); + KASSERT(bp != NULL, ("NULL bp")); + mtx_lock(&sc->bqueue_mtx); + bioq_disksort(sc->bqueue_down, bp); + wakeup(sc); + mtx_unlock(&sc->bqueue_mtx); +} + int gv_sync_request(struct gv_plex *from, struct gv_plex *to, off_t offset, off_t length, int type, caddr_t data) @@ -566,14 +578,14 @@ gv_sync_request(struct gv_plex *from, st } bp->bio_length = length; bp->bio_done = gv_done; - bp->bio_cflags |= GV_BIO_SYNCREQ; + bp->bio_pflags |= GV_BIO_SYNCREQ; bp->bio_offset = offset; bp->bio_caller1 = from; bp->bio_caller2 = to; bp->bio_cmd = type; if (data == NULL) data = g_malloc(length, M_WAITOK); - bp->bio_cflags |= GV_BIO_MALLOC; /* Free on the next run. */ + bp->bio_pflags |= GV_BIO_MALLOC; /* Free on the next run. */ bp->bio_data = data; /* Send down next. */ @@ -613,7 +625,7 @@ gv_sync_complete(struct gv_plex *to, str BIO_WRITE, bp->bio_data); /* If it was a write, read the next one. */ } else if (bp->bio_cmd == BIO_WRITE) { - if (bp->bio_cflags & GV_BIO_MALLOC) + if (bp->bio_pflags & GV_BIO_MALLOC) g_free(bp->bio_data); to->synced += bp->bio_length; /* If we're finished, clean up. 
*/ @@ -684,10 +696,10 @@ gv_grow_request(struct gv_plex *p, off_t bp->bio_caller1 = p; bp->bio_offset = offset; bp->bio_length = length; - bp->bio_pflags |= GV_BIO_SYNCREQ; /* XXX: misuse of pflags AND syncreq.*/ + bp->bio_pflags |= GV_BIO_GROW; if (data == NULL) data = g_malloc(length, M_WAITOK); - bp->bio_cflags |= GV_BIO_MALLOC; + bp->bio_pflags |= GV_BIO_MALLOC; bp->bio_data = data; gv_post_bio(sc, bp); @@ -720,7 +732,7 @@ gv_grow_complete(struct gv_plex *p, stru BIO_WRITE, bp->bio_data); /* If it was a write, read next. */ } else if (bp->bio_cmd == BIO_WRITE) { - if (bp->bio_cflags & GV_BIO_MALLOC) + if (bp->bio_pflags & GV_BIO_MALLOC) g_free(bp->bio_data); /* Find the real size of the plex. */ @@ -790,7 +802,7 @@ gv_init_request(struct gv_sd *s, off_t s bp->bio_done = gv_done; bp->bio_error = 0; bp->bio_length = length; - bp->bio_cflags |= GV_BIO_INIT; + bp->bio_pflags |= GV_BIO_INIT; bp->bio_offset = start; bp->bio_caller1 = s; @@ -908,8 +920,8 @@ gv_parity_request(struct gv_plex *p, int return; } - bp->bio_cflags = flags; - bp->bio_cflags |= GV_BIO_MALLOC; + bp->bio_pflags = flags; + bp->bio_pflags |= GV_BIO_MALLOC; /* We still have more parity to build. */ bp->bio_offset = offset; @@ -927,14 +939,14 @@ gv_parity_complete(struct gv_plex *p, st int error, flags; error = bp->bio_error; - flags = bp->bio_cflags; + flags = bp->bio_pflags; flags &= ~GV_BIO_MALLOC; sc = p->vinumconf; KASSERT(sc != NULL, ("gv_parity_complete: NULL sc")); /* Clean up what we allocated. */ - if (bp->bio_cflags & GV_BIO_MALLOC) + if (bp->bio_pflags & GV_BIO_MALLOC) g_free(bp->bio_data); g_destroy_bio(bp); @@ -986,14 +998,14 @@ gv_rebuild_complete(struct gv_plex *p, s off_t offset; error = bp->bio_error; - flags = bp->bio_cflags; + flags = bp->bio_pflags; offset = bp->bio_offset; flags &= ~GV_BIO_MALLOC; sc = p->vinumconf; KASSERT(sc != NULL, ("gv_rebuild_complete: NULL sc")); /* Clean up what we allocated. */ - if (bp->bio_cflags & GV_BIO_MALLOC) + if (bp->bio_pflags & GV_BIO_MALLOC) g_free(bp->bio_data); g_destroy_bio(bp); Modified: head/sys/geom/vinum/geom_vinum_raid5.c ============================================================================== --- head/sys/geom/vinum/geom_vinum_raid5.c Wed May 6 19:18:19 2009 (r191855) +++ head/sys/geom/vinum/geom_vinum_raid5.c Wed May 6 19:34:32 2009 (r191856) @@ -65,9 +65,9 @@ gv_raid5_start(struct gv_plex *p, struct wp->parity = NULL; TAILQ_INIT(&wp->bits); - if (bp->bio_cflags & GV_BIO_REBUILD) + if (bp->bio_pflags & GV_BIO_REBUILD) err = gv_raid5_rebuild(p, wp, bp, addr, boff, bcount); - else if (bp->bio_cflags & GV_BIO_CHECK) + else if (bp->bio_pflags & GV_BIO_CHECK) err = gv_raid5_check(p, wp, bp, addr, boff, bcount); else err = gv_raid5_request(p, wp, bp, addr, boff, bcount, &delay); @@ -120,8 +120,8 @@ gv_raid5_start(struct gv_plex *p, struct } /* If internal, stop and reset state. */ - if (bp->bio_cflags & GV_BIO_INTERNAL) { - if (bp->bio_cflags & GV_BIO_MALLOC) + if (bp->bio_pflags & GV_BIO_INTERNAL) { + if (bp->bio_pflags & GV_BIO_MALLOC) g_free(bp->bio_data); g_destroy_bio(bp); /* Reset flags. 
*/ @@ -277,7 +277,7 @@ gv_raid5_rebuild(struct gv_plex *p, stru return (EINVAL); case GV_SD_STALE: - if (!(bp->bio_cflags & GV_BIO_REBUILD)) + if (!(bp->bio_pflags & GV_BIO_REBUILD)) return (ENXIO); G_VINUM_DEBUG(1, "sd %s is reviving", broken->name); @@ -326,7 +326,6 @@ gv_raid5_rebuild(struct gv_plex *p, stru cbp = gv_raid5_clone_bio(bp, broken, wp, NULL, 1); if (cbp == NULL) return (ENOMEM); - cbp->bio_cflags |= GV_BIO_REBUILD; wp->parity = cbp; p->synced = boff; @@ -400,7 +399,7 @@ gv_raid5_request(struct gv_plex *p, stru /* If synchronizing request, just write it if disks are stale. */ if (original->state == GV_SD_STALE && parity->state == GV_SD_STALE && - bp->bio_cflags & GV_BIO_SYNCREQ && bp->bio_cmd == BIO_WRITE) { + bp->bio_pflags & GV_BIO_SYNCREQ && bp->bio_cmd == BIO_WRITE) { type = REQ_TYPE_NORMAL; /* Our parity stripe is missing. */ } else if (parity->state != GV_SD_UP) { Modified: head/sys/geom/vinum/geom_vinum_var.h ============================================================================== --- head/sys/geom/vinum/geom_vinum_var.h Wed May 6 19:18:19 2009 (r191855) +++ head/sys/geom/vinum/geom_vinum_var.h Wed May 6 19:34:32 2009 (r191856) @@ -108,7 +108,7 @@ #define GV_DFLT_SYNCSIZE 65536 /* Flags for BIOs, as they are processed within vinum. */ -#define GV_BIO_DONE 0x01 +#define GV_BIO_GROW 0x01 #define GV_BIO_MALLOC 0x02 #define GV_BIO_ONHOLD 0x04 #define GV_BIO_SYNCREQ 0x08 @@ -117,7 +117,7 @@ #define GV_BIO_CHECK 0x40 #define GV_BIO_PARITY 0x80 #define GV_BIO_INTERNAL \ - (GV_BIO_SYNCREQ | GV_BIO_INIT | GV_BIO_REBUILD |GV_BIO_CHECK) + (GV_BIO_SYNCREQ | GV_BIO_INIT | GV_BIO_REBUILD | GV_BIO_CHECK | GV_BIO_GROW) /* Error codes to be used within gvinum. */ #define GV_ERR_SETSTATE (-1) /* Error setting state. */ @@ -233,7 +233,10 @@ struct gv_softc { struct mtx equeue_mtx; /* Event queue lock. */ struct mtx bqueue_mtx; /* BIO queue lock. */ struct mtx config_mtx; /* Configuration lock. */ - struct bio_queue_head *bqueue; /* BIO queue. */ + struct bio_queue_head *bqueue_down; /* BIO queue incoming + requests. */ + struct bio_queue_head *bqueue_up; /* BIO queue for completed + requests. */ struct g_geom *geom; /* Pointer to our VINUM geom. 
*/
};

#endif

From owner-freebsd-geom@FreeBSD.ORG Wed May  6 19:46:49 2009
Date: Wed, 6 May 2009 19:46:49 GMT
Message-Id: <200905061946.n46JknuN060150@freefall.freebsd.org>
From: lulf@FreeBSD.org
To: pmc@citylink.dinoex.sub.org, lulf@FreeBSD.org, freebsd-geom@FreeBSD.org
Subject: Re: kern/133604: [gvinum] [panic] writing to gjournal on a gvinum volume

Synopsis: [gvinum] [panic] writing to gjournal on a gvinum volume

State-Changed-From-To: open->closed
State-Changed-By: lulf
State-Changed-When: Wed May 6 19:45:37 UTC 2009
State-Changed-Why:
- A fix was committed. A standalone fix will probably not be committed to
  -STABLE until a possible MFC of gvinum from HEAD.
http://www.freebsd.org/cgi/query-pr.cgi?pr=133604

From owner-freebsd-geom@FreeBSD.ORG Fri May  8 19:19:11 2009
Date: Fri, 08 May 2009 21:55:28 +0300
Message-ID: <4A048020.3070709@gmail.com>
From: Török Edwin <edwintorok@gmail.com>
To: freebsd-geom@freebsd.org
Subject: compatibility with Linux software RAID?

Hi,

Apologies if this has been answered before, but is there a way to access
(even if just read-only) RAID arrays created by the Linux software RAID
tools (mdadm)?  I want to test the new superpages support in FreeBSD 7.2
and do some performance comparisons with Linux; the problem is that all my
data is on a RAID10 array.

I have this RAID10 array (according to /proc/mdstat on Linux):

md4 : active raid10 sda3[0] sdc3[5] sdf3[4] sdd3[3] sde3[2] sdb3[1]
      2159617728 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

See here for how Linux's md driver lays out the data:
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10

Does FreeBSD support reading this RAID format?

Alternatively, I also have a RAID1 array on the same disks:

md3 : active raid1 sda1[0] sdd1[5] sde1[4] sdf1[3] sdb1[2] sdc1[1]
      9767424 blocks [6/6] [UUUUUU]

I see that FreeBSD supports RAID1, but will it read in parallel from all 6
disks?  (If not, I can just mount one of them read-only.)  Also, if gmirror
does support the above, is there a way to tell it to treat the disks as
read-only?  And is there support for recognizing Linux RAID superblocks
(partition type 0xf Linux raid autodetect)?

Best regards,
--Edwin
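For reference, the "2 near-copies" layout in the md4 line above places each
64K chunk on a pair of adjacent member disks. The sketch below only
illustrates that layout as described on the Wikipedia page Edwin links,
assuming the usual case where the copy count divides the disk count; the
constants NDISKS, NCOPIES and CHUNK_KB are taken from the mdstat output,
and the code is not from md, GEOM or any FreeBSD tool.

/*
 * Print where the md "near" layout puts the first few chunks of an array
 * like md4 above: 6 disks, 2 near-copies, 64K chunks.  Purely illustrative.
 */
#include <stdio.h>

#define NDISKS   6
#define NCOPIES  2	/* "2 near-copies" */
#define CHUNK_KB 64	/* "64K chunks" */

int
main(void)
{
	unsigned chunk, copy;
	unsigned per_row = NDISKS / NCOPIES;	/* distinct chunks per stripe row */

	for (chunk = 0; chunk < 9; chunk++) {
		unsigned row = chunk / per_row;		/* chunk-sized offset on each member */
		unsigned slot = chunk % per_row;	/* position within the row */

		printf("chunk %2u (array offset %4u KiB): ", chunk, chunk * CHUNK_KB);
		for (copy = 0; copy < NCOPIES; copy++)
			printf("disk %u @ %u KiB%s", slot * NCOPIES + copy,
			    row * CHUNK_KB, copy + 1 < NCOPIES ? ", " : "\n");
	}
	return (0);
}

With 2 copies on 6 disks, each stripe row holds 3 distinct chunks, each
mirrored on a pair of adjacent members, so the array behaves like a stripe
of three 2-way mirrors; that is the on-disk format a FreeBSD reader would
have to understand.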