Date:      Tue, 03 Sep 2019 13:38:46 -0700
From:      Ravi Pokala <rpokala@freebsd.org>
To:        Warner Losh <imp@FreeBSD.org>, <src-committers@freebsd.org>, <svn-src-all@freebsd.org>, <svn-src-head@freebsd.org>
Subject:   Re: svn commit: r351747 - head/sys/dev/nvme
Message-ID:  <F2340AD7-11C8-4650-8945-D8B2EC9A8FBB@panasas.com>
In-Reply-To: <201909031526.x83FQBbE002770@repo.freebsd.org>
References:  <201909031526.x83FQBbE002770@repo.freebsd.org>

Hi Warner,

    +	/*
    +	 * Per Section 7.6.2 of NVMe spec 1.4, to properly suspend, we need to
    +	 * delete the hardware I/O queues, and then shutdown. This properly
    +	 * flushes any metadata the drive may have stored so it can survive
    +	 * having its power removed and prevents the unsafe shutdown count from
    +	 * incrementing. Once we delete the qpairs, we have to disable them
    +	 * before shutting down. The delay is out of paranoia in
    +	 * nvme_ctrlr_hw_reset, and is repeated here (though we should have no
    +	 * pending I/O that the delay copes with).
    +	 */
    +	nvme_ctrlr_delete_qpairs(ctrlr);
    +	nvme_ctrlr_disable_qpairs(ctrlr);
    +	DELAY(100*1000);
    +	nvme_ctrlr_shutdown(ctrlr);

This seems backwards to me. From that section:

> The host should perform the following actions in sequence for a normal shutdown:
>     1. Stop submitting any new I/O commands to the controller and allow any outstanding commands to complete;
>     2. If the controller implements I/O queues, then the host should delete all I/O Submission Queues, using the Delete I/O Submission Queue command. A result of the successful completion of the Delete I/O Submission Queue command is that any remaining commands outstanding are aborted;
>     3. If the controller implements I/O queues, then the host should delete all I/O Completion Queues, using the Delete I/O Completion Queue command; and
>     4. The host should set the Shutdown Notification (CC.SHN) field to 01b to indicate a normal shutdown operation. The controller indicates when shutdown processing is completed by updating the Shutdown Status (CSTS.SHST) field to 10b.

You are calling nvme_ctrlr_delete_qpairs() -- which implements steps (2) and (3) -- before you are calling nvme_ctrlr_disable_qpairs() -- which implements step (1).
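
Just to illustrate what I would have expected the tail of nvme_ctrlr_suspend() to look like, reusing the helpers this commit adds -- this is a sketch of the ordering only, not a proposed patch:

    	/* Step 1: software-disable the qpairs so no new commands are submitted. */
    	nvme_ctrlr_disable_qpairs(ctrlr);
    	/* Steps 2 and 3: delete the hardware I/O submission and completion queues. */
    	nvme_ctrlr_delete_qpairs(ctrlr);
    	/* Same paranoia delay as in nvme_ctrlr_hw_reset(). */
    	DELAY(100*1000);
    	/* Step 4: normal shutdown notification (CC.SHN = 01b). */
    	nvme_ctrlr_shutdown(ctrlr);

That is, disable first, delete second, and only then notify the shutdown.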

Or am I missing something?

Thanks,

Ravi (rpokala@)


-----Original Message-----
From: <owner-src-committers@freebsd.org> on behalf of Warner Losh <imp@FreeBSD.org>
Date: 2019-09-03, Tuesday at 08:26
To: <src-committers@freebsd.org>, <svn-src-all@freebsd.org>, <svn-src-head@freebsd.org>
Subject: svn commit: r351747 - head/sys/dev/nvme

    Author: imp
    Date: Tue Sep  3 15:26:11 2019
    New Revision: 351747
    URL: https://svnweb.freebsd.org/changeset/base/351747

    Log:
      Implement nvme suspend / resume for pci attachment

      When we suspend, we need to properly shutdown the NVME controller. The
      controller may go into D3 state (or may have the power removed), and
      to properly flush the metadata to non-volatile RAM, we must complete a
      normal shutdown. This consists of deleting the I/O queues and setting
      the shutdown bit. We have to do some extra stuff to make sure we
      reset the software state of the queues as well.

      On resume, we have to reset the card twice, for reasons described in
      the attach function. Once we've done that, we can restart the card. If
      any of this fails, we'll fail the NVMe card, just like we do when a
      reset fails.

      Set is_resetting for the duration of the suspend / resume. This keeps
      the reset taskqueue from running a concurrent reset, and also is
      needed to prevent any hw completions from queueing more I/O to the
      card. Pass resetting flag to nvme_ctrlr_start. It doesn't need to get
      that from the global state of the ctrlr. Wait for any pending reset to
      finish. All queued I/O will get sent to the hardware as part of
      nvme_ctrlr_start(), though the upper layers shouldn't send any
      down. Disabling the qpairs is the other failsafe to ensure all I/O is
      queued.

      Rename nvme_ctrlr_destroy_qpairs to nvme_ctrlr_delete_qpairs to avoid
      confusion with all the other destroy functions.  It just removes the
      queues in hardware, while the other _destroy_ functions tear down
      driver data structures.

      Split parts of the hardware reset function up so that I can
      do part of the reset in suspend. Split out the software disabling
      of the qpairs into nvme_ctrlr_disable_qpairs.

      Finally, fix a couple of spelling errors in comments related to
      this.

      Relnotes: Yes
      MFC After: 1 week
      Reviewed by: scottl@ (prior version)
      Differential Revision: https://reviews.freebsd.org/D21493

    Modified:
      head/sys/dev/nvme/nvme.c
      head/sys/dev/nvme/nvme_ctrlr.c
      head/sys/dev/nvme/nvme_pci.c
      head/sys/dev/nvme/nvme_private.h

    Modified: head/sys/dev/nvme/nvme.c
    ==============================================================================
    --- head/sys/dev/nvme/nvme.c	Tue Sep  3 14:55:19 2019	(r351746)
    +++ head/sys/dev/nvme/nvme.c	Tue Sep  3 15:26:11 2019	(r351747)
    @@ -137,9 +137,10 @@ nvme_attach(device_t dev)
     	}

     	/*
    -	 * Reset controller twice to ensure we do a transition from cc.en==1
    -	 *  to cc.en==0.  This is because we don't really know what status
    -	 *  the controller was left in when boot handed off to OS.
    +	 * Reset controller twice to ensure we do a transition from cc.en==1 to
    +	 * cc.en==0.  This is because we don't really know what status the
    +	 * controller was left in when boot handed off to OS.  Linux doesn't do
    +	 * this, however. If we adopt that policy, see also nvme_ctrlr_resume().
     	 */
     	status = nvme_ctrlr_hw_reset(ctrlr);
     	if (status != 0) {

    Modified: head/sys/dev/nvme/nvme_ctrlr.c
    ==============================================================================
    --- head/sys/dev/nvme/nvme_ctrlr.c	Tue Sep  3 14:55:19 2019	(r351746)
    +++ head/sys/dev/nvme/nvme_ctrlr.c	Tue Sep  3 15:26:11 2019	(r351747)
    @@ -118,8 +118,8 @@ nvme_ctrlr_construct_io_qpairs(struct nvme_controller

     	/*
     	 * Our best estimate for the maximum number of I/Os that we should
    -	 * noramlly have in flight at one time. This should be viewed as a hint,
    -	 * not a hard limit and will need to be revisitted when the upper layers
    +	 * normally have in flight at one time. This should be viewed as a hint,
    +	 * not a hard limit and will need to be revisited when the upper layers
     	 * of the storage system grows multi-queue support.
     	 */
     	ctrlr->max_hw_pend_io = num_trackers * ctrlr->num_io_queues * 3 / 4;
    @@ -344,10 +344,10 @@ nvme_ctrlr_enable(struct nvme_controller *ctrlr)
     	return (nvme_ctrlr_wait_for_ready(ctrlr, 1));
     }

    -int
    -nvme_ctrlr_hw_reset(struct nvme_controller *ctrlr)
    +static void
    +nvme_ctrlr_disable_qpairs(struct nvme_controller *ctrlr)
     {
    -	int i, err;
    +	int i;

     	nvme_admin_qpair_disable(&ctrlr->adminq);
     	/*
    @@ -359,7 +359,15 @@ nvme_ctrlr_hw_reset(struct nvme_controller *ctrlr)
     		for (i = 0; i < ctrlr->num_io_queues; i++)
     			nvme_io_qpair_disable(&ctrlr->ioq[i]);
     	}
    +}

    +int
    +nvme_ctrlr_hw_reset(struct nvme_controller *ctrlr)
    +{
    +	int err;
    +
    +	nvme_ctrlr_disable_qpairs(ctrlr);
    +
     	DELAY(100*1000);

     	err = nvme_ctrlr_disable(ctrlr);
    @@ -481,7 +489,7 @@ nvme_ctrlr_create_qpairs(struct nvme_controller *ctrlr
     }

     static int
    -nvme_ctrlr_destroy_qpairs(struct nvme_controller *ctrlr)
    +nvme_ctrlr_delete_qpairs(struct nvme_controller *ctrlr)
     {
     	struct nvme_completion_poll_status	status;
     	struct nvme_qpair			*qpair;
    @@ -820,7 +828,7 @@ nvme_ctrlr_configure_int_coalescing(struct nvme_contro
     }

     static void
    -nvme_ctrlr_start(void *ctrlr_arg)
    +nvme_ctrlr_start(void *ctrlr_arg, bool resetting)
     {
     	struct nvme_controller *ctrlr = ctrlr_arg;
     	uint32_t old_num_io_queues;
    @@ -833,7 +841,7 @@ nvme_ctrlr_start(void *ctrlr_arg)
     	 *  the number of I/O queues supported, so cannot reset
     	 *  the adminq again here.
     	 */
    -	if (ctrlr->is_resetting)
    +	if (resetting)
     		nvme_qpair_reset(&ctrlr->adminq);

     	for (i = 0; i < ctrlr->num_io_queues; i++)
    @@ -854,7 +862,7 @@ nvme_ctrlr_start(void *ctrlr_arg)
     	 *  explicit specify how many queues it will use.  This value should
     	 *  never change between resets, so panic if somehow that does happen.
     	 */
    -	if (ctrlr->is_resetting) {
    +	if (resetting) {
     		old_num_io_queues = ctrlr->num_io_queues;
     		if (nvme_ctrlr_set_num_qpairs(ctrlr) != 0) {
     			nvme_ctrlr_fail(ctrlr);
    @@ -894,7 +902,7 @@ nvme_ctrlr_start_config_hook(void *arg)

     	if (nvme_ctrlr_set_num_qpairs(ctrlr) == 0 &&
     	    nvme_ctrlr_construct_io_qpairs(ctrlr) == 0)
    -		nvme_ctrlr_start(ctrlr);
    +		nvme_ctrlr_start(ctrlr, false);
     	else
     		nvme_ctrlr_fail(ctrlr);

    @@ -923,7 +931,7 @@ nvme_ctrlr_reset_task(void *arg, int pending)
     	 */
     	pause("nvmereset", hz / 10);
     	if (status == 0)
    -		nvme_ctrlr_start(ctrlr);
    +		nvme_ctrlr_start(ctrlr, true);
     	else
     		nvme_ctrlr_fail(ctrlr);

    @@ -946,7 +954,7 @@ nvme_ctrlr_poll(struct nvme_controller *ctrlr)
     }

     /*
    - * Poll the single-vector intertrupt case: num_io_queues will be 1 and
    + * Poll the single-vector interrupt case: num_io_queues will be 1 and
      * there's only a single vector. While we're polling, we mask further
      * interrupts in the controller.
      */
    @@ -1012,7 +1020,7 @@ nvme_ctrlr_passthrough_cmd(struct nvme_controller *ctr
     		if (is_user_buffer) {
     			/*
     			 * Ensure the user buffer is wired for the duration of
    -			 *  this passthrough command.
    +			 *  this pass-through command.
     			 */
     			PHOLD(curproc);
     			buf = uma_zalloc(pbuf_zone, M_WAITOK);
    @@ -1031,7 +1039,7 @@ nvme_ctrlr_passthrough_cmd(struct nvme_controller *ctr
     	} else
     		req = nvme_allocate_request_null(nvme_pt_done, pt);

    -	/* Assume userspace already converted to little-endian */
    +	/* Assume user space already converted to little-endian */
     	req->cmd.opc = pt->cmd.opc;
     	req->cmd.fuse = pt->cmd.fuse;
     	req->cmd.rsvd2 = pt->cmd.rsvd2;
    @@ -1206,7 +1214,7 @@ nvme_ctrlr_destruct(struct nvme_controller *ctrlr, dev

     	if (ctrlr->is_initialized) {
     		if (!gone)
    -			nvme_ctrlr_destroy_qpairs(ctrlr);
    +			nvme_ctrlr_delete_qpairs(ctrlr);
     		for (i = 0; i < ctrlr->num_io_queues; i++)
     			nvme_io_qpair_destroy(&ctrlr->ioq[i]);
     		free(ctrlr->ioq, M_NVME);
    @@ -1305,4 +1313,88 @@ nvme_ctrlr_get_data(struct nvme_controller *ctrlr)
     {

     	return (&ctrlr->cdata);
    +}
    +
    +int
    +nvme_ctrlr_suspend(struct nvme_controller *ctrlr)
    +{
    +	int to = hz;
    +
    +	/*
    +	 * Can't touch failed controllers, so it's already suspended.
    +	 */
    +	if (ctrlr->is_failed)
    +		return (0);
    +
    +	/*
    +	 * We don't want the reset taskqueue running, since it does similar
    +	 * things, so prevent it from running after we start. Wait for any reset
    +	 * that may have been started to complete. The reset process we follow
    +	 * will ensure that any new I/O will queue and be given to the hardware
    +	 * after we resume (though there should be none).
    +	 */
    +	while (atomic_cmpset_32(&ctrlr->is_resetting, 0, 1) == 0 && to-- > 0)
    +		pause("nvmesusp", 1);
    +	if (to <= 0) {
    +		nvme_printf(ctrlr,
    +		    "Competing reset task didn't finish. Try again later.\n");
    +		return (EWOULDBLOCK);
    +	}
    +
    +	/*
    +	 * Per Section 7.6.2 of NVMe spec 1.4, to properly suspend, we need to
    +	 * delete the hardware I/O queues, and then shutdown. This properly
    +	 * flushes any metadata the drive may have stored so it can survive
    +	 * having its power removed and prevents the unsafe shutdown count from
    +	 * incrementing. Once we delete the qpairs, we have to disable them
    +	 * before shutting down. The delay is out of paranoia in
    +	 * nvme_ctrlr_hw_reset, and is repeated here (though we should have no
    +	 * pending I/O that the delay copes with).
    +	 */
    +	nvme_ctrlr_delete_qpairs(ctrlr);
    +	nvme_ctrlr_disable_qpairs(ctrlr);
    +	DELAY(100*1000);
    +	nvme_ctrlr_shutdown(ctrlr);
    +
    +	return (0);
    +}
    +
    +int
    +nvme_ctrlr_resume(struct nvme_controller *ctrlr)
    +{
    +
    +	/*
    +	 * Can't touch failed controllers, so nothing to do to resume.
    +	 */
    +	if (ctrlr->is_failed)
    +		return (0);
    +
    +	/*
    +	 * Have to reset the hardware twice, just like we do on attach. See
    +	 * nvme_attach() for why.
    +	 */
    +	if (nvme_ctrlr_hw_reset(ctrlr) != 0)
    +		goto fail;
    +	if (nvme_ctrlr_hw_reset(ctrlr) != 0)
    +		goto fail;
    +
    +	/*
    +	 * Now that we're reset the hardware, we can restart the controller. Any
    +	 * I/O that was pending is requeued. Any admin commands are aborted with
    +	 * an error. Once we've restarted, take the controller out of reset.
    +	 */
    +	nvme_ctrlr_start(ctrlr, true);
    +	atomic_cmpset_32(&ctrlr->is_resetting, 1, 0);
    +
    +	return (0);
    +fail:
    +	/*
    +	 * Since we can't bring the controller out of reset, announce and fail
    +	 * the controller. However, we have to return success for the resume
    +	 * itself, due to questionable APIs.
    +	 */
    +	nvme_printf(ctrlr, "Failed to reset on resume, failing.\n");
    +	nvme_ctrlr_fail(ctrlr);
    +	atomic_cmpset_32(&ctrlr->is_resetting, 1, 0);
    +	return (0);
     }

    Modified: head/sys/dev/nvme/nvme_pci.c
    ==============================================================================
    --- head/sys/dev/nvme/nvme_pci.c	Tue Sep  3 14:55:19 2019	(r351746)
    +++ head/sys/dev/nvme/nvme_pci.c	Tue Sep  3 15:26:11 2019	(r351747)
    @@ -43,6 +43,8 @@ __FBSDID("$FreeBSD$");
     static int    nvme_pci_probe(device_t);
     static int    nvme_pci_attach(device_t);
     static int    nvme_pci_detach(device_t);
    +static int    nvme_pci_suspend(device_t);
    +static int    nvme_pci_resume(device_t);

     static void nvme_ctrlr_setup_interrupts(struct nvme_controller *ctrlr);

    @@ -51,6 +53,8 @@ static device_method_t nvme_pci_methods[] = {
     	DEVMETHOD(device_probe,     nvme_pci_probe),
     	DEVMETHOD(device_attach,    nvme_pci_attach),
     	DEVMETHOD(device_detach,    nvme_pci_detach),
    +	DEVMETHOD(device_suspend,   nvme_pci_suspend),
    +	DEVMETHOD(device_resume,    nvme_pci_resume),
     	DEVMETHOD(device_shutdown,  nvme_shutdown),
     	{ 0, 0 }
     };
    @@ -331,4 +335,22 @@ nvme_ctrlr_setup_interrupts(struct nvme_controller *ct
     	}

     	ctrlr->msix_enabled = 1;
    +}
    +
    +static int
    +nvme_pci_suspend(device_t dev)
    +{
    +	struct nvme_controller	*ctrlr;
    +
    +	ctrlr = DEVICE2SOFTC(dev);
    +	return (nvme_ctrlr_suspend(ctrlr));
    +}
    +
    +static int
    +nvme_pci_resume(device_t dev)
    +{
    +	struct nvme_controller	*ctrlr;
    +
    +	ctrlr = DEVICE2SOFTC(dev);
    +	return (nvme_ctrlr_resume(ctrlr));
     }

    Modified: head/sys/dev/nvme/nvme_private.h
    ==============================================================================
    --- head/sys/dev/nvme/nvme_private.h	Tue Sep  3 14:55:19 2019	(r351746)
    +++ head/sys/dev/nvme/nvme_private.h	Tue Sep  3 15:26:11 2019	(r351747)
    @@ -556,4 +556,7 @@ void	nvme_notify_ns(struct nvme_controller *ctrlr, int
     void	nvme_ctrlr_intx_handler(void *arg);
     void	nvme_ctrlr_poll(struct nvme_controller *ctrlr);

    +int	nvme_ctrlr_suspend(struct nvme_controller *ctrlr);
    +int	nvme_ctrlr_resume(struct nvme_controller *ctrlr);
    +
     #endif /* __NVME_PRIVATE_H__ */




