DPDK patches and discussions
From: Taekyung Kim <kim.tae.kyung@navercorp.com>
To: "Xia, Chenbo" <chenbo.xia@intel.com>
Cc: "Pei, Andy" <andy.pei@intel.com>, "dev@dpdk.org" <dev@dpdk.org>,
	"stable@dpdk.org" <stable@dpdk.org>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
	"Wang, Xiao W" <xiao.w.wang@intel.com>
Subject: Re: [PATCH v3] vdpa/ifc: fix update_datapath error handling
Date: Tue, 8 Nov 2022 17:27:08 +0900	[thread overview]
Message-ID: <Y2oS3FdoVJqS9ghW@dev-tkkim-git-send-email-ncl.nfra.io> (raw)
In-Reply-To: <SN6PR11MB35048739D7B43169651F34229C3F9@SN6PR11MB3504.namprd11.prod.outlook.com>

On Tue, Nov 08, 2022 at 07:56:18AM +0000, Xia, Chenbo wrote:
> > -----Original Message-----
> > From: Pei, Andy <andy.pei@intel.com>
> > Sent: Tuesday, November 8, 2022 3:39 PM
> > To: Xia, Chenbo <chenbo.xia@intel.com>; Taekyung Kim
> > <kim.tae.kyung@navercorp.com>; dev@dpdk.org
> > Cc: stable@dpdk.org; maxime.coquelin@redhat.com; Wang, Xiao W
> > <xiao.w.wang@intel.com>
> > Subject: RE: [PATCH v3] vdpa/ifc: fix update_datapath error handling
> > 
> > Hi
> > 
> > See my reply inline.
> > 
> > > -----Original Message-----
> > > From: Xia, Chenbo <chenbo.xia@intel.com>
> > > Sent: Tuesday, November 8, 2022 9:47 AM
> > > To: Taekyung Kim <kim.tae.kyung@navercorp.com>; dev@dpdk.org
> > > Cc: stable@dpdk.org; maxime.coquelin@redhat.com; Wang, Xiao W
> > > <xiao.w.wang@intel.com>
> > > Subject: RE: [PATCH v3] vdpa/ifc: fix update_datapath error handling
> > >
> > > > -----Original Message-----
> > > > From: Taekyung Kim <kim.tae.kyung@navercorp.com>
> > > > Sent: Monday, November 7, 2022 5:00 PM
> > > > To: dev@dpdk.org
> > > > Cc: stable@dpdk.org; maxime.coquelin@redhat.com; Xia, Chenbo
> > > > <chenbo.xia@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> > > > kim.tae.kyung@navercorp.com
> > > > Subject: [PATCH v3] vdpa/ifc: fix update_datapath error handling
> > > >
> > > > Stop and return the error code when update_datapath fails.
> > > > update_datapath prepares resources for the vdpa device.
> > > > The driver should not perform any further actions if update_datapath
> > > > returns an error.
> > > >
> > > > Fixes: a3f8150eac6d ("net/ifcvf: add ifcvf vDPA driver")
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Taekyung Kim <kim.tae.kyung@navercorp.com>
> > > > ---
> > > > v3:
> > > > * Fix coding style
> > > >
> > > > v2:
> > > > * Revert the prepared resources before returning an error
> > > > * Rebase to 22.11 rc2
> > > > * Add fixes and cc for backport
> > > >
> > > > ---
> > > >  drivers/vdpa/ifc/ifcvf_vdpa.c | 26 ++++++++++++++++++++++----
> > > >  1 file changed, 22 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > > index 8dfd49336e..0396d49122 100644
> > > > --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > > +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > > @@ -1098,7 +1098,12 @@ ifcvf_dev_config(int vid)
> > > >  	internal = list->internal;
> > > >  	internal->vid = vid;
> > > >  	rte_atomic32_set(&internal->dev_attached, 1);
> > > > -	update_datapath(internal);
> > > > +	if (update_datapath(internal) < 0) {
> > > > +		DRV_LOG(ERR, "failed to update datapath for vDPA device %s",
> > > > +			vdev->device->name);
> > > > +		rte_atomic32_set(&internal->dev_attached, 0);
> > > > +		return -1;
> > > > +	}
> > > >
> > > >  	hw = &internal->hw;
> > > >  	for (i = 0; i < hw->nr_vring; i++) {
> > > > @@ -1146,7 +1151,12 @@ ifcvf_dev_close(int vid)
> > > >  		internal->sw_fallback_running = false;
> > > >  	} else {
> > > >  		rte_atomic32_set(&internal->dev_attached, 0);
> > > > -		update_datapath(internal);
> > > > +		if (update_datapath(internal) < 0) {
> > > > +			DRV_LOG(ERR, "failed to update datapath for vDPA device %s",
> > > > +				vdev->device->name);
> > > > +			internal->configured = 0;
> > > > +			return -1;
> > > > +		}
> > > >  	}
> > > >
> > > >  	internal->configured = 0;
> > > > @@ -1752,7 +1762,14 @@ ifcvf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> > > >  	}
> > > >
> > > >  	rte_atomic32_set(&internal->started, 1);
> > > > -	update_datapath(internal);
> > > > +	if (update_datapath(internal) < 0) {
> > > > +		DRV_LOG(ERR, "failed to update datapath %s", pci_dev->name);
> > > > +		rte_atomic32_set(&internal->started, 0);
> > > > +		pthread_mutex_lock(&internal_list_lock);
> > > > +		TAILQ_REMOVE(&internal_list, list, next);
> > > > +		pthread_mutex_unlock(&internal_list_lock);
> > > > +		goto error;
> > > > +	}
> > > >
> > 
> > Is it necessary to unregister vdpa device?
> 
> Good catch, yes it's needed.
> 
> Kim, please add the unregistration.
> 
> Thanks,
> Chenbo

Hi Andy and Chenbo,

Thanks for your comments.
I forgot to add `rte_vdpa_unregister_device(internal->vdev)`.
I will send a new patch soon.

By the way, it seems that the cleanup for `ifcvf_vfio_setup(internal)`
is also omitted in `ifcvf_pci_probe(...)`.
I will submit another commit that splits `error:` into `error2:` and `error1:`,
which call `rte_pci_unmap_device(...)` and `rte_vfio_container_destroy(...)`
respectively.

Thanks,
Taekyung

> 
> > 
> > > >  	rte_kvargs_free(kvlist);
> > > >  	return 0;
> > > > @@ -1781,7 +1798,8 @@ ifcvf_pci_remove(struct rte_pci_device *pci_dev)
> > > >
> > > >  	internal = list->internal;
> > > >  	rte_atomic32_set(&internal->started, 0);
> > > > -	update_datapath(internal);
> > > > +	if (update_datapath(internal) < 0)
> > > > +		DRV_LOG(ERR, "failed to update datapath %s", pci_dev->name);
> > > >
> > > >  	rte_pci_unmap_device(internal->pdev);
> > > >  	rte_vfio_container_destroy(internal->vfio_container_fd);
> > > > --
> > > > 2.34.1
> > >
> > > Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>


Thread overview: 24+ messages
2022-10-18  7:22 [PATCH] " Taekyung Kim
2022-11-02  9:12 ` Maxime Coquelin
2022-11-07  5:34   ` [PATCH v2] " Taekyung Kim
2022-11-07  8:59     ` [PATCH v3] " Taekyung Kim
2022-11-08  1:46       ` Xia, Chenbo
2022-11-08  7:30         ` Taekyung Kim
2022-11-08  7:39         ` Pei, Andy
2022-11-08  7:56           ` Xia, Chenbo
2022-11-08  8:27             ` Taekyung Kim [this message]
2022-11-08  8:56             ` [PATCH v4] " Taekyung Kim
2022-11-08 13:49               ` Maxime Coquelin
2022-11-09 10:45                 ` Taekyung Kim
2022-11-09  2:39               ` Pei, Andy
2022-11-09 10:47                 ` Taekyung Kim
2022-11-10  1:53               ` Xia, Chenbo
2022-11-10  4:02                 ` Taekyung Kim
2022-11-10  9:20                   ` Maxime Coquelin
2022-11-10  9:34                     ` Ali Alnubani
2022-11-10  9:38                       ` David Marchand
2022-11-10  9:45                         ` Taekyung Kim
2022-11-10  9:42                       ` Maxime Coquelin
2022-11-10  4:09                 ` [PATCH v5] " Taekyung Kim
2022-11-10  6:12                   ` Xia, Chenbo
2022-11-10  7:02                   ` Xia, Chenbo
