DPDK patches and discussions
From: "Xia, Chenbo" <chenbo.xia@intel.com>
To: "Wang, YuanX" <yuanx.wang@intel.com>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Hu, Jiayu" <jiayu.hu@intel.com>,
	"He, Xingguang" <xingguang.he@intel.com>,
	"Jiang, Cheng1" <cheng1.jiang@intel.com>,
	"Ling, WeiX" <weix.ling@intel.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH v2] net/vhost: fix deadlock on vring state change
Date: Fri, 1 Jul 2022 12:31:48 +0000	[thread overview]
Message-ID: <SN6PR11MB3504DFC54002569E127115849CBD9@SN6PR11MB3504.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20220627055125.1541652-1-yuanx.wang@intel.com>

> -----Original Message-----
> From: Wang, YuanX <yuanx.wang@intel.com>
> Sent: Monday, June 27, 2022 1:51 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>;
> dev@dpdk.org
> Cc: Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang <xingguang.he@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Ling, WeiX <weix.ling@intel.com>;
> Wang, YuanX <yuanx.wang@intel.com>; stable@dpdk.org
> Subject: [PATCH v2] net/vhost: fix deadlock on vring state change
> 
> If the vring state changes after the PMD starts working, the vring
> notification is issued with the vring lock held. The notification
> handler calls update_queuing_status(), which waits for the PMD to
> finish accessing the vring, while the PMD is in turn waiting for the
> vring to be unlocked, causing a deadlock.
> 
> Actually, update_queuing_status() only needs to wait when destroying
> or stopping the device, not in the other cases.
> 
> This patch fixes the issue by adding a flag that controls whether
> to wait.
> 
> Fixes: 1ce3c7fe149f ("net/vhost: emulate device start/stop behavior")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> ---
> V2: rewrite the commit log.
> ---
>  drivers/net/vhost/rte_eth_vhost.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index d75d256040..7e512d94bf 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -741,7 +741,7 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
>  }
> 
>  static void
> -update_queuing_status(struct rte_eth_dev *dev)
> +update_queuing_status(struct rte_eth_dev *dev, bool wait_queuing)
>  {
>  	struct pmd_internal *internal = dev->data->dev_private;
>  	struct vhost_queue *vq;
> @@ -767,7 +767,7 @@ update_queuing_status(struct rte_eth_dev *dev)
>  			rte_atomic32_set(&vq->allow_queuing, 1);
>  		else
>  			rte_atomic32_set(&vq->allow_queuing, 0);
> -		while (rte_atomic32_read(&vq->while_queuing))
> +		while (wait_queuing && rte_atomic32_read(&vq->while_queuing))
>  			rte_pause();
>  	}
> 
> @@ -779,7 +779,7 @@ update_queuing_status(struct rte_eth_dev *dev)
>  			rte_atomic32_set(&vq->allow_queuing, 1);
>  		else
>  			rte_atomic32_set(&vq->allow_queuing, 0);
> -		while (rte_atomic32_read(&vq->while_queuing))
> +		while (wait_queuing && rte_atomic32_read(&vq->while_queuing))
>  			rte_pause();
>  	}
>  }
> @@ -868,7 +868,7 @@ new_device(int vid)
>  	vhost_dev_csum_configure(eth_dev);
> 
>  	rte_atomic32_set(&internal->dev_attached, 1);
> -	update_queuing_status(eth_dev);
> +	update_queuing_status(eth_dev, false);
> 
>  	VHOST_LOG(INFO, "Vhost device %d created\n", vid);
> 
> @@ -898,7 +898,7 @@ destroy_device(int vid)
>  	internal = eth_dev->data->dev_private;
> 
>  	rte_atomic32_set(&internal->dev_attached, 0);
> -	update_queuing_status(eth_dev);
> +	update_queuing_status(eth_dev, true);
> 
>  	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
> 
> @@ -1008,7 +1008,7 @@ vring_state_changed(int vid, uint16_t vring, int enable)
>  	state->max_vring = RTE_MAX(vring, state->max_vring);
>  	rte_spinlock_unlock(&state->lock);
> 
> -	update_queuing_status(eth_dev);
> +	update_queuing_status(eth_dev, false);
> 
>  	VHOST_LOG(INFO, "vring%u is %s\n",
>  			vring, enable ? "enabled" : "disabled");
> @@ -1197,7 +1197,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
>  	}
> 
>  	rte_atomic32_set(&internal->started, 1);
> -	update_queuing_status(eth_dev);
> +	update_queuing_status(eth_dev, false);
> 
>  	return 0;
>  }
> @@ -1209,7 +1209,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
> 
>  	dev->data->dev_started = 0;
>  	rte_atomic32_set(&internal->started, 0);
> -	update_queuing_status(dev);
> +	update_queuing_status(dev, true);
> 
>  	return 0;
>  }
> --
> 2.25.1

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>


Thread overview: 5+ messages
2022-06-01 14:25 [PATCH] net/vhost: add flag to control wait queuing Yuan Wang
2022-06-02  8:32 ` Ling, WeiX
2022-06-27  5:51 ` [PATCH v2] net/vhost: fix deadlock on vring state change Yuan Wang
2022-07-01 12:31   ` Xia, Chenbo [this message]
2022-07-01 13:58   ` Maxime Coquelin
