DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Hao Chen <chenh@yusur.tech>, dev@dpdk.org
Cc: david.marchand@redhat.com, zy@yusur.tech, huangml@yusur.tech,
	stable@dpdk.org, Chenbo Xia <chenbox@nvidia.com>,
	Xiao Wang <xiao.w.wang@intel.com>
Subject: Re: [PATCH v2] vhost: fix deadlock during software live migration of VDPA in a nested virtualization environment
Date: Mon, 5 Feb 2024 11:00:50 +0100	[thread overview]
Message-ID: <1df1c558-c101-405a-bdc2-cf342a3771ce@redhat.com> (raw)
In-Reply-To: <20240122032746.2749586-1-chenh@yusur.tech>

Hi Hao,

On 1/22/24 04:27, Hao Chen wrote:
> In a nested virtualization environment, running dpdk-vdpa in QEMU-L1 for
> software live migration results in a deadlock between the dpdk-vdpa and
> QEMU-L2 processes.
> The call chain is:
> 'rte_vdpa_relay_vring_used' ->
> '__vhost_iova_to_vva' -> on an IOTLB miss:
> 'vhost_user_iotlb_rd_unlock(vq)' ->
> 'vhost_user_iotlb_miss' -> sends the 'VHOST_USER_SLAVE_IOTLB_MSG' vhost
> message to QEMU-L2's vdpa socket,
> then calls 'vhost_user_iotlb_rd_lock(vq)' to re-take the `iotlb_lock`
> read lock. Since 'rte_vdpa_relay_vring_used' never took the read lock
> itself, there is no place where this read lock gets released.
> 
> QEMU-L2 receives 'VHOST_USER_SLAVE_IOTLB_MSG' and calls
> 'vhost_user_send_device_iotlb_msg' to send 'VHOST_USER_IOTLB_MSG'
> messages back to dpdk-vdpa.
> dpdk-vdpa then calls 'vhost_user_iotlb_msg' ->
> 'vhost_user_iotlb_cache_insert', which tries to take the `iotlb_lock`
> write lock; since the read lock has never been released, it blocks
> there forever.
> 
> This patch adds the missing lock and unlock calls to fix the deadlock.
> 
> Fixes: b13ad2decc83 ("vhost: provide helpers for virtio ring relay")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Hao Chen <chenh@yusur.tech>
> ---
> Changes v1 ... v2:
> - protect the vhost_alloc_copy_ind_table() call too.
> 
>   lib/vhost/vdpa.c | 11 +++++++++--
>   1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
> index 9776fc07a9..a1dd5a753b 100644
> --- a/lib/vhost/vdpa.c
> +++ b/lib/vhost/vdpa.c
> @@ -19,6 +19,7 @@
>   #include "rte_vdpa.h"
>   #include "vdpa_driver.h"
>   #include "vhost.h"
> +#include "iotlb.h"
>   
>   /** Double linked list of vDPA devices. */
>   TAILQ_HEAD(vdpa_device_list, rte_vdpa_device);
> @@ -147,7 +148,6 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
>   
>   int
>   rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
> -	__rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */
>   {
>   	struct virtio_net *dev = get_device(vid);
>   	uint16_t idx, idx_m, desc_id;
> @@ -193,17 +193,21 @@ rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
>   			if (unlikely(nr_descs > vq->size))
>   				return -1;
>   
> +			vhost_user_iotlb_rd_lock(vq);
>   			desc_ring = (struct vring_desc *)(uintptr_t)
>   				vhost_iova_to_vva(dev, vq,
>   						vq->desc[desc_id].addr, &dlen,
>   						VHOST_ACCESS_RO);
> +			vhost_user_iotlb_rd_unlock(vq);
>   			if (unlikely(!desc_ring))
>   				return -1;
>   
>   			if (unlikely(dlen < vq->desc[desc_id].len)) {
> +				vhost_user_iotlb_rd_lock(vq);
>   				idesc = vhost_alloc_copy_ind_table(dev, vq,
>   						vq->desc[desc_id].addr,
>   						vq->desc[desc_id].len);
> +				vhost_user_iotlb_rd_unlock(vq);
>   				if (unlikely(!idesc))
>   					return -1;
>   
> @@ -220,9 +224,12 @@ rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
>   			if (unlikely(nr_descs-- == 0))
>   				goto fail;
>   			desc = desc_ring[desc_id];
> -			if (desc.flags & VRING_DESC_F_WRITE)
> +			if (desc.flags & VRING_DESC_F_WRITE) {
> +				vhost_user_iotlb_rd_lock(vq);
>   				vhost_log_write_iova(dev, vq, desc.addr,
>   						     desc.len);
> +				vhost_user_iotlb_rd_unlock(vq);
> +			}
>   			desc_id = desc.next;
>   		} while (desc.flags & VRING_DESC_F_NEXT);
>   

Thanks for the fix, looks good to me.
There's one minor checkpatch issue I'll fix while applying.

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime



Thread overview: 6+ messages
2024-01-18 10:33 [PATCH] " Hao Chen
2024-01-18 14:46 ` David Marchand
2024-01-19  6:36   ` Hao Chen
2024-01-22  3:27 ` [PATCH v2] " Hao Chen
2024-02-05 10:00   ` Maxime Coquelin [this message]
2024-02-06 14:57   ` Maxime Coquelin
