DPDK patches and discussions
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, Maxime Coquelin <maxime.coquelin@redhat.com>,
	Chenbo Xia <chenbo.xia@intel.com>
Subject: [PATCH v3 1/4] vhost: fix vq use after free on NUMA reallocation
Date: Mon, 25 Jul 2022 22:32:03 +0200	[thread overview]
Message-ID: <20220725203206.427083-2-david.marchand@redhat.com> (raw)
In-Reply-To: <20220725203206.427083-1-david.marchand@redhat.com>

translate_ring_addresses() (via numa_realloc()) may reallocate the virtio
device and virtio queue objects, freeing the old ones.
The virtqueue pointer must be refreshed before its lock is accessed again.

Fixes: 04c27cb673b9 ("vhost: fix unsafe vring addresses modifications")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 lib/vhost/vhost_user.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 4ad28bac45..91d40e32fc 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -2596,6 +2596,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev,
 			if (is_vring_iotlb(dev, vq, imsg)) {
 				rte_spinlock_lock(&vq->access_lock);
 				*pdev = dev = translate_ring_addresses(dev, i);
+				vq = dev->virtqueue[i];
 				rte_spinlock_unlock(&vq->access_lock);
 			}
 		}
-- 
2.36.1
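
For illustration, a minimal single-threaded sketch of the use-after-free
pattern addressed above and of the fix. All names here (struct device,
struct queue, maybe_realloc_queue, update_ring) are hypothetical and the
plain `locked` flag only stands in for rte_spinlock_t; this is not the
actual vhost code.

#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct queue {
	int locked;              /* stands in for rte_spinlock_t */
	unsigned long ring_addr;
};

struct device {
	struct queue *vq[1];
};

/*
 * May free vq[idx] and replace it with a fresh allocation (as a NUMA
 * reallocation would when moving a virtqueue to another node), carrying
 * over the old contents, including the lock state.
 */
static struct device *
maybe_realloc_queue(struct device *dev, int idx)
{
	struct queue *nvq = malloc(sizeof(*nvq));

	if (nvq == NULL)
		return dev;
	memcpy(nvq, dev->vq[idx], sizeof(*nvq));
	free(dev->vq[idx]);
	dev->vq[idx] = nvq;
	return dev;
}

static void
update_ring(struct device *dev, int idx)
{
	struct queue *vq = dev->vq[idx];

	vq->locked = 1;                      /* "rte_spinlock_lock()" */
	dev = maybe_realloc_queue(dev, idx); /* may free the old vq */
	/* The fix: refresh the local pointer before touching the lock again;
	 * unlocking through the stale vq would be a use after free. */
	vq = dev->vq[idx];
	vq->locked = 0;                      /* "rte_spinlock_unlock()" */
}

int
main(void)
{
	struct device dev;

	dev.vq[0] = calloc(1, sizeof(*dev.vq[0]));
	assert(dev.vq[0] != NULL);
	update_ring(&dev, 0);
	assert(dev.vq[0]->locked == 0);
	free(dev.vq[0]);
	return 0;
}

The essential point matches the one-line change in the hunk above: after any
call that may reallocate the queue, the local pointer is re-read from the
device before the lock is released.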



Thread overview: 25+ messages
2022-07-22 13:53 [PATCH 1/2] vhost: keep a reference to virtqueue index David Marchand
2022-07-22 13:53 ` [PATCH 2/2] vhost: stop using mempool for IOTLB David Marchand
2022-07-22 14:00 ` [PATCH 1/2] vhost: keep a reference to virtqueue index David Marchand
2022-07-22 15:13 ` David Marchand
2022-07-25  7:11 ` [PATCH v2 " David Marchand
2022-07-25  7:11   ` [PATCH v2 2/2] vhost: stop using mempool for IOTLB cache David Marchand
2022-07-25 20:32 ` [PATCH v3 0/4] vHost IOTLB cache rework David Marchand
2022-07-25 20:32   ` David Marchand [this message]
2022-07-26  7:55     ` [PATCH v3 1/4] vhost: fix vq use after free on NUMA reallocation Maxime Coquelin
2022-09-13 15:02       ` Maxime Coquelin
2022-09-14  1:05         ` Xia, Chenbo
2022-09-14  7:14           ` Maxime Coquelin
2022-09-14  9:15             ` Thomas Monjalon
2022-09-14  9:34               ` Maxime Coquelin
2022-09-14  9:45                 ` Thomas Monjalon
2022-09-14 11:48                   ` Maxime Coquelin
2022-07-25 20:32   ` [PATCH v3 2/4] vhost: make NUMA reallocation code more robust David Marchand
2022-07-26  8:39     ` Maxime Coquelin
2022-07-25 20:32   ` [PATCH v3 3/4] vhost: keep a reference to virtqueue index David Marchand
2022-07-26  8:52     ` Maxime Coquelin
2022-07-26 10:00       ` David Marchand
2022-07-25 20:32   ` [PATCH v3 4/4] vhost: stop using mempool for IOTLB cache David Marchand
2022-07-26  9:26     ` Maxime Coquelin
2023-02-17  7:42       ` Yuan, DukaiX
2022-09-15 16:02   ` [PATCH v3 0/4] vHost IOTLB cache rework Thomas Monjalon
