* [PATCH] net/vhost: fix access to freed memory
@ 2022-03-11 16:35 Yuan Wang
2022-03-14 8:22 ` Ling, WeiX
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Yuan Wang @ 2022-03-11 16:35 UTC (permalink / raw)
To: maxime.coquelin, chenbo.xia; +Cc: dev, jiayu.hu, weix.ling, yuanx.wang
This patch fixes a heap-use-after-free reported by ASan.
It is possible for rte_vhost_dequeue_burst() to access a vq
that has been freed when numa_realloc() gets called while the
device is running. The control plane takes vq->access_lock to
protect the vq from the data plane. Unfortunately, taking the
lock fails at the moment the vq is freed, allowing
rte_vhost_dequeue_burst() to access the fields of the vq, which
triggers a heap-use-after-free error.
With multiple queues, the vhost PMD can access queues that are
not yet ready as soon as the first queue becomes ready, which makes
no sense and also allows numa_realloc() and rte_vhost_dequeue_burst()
to access the vq concurrently. By controlling vq->allow_queuing we
can make the PMD access only the queues that are ready.
Fixes: 1ce3c7fe149 ("net/vhost: emulate device start/stop behavior")
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
drivers/net/vhost/rte_eth_vhost.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..8a6595504a 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -720,6 +720,7 @@ update_queuing_status(struct rte_eth_dev *dev)
{
struct pmd_internal *internal = dev->data->dev_private;
struct vhost_queue *vq;
+ struct rte_vhost_vring_state *state;
unsigned int i;
int allow_queuing = 1;
@@ -730,12 +731,17 @@ update_queuing_status(struct rte_eth_dev *dev)
rte_atomic32_read(&internal->dev_attached) == 0)
allow_queuing = 0;
+ state = vring_states[dev->data->port_id];
+
/* Wait until rx/tx_pkt_burst stops accessing vhost device */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
vq = dev->data->rx_queues[i];
if (vq == NULL)
continue;
- rte_atomic32_set(&vq->allow_queuing, allow_queuing);
+ if (allow_queuing && state->cur[vq->virtqueue_id])
+ rte_atomic32_set(&vq->allow_queuing, 1);
+ else
+ rte_atomic32_set(&vq->allow_queuing, 0);
while (rte_atomic32_read(&vq->while_queuing))
rte_pause();
}
@@ -744,7 +750,10 @@ update_queuing_status(struct rte_eth_dev *dev)
vq = dev->data->tx_queues[i];
if (vq == NULL)
continue;
- rte_atomic32_set(&vq->allow_queuing, allow_queuing);
+ if (allow_queuing && state->cur[vq->virtqueue_id])
+ rte_atomic32_set(&vq->allow_queuing, 1);
+ else
+ rte_atomic32_set(&vq->allow_queuing, 0);
while (rte_atomic32_read(&vq->while_queuing))
rte_pause();
}
@@ -967,6 +976,8 @@ vring_state_changed(int vid, uint16_t vring, int enable)
state->max_vring = RTE_MAX(vring, state->max_vring);
rte_spinlock_unlock(&state->lock);
+ update_queuing_status(eth_dev);
+
VHOST_LOG(INFO, "vring%u is %s\n",
vring, enable ? "enabled" : "disabled");
--
2.25.1
* RE: [PATCH] net/vhost: fix access to freed memory
2022-03-11 16:35 [PATCH] net/vhost: fix access to freed memory Yuan Wang
@ 2022-03-14 8:22 ` Ling, WeiX
2022-05-05 14:09 ` Maxime Coquelin
2022-05-05 19:53 ` Maxime Coquelin
2 siblings, 0 replies; 4+ messages in thread
From: Ling, WeiX @ 2022-03-14 8:22 UTC (permalink / raw)
To: Wang, YuanX, maxime.coquelin, Xia, Chenbo; +Cc: dev, Hu, Jiayu
> -----Original Message-----
> From: Wang, YuanX <yuanx.wang@intel.com>
> Sent: Saturday, March 12, 2022 12:35 AM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ling, WeiX
> <weix.ling@intel.com>; Wang, YuanX <yuanx.wang@intel.com>
> Subject: [PATCH] net/vhost: fix access to freed memory
>
> This patch fixes a heap-use-after-free reported by ASan.
>
> It is possible for rte_vhost_dequeue_burst() to access a vq that has
> been freed when numa_realloc() gets called while the device is running.
> The control plane takes vq->access_lock to protect the vq from the
> data plane. Unfortunately, taking the lock fails at the moment the vq
> is freed, allowing rte_vhost_dequeue_burst() to access the fields of
> the vq, which triggers a heap-use-after-free error.
>
> With multiple queues, the vhost PMD can access queues that are not yet
> ready as soon as the first queue becomes ready, which makes no sense
> and also allows numa_realloc() and rte_vhost_dequeue_burst() to access
> the vq concurrently. By controlling vq->allow_queuing we can make the
> PMD access only the queues that are ready.
>
> Fixes: 1ce3c7fe149 ("net/vhost: emulate device start/stop behavior")
>
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> ---
Tested-by: Wei Ling <weix.ling@intel.com>
* Re: [PATCH] net/vhost: fix access to freed memory
2022-03-11 16:35 [PATCH] net/vhost: fix access to freed memory Yuan Wang
2022-03-14 8:22 ` Ling, WeiX
@ 2022-05-05 14:09 ` Maxime Coquelin
2022-05-05 19:53 ` Maxime Coquelin
2 siblings, 0 replies; 4+ messages in thread
From: Maxime Coquelin @ 2022-05-05 14:09 UTC (permalink / raw)
To: Yuan Wang, chenbo.xia; +Cc: dev, jiayu.hu, weix.ling
Hi Yuan,
On 3/11/22 17:35, Yuan Wang wrote:
> This patch fixes a heap-use-after-free reported by ASan.
>
> It is possible for rte_vhost_dequeue_burst() to access a vq that has
> been freed when numa_realloc() gets called while the device is running.
> The control plane takes vq->access_lock to protect the vq from the
> data plane. Unfortunately, taking the lock fails at the moment the vq
> is freed, allowing rte_vhost_dequeue_burst() to access the fields of
> the vq, which triggers a heap-use-after-free error.
>
> With multiple queues, the vhost PMD can access queues that are not yet
> ready as soon as the first queue becomes ready, which makes no sense
> and also allows numa_realloc() and rte_vhost_dequeue_burst() to access
> the vq concurrently. By controlling vq->allow_queuing we can make the
> PMD access only the queues that are ready.
>
> Fixes: 1ce3c7fe149 ("net/vhost: emulate device start/stop behavior")
>
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> ---
> drivers/net/vhost/rte_eth_vhost.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
It is indeed better for the Vhost PMD to not access virtqueues that
aren't ready.
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [PATCH] net/vhost: fix access to freed memory
2022-03-11 16:35 [PATCH] net/vhost: fix access to freed memory Yuan Wang
2022-03-14 8:22 ` Ling, WeiX
2022-05-05 14:09 ` Maxime Coquelin
@ 2022-05-05 19:53 ` Maxime Coquelin
2 siblings, 0 replies; 4+ messages in thread
From: Maxime Coquelin @ 2022-05-05 19:53 UTC (permalink / raw)
To: Yuan Wang, chenbo.xia; +Cc: dev, jiayu.hu, weix.ling
On 3/11/22 17:35, Yuan Wang wrote:
> This patch fixes a heap-use-after-free reported by ASan.
>
> It is possible for rte_vhost_dequeue_burst() to access a vq that has
> been freed when numa_realloc() gets called while the device is running.
> The control plane takes vq->access_lock to protect the vq from the
> data plane. Unfortunately, taking the lock fails at the moment the vq
> is freed, allowing rte_vhost_dequeue_burst() to access the fields of
> the vq, which triggers a heap-use-after-free error.
>
> With multiple queues, the vhost PMD can access queues that are not yet
> ready as soon as the first queue becomes ready, which makes no sense
> and also allows numa_realloc() and rte_vhost_dequeue_burst() to access
> the vq concurrently. By controlling vq->allow_queuing we can make the
> PMD access only the queues that are ready.
>
> Fixes: 1ce3c7fe149 ("net/vhost: emulate device start/stop behavior")
>
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> ---
> drivers/net/vhost/rte_eth_vhost.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
Applied to dpdk-next-virtio/main.
Thanks,
Maxime