* [dpdk-dev] [PATCH 1/7] vhost: fix missing memory table NUMA realloc
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 2/7] vhost: fix missing guest pages " Maxime Coquelin
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin, stable
When the guest allocates virtqueues on a different NUMA node
than the one the Vhost metadata are allocated on, both the
Vhost device struct and the virtqueue structs are reallocated.
However, reallocating the Vhost memory table was missing, which
likely causes at least one cross-NUMA access for every burst
of packets.
This patch reallocates this table on the same NUMA node as the
other metadata.
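For context, a minimal standalone sketch of the query-and-move
pattern the hunk below applies to dev->mem. The helper name
realloc_on_guest_node() is illustrative only; get_mempolicy()
and the rte_malloc API are the real interfaces:
#include <string.h>
#include <numaif.h>      /* get_mempolicy() */
#include <rte_malloc.h>  /* rte_malloc_socket(), rte_free() */
/* Return a copy of 'old' ('size' bytes) allocated on the NUMA node
 * backing 'guest_addr', or 'old' itself if anything fails. */
static void *
realloc_on_guest_node(void *old, size_t size, void *guest_addr)
{
	void *new;
	int node;
	/* MPOL_F_NODE | MPOL_F_ADDR: query the node backing guest_addr */
	if (get_mempolicy(&node, NULL, 0, guest_addr,
			MPOL_F_NODE | MPOL_F_ADDR) != 0)
		return old;
	new = rte_malloc_socket(NULL, size, 0, node);
	if (new == NULL)
		return old;	/* keep the old, cross-NUMA, allocation */
	memcpy(new, old, size);
	rte_free(old);
	return new;
}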
Fixes: 552e8fd3d2b4 ("vhost: simplify memory regions handling")
Cc: stable@dpdk.org
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost_user.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 8f0eba6412..031e3bfa2f 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -557,6 +557,9 @@ numa_realloc(struct virtio_net *dev, int index)
goto out;
}
if (oldnode != newnode) {
+ struct rte_vhost_memory *old_mem;
+ ssize_t mem_size;
+
VHOST_LOG_CONFIG(INFO,
"reallocate dev from %d to %d node\n",
oldnode, newnode);
@@ -568,6 +571,18 @@ numa_realloc(struct virtio_net *dev, int index)
memcpy(dev, old_dev, sizeof(*dev));
rte_free(old_dev);
+
+ mem_size = sizeof(struct rte_vhost_memory) +
+ sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
+ old_mem = dev->mem;
+ dev->mem = rte_malloc_socket(NULL, mem_size, 0, newnode);
+ if (!dev->mem) {
+ dev->mem = old_mem;
+ goto out;
+ }
+
+ memcpy(dev->mem, old_mem, mem_size);
+ rte_free(old_mem);
}
out:
--
2.31.1
* [dpdk-dev] [PATCH 2/7] vhost: fix missing guest pages table NUMA realloc
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 1/7] vhost: fix missing memory table NUMA realloc Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 3/7] vhost: fix missing cache logging " Maxime Coquelin
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin, stable
When the guest allocates virtqueues on a different NUMA node
than the one the Vhost metadata are allocated on, both the
Vhost device struct and the virtqueue structs are reallocated.
However, reallocating the guest pages table was missing, which
likely causes at least one cross-NUMA access for every burst
of packets.
This patch reallocates this table on the same NUMA node as the
other metadata.
Fixes: e246896178e6 ("vhost: get guest/host physical address mappings")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost_user.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 031e3bfa2f..cbfdf1b4d8 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -558,7 +558,8 @@ numa_realloc(struct virtio_net *dev, int index)
}
if (oldnode != newnode) {
struct rte_vhost_memory *old_mem;
- ssize_t mem_size;
+ struct guest_page *old_gp;
+ ssize_t mem_size, gp_size;
VHOST_LOG_CONFIG(INFO,
"reallocate dev from %d to %d node\n",
@@ -583,6 +584,17 @@ numa_realloc(struct virtio_net *dev, int index)
memcpy(dev->mem, old_mem, mem_size);
rte_free(old_mem);
+
+ gp_size = dev->max_guest_pages * sizeof(*dev->guest_pages);
+ old_gp = dev->guest_pages;
+ dev->guest_pages = rte_malloc_socket(NULL, gp_size, RTE_CACHE_LINE_SIZE, newnode);
+ if (!dev->guest_pages) {
+ dev->guest_pages = old_gp;
+ goto out;
+ }
+
+ memcpy(dev->guest_pages, old_gp, gp_size);
+ rte_free(old_gp);
}
out:
--
2.31.1
* [dpdk-dev] [PATCH 3/7] vhost: fix missing cache logging NUMA realloc
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 1/7] vhost: fix missing memory table NUMA realloc Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 2/7] vhost: fix missing guest pages " Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 4/7] vhost: fix NUMA reallocation with multiqueue Maxime Coquelin
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin
When the guest allocates virtqueues on a different NUMA node
than the one the Vhost metadata are allocated on, both the
Vhost device struct and the virtqueue structs are reallocated.
However, the log cache was not reallocated on the new NUMA
node. This patch fixes this by reallocating it if it has
already been allocated, which means a live migration is
on-going.
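Note that rte_realloc_socket() follows realloc() semantics: on
failure it returns NULL and leaves the original buffer untouched,
which is why the hunk below only updates the pointer on success.
A condensed sketch of that pattern, variables as in the hunk:
struct log_cache_entry *log_cache;
/* Move the cache next to the virtqueue; contents are copied and
 * the old buffer freed on success, left intact on failure. */
log_cache = rte_realloc_socket(vq->log_cache,
		sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR,
		0, newnode);
if (log_cache)
	vq->log_cache = log_cache;	/* else keep the remote cache */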
Fixes: 1818a63147fb ("vhost: move dirty logging cache out of virtqueue")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost_user.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index cbfdf1b4d8..0e9e26ebe0 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -545,6 +545,16 @@ numa_realloc(struct virtio_net *dev, int index)
vq->batch_copy_elems = new_batch_copy_elems;
}
+ if (vq->log_cache) {
+ struct log_cache_entry *log_cache;
+
+ log_cache = rte_realloc_socket(vq->log_cache,
+ sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR,
+ 0, newnode);
+ if (log_cache)
+ vq->log_cache = log_cache;
+ }
+
rte_free(old_vq);
}
--
2.31.1
* [dpdk-dev] [PATCH 4/7] vhost: fix NUMA reallocation with multiqueue
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
` (2 preceding siblings ...)
2021-06-17 10:58 ` [dpdk-dev] [PATCH 3/7] vhost: fix missing cache logging " Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 5/7] vhost: improve NUMA reallocation Maxime Coquelin
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin, stable
Since the Vhost-user device initialization has been reworked,
enabling the application to start using the device as soon as
the first queue pair is ready, NUMA reallocation no longer
happened on queue pairs other than the first one because
numa_realloc() returned early if the device was running.
This patch fixes this issue by only preventing the device
metadata from being reallocated while the device is running.
For the virtqueues, a vring state change notification is sent
to notify the application that the ring is disabled. Since the
callback is supposed to be blocking, it is safe to reallocate
the virtqueue afterwards.
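A sketch of the resulting ordering, condensed from the hunks
below (the callback behind vhost_user_notify_queue_state() is
assumed to block until the application stops using the ring):
/* 1. Make the application stop processing the virtqueue: the
 *    vring_state_changed() callback is blocking, so once it
 *    returns no datapath thread touches the old vq anymore. */
if (vq->ready) {
	vq->ready = false;
	vhost_user_notify_queue_state(dev, index, 0);	/* disabled */
}
/* 2. Only now is it safe to move the virtqueue to 'newnode'. */
vq = rte_malloc_socket(NULL, sizeof(*vq), 0, newnode);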
Fixes: d0fcc38f5fa4 ("vhost: improve device readiness notifications")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost_user.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 0e9e26ebe0..6e7b327ef8 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -488,9 +488,6 @@ numa_realloc(struct virtio_net *dev, int index)
struct batch_copy_elem *new_batch_copy_elems;
int ret;
- if (dev->flags & VIRTIO_DEV_RUNNING)
- return dev;
-
old_dev = dev;
vq = old_vq = dev->virtqueue[index];
@@ -506,6 +503,11 @@ numa_realloc(struct virtio_net *dev, int index)
return dev;
}
if (oldnode != newnode) {
+ if (vq->ready) {
+ vq->ready = false;
+ vhost_user_notify_queue_state(dev, index, 0);
+ }
+
VHOST_LOG_CONFIG(INFO,
"reallocate vq from %d to %d node\n", oldnode, newnode);
vq = rte_malloc_socket(NULL, sizeof(*vq), 0, newnode);
@@ -558,6 +560,9 @@ numa_realloc(struct virtio_net *dev, int index)
rte_free(old_vq);
}
+ if (dev->flags & VIRTIO_DEV_RUNNING)
+ goto out;
+
/* check if we need to reallocate dev */
ret = get_mempolicy(&oldnode, NULL, 0, old_dev,
MPOL_F_NODE | MPOL_F_ADDR);
--
2.31.1
* [dpdk-dev] [PATCH 5/7] vhost: improve NUMA reallocation
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
` (3 preceding siblings ...)
2021-06-17 10:58 ` [dpdk-dev] [PATCH 4/7] vhost: fix NUMA reallocation with multiqueue Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 6/7] vhost: allocate all data on same node as virtqueue Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 7/7] vhost: convert inflight data to DPDK allocation API Maxime Coquelin
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin
This patch improves the numa_realloc() function by making use
of rte_realloc_socket(), which takes care of the memory copy
and freeing of the old data.
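For comparison, a sketch of both patterns; the helper names are
illustrative, and the semantics assumed are those of realloc():
on failure rte_realloc_socket() returns NULL and the original
buffer stays valid:
#include <string.h>
#include <rte_malloc.h>
/* Before: explicit allocate / copy / free on every move. */
static void *
move_manual(void *old, size_t size, int node)
{
	void *new = rte_malloc_socket(NULL, size, 0, node);
	if (new == NULL)
		return old;	/* keep the old buffer on failure */
	memcpy(new, old, size);
	rte_free(old);
	return new;
}
/* After: copy and free are handled internally, and if 'old'
 * already sits on 'node' no move may be needed at all. */
static void *
move_realloc(void *old, size_t size, int node)
{
	void *new = rte_realloc_socket(old, size, 0, node);
	return new == NULL ? old : new;
}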
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost_user.c | 190 +++++++++++++++++------------------------
1 file changed, 80 insertions(+), 110 deletions(-)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 6e7b327ef8..50830f0870 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -480,144 +480,114 @@ vhost_user_set_vring_num(struct virtio_net **pdev,
static struct virtio_net*
numa_realloc(struct virtio_net *dev, int index)
{
- int oldnode, newnode;
+ int node;
struct virtio_net *old_dev;
- struct vhost_virtqueue *old_vq, *vq;
- struct vring_used_elem *new_shadow_used_split;
- struct vring_used_elem_packed *new_shadow_used_packed;
- struct batch_copy_elem *new_batch_copy_elems;
+ struct vhost_virtqueue *vq;
+ struct batch_copy_elem *bce;
+ struct guest_page *gp;
+ struct rte_vhost_memory *mem;
+ size_t mem_size;
int ret;
old_dev = dev;
- vq = old_vq = dev->virtqueue[index];
-
- ret = get_mempolicy(&newnode, NULL, 0, old_vq->desc,
- MPOL_F_NODE | MPOL_F_ADDR);
+ vq = dev->virtqueue[index];
- /* check if we need to reallocate vq */
- ret |= get_mempolicy(&oldnode, NULL, 0, old_vq,
- MPOL_F_NODE | MPOL_F_ADDR);
+ ret = get_mempolicy(&node, NULL, 0, vq->desc, MPOL_F_NODE | MPOL_F_ADDR);
if (ret) {
- VHOST_LOG_CONFIG(ERR,
- "Unable to get vq numa information.\n");
+ VHOST_LOG_CONFIG(ERR, "Unable to get virtqueue %d numa information.\n", index);
return dev;
}
- if (oldnode != newnode) {
- if (vq->ready) {
- vq->ready = false;
- vhost_user_notify_queue_state(dev, index, 0);
- }
- VHOST_LOG_CONFIG(INFO,
- "reallocate vq from %d to %d node\n", oldnode, newnode);
- vq = rte_malloc_socket(NULL, sizeof(*vq), 0, newnode);
- if (!vq)
- return dev;
+ if (vq->ready) {
+ vq->ready = false;
+ vhost_user_notify_queue_state(dev, index, 0);
+ }
+
+ vq = rte_realloc_socket(vq, sizeof(*vq), 0, node);
+ if (!vq) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc virtqueue %d on node %d\n",
+ index, node);
+ return dev;
+ }
- memcpy(vq, old_vq, sizeof(*vq));
+ if (vq != dev->virtqueue[index]) {
+ VHOST_LOG_CONFIG(INFO, "reallocated virtqueue on node %d\n", node);
+ dev->virtqueue[index] = vq;
+ vhost_user_iotlb_init(dev, index);
+ }
- if (vq_is_packed(dev)) {
- new_shadow_used_packed = rte_malloc_socket(NULL,
- vq->size *
- sizeof(struct vring_used_elem_packed),
- RTE_CACHE_LINE_SIZE,
- newnode);
- if (new_shadow_used_packed) {
- rte_free(vq->shadow_used_packed);
- vq->shadow_used_packed = new_shadow_used_packed;
- }
- } else {
- new_shadow_used_split = rte_malloc_socket(NULL,
- vq->size *
- sizeof(struct vring_used_elem),
- RTE_CACHE_LINE_SIZE,
- newnode);
- if (new_shadow_used_split) {
- rte_free(vq->shadow_used_split);
- vq->shadow_used_split = new_shadow_used_split;
- }
- }
+ if (vq_is_packed(dev)) {
+ struct vring_used_elem_packed *sup;
- new_batch_copy_elems = rte_malloc_socket(NULL,
- vq->size * sizeof(struct batch_copy_elem),
- RTE_CACHE_LINE_SIZE,
- newnode);
- if (new_batch_copy_elems) {
- rte_free(vq->batch_copy_elems);
- vq->batch_copy_elems = new_batch_copy_elems;
+ sup = rte_realloc_socket(vq->shadow_used_packed, vq->size * sizeof(*sup),
+ RTE_CACHE_LINE_SIZE, node);
+ if (!sup) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc shadow packed on node %d\n", node);
+ return dev;
}
+ vq->shadow_used_packed = sup;
- if (vq->log_cache) {
- struct log_cache_entry *log_cache;
+ } else {
+ struct vring_used_elem *sus;
- log_cache = rte_realloc_socket(vq->log_cache,
- sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR,
- 0, newnode);
- if (log_cache)
- vq->log_cache = log_cache;
+ sus = rte_realloc_socket(vq->shadow_used_split, vq->size * sizeof(*sus),
+ RTE_CACHE_LINE_SIZE, node);
+ if (!sus) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc shadow split on node %d\n", node);
+ return dev;
}
-
- rte_free(old_vq);
+ vq->shadow_used_split = sus;
}
- if (dev->flags & VIRTIO_DEV_RUNNING)
- goto out;
-
- /* check if we need to reallocate dev */
- ret = get_mempolicy(&oldnode, NULL, 0, old_dev,
- MPOL_F_NODE | MPOL_F_ADDR);
- if (ret) {
- VHOST_LOG_CONFIG(ERR,
- "Unable to get dev numa information.\n");
- goto out;
+ bce = rte_realloc_socket(vq->batch_copy_elems, vq->size * sizeof(*bce),
+ RTE_CACHE_LINE_SIZE, node);
+ if (!bce) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc batch copy elem on node %d\n", node);
+ return dev;
}
- if (oldnode != newnode) {
- struct rte_vhost_memory *old_mem;
- struct guest_page *old_gp;
- ssize_t mem_size, gp_size;
-
- VHOST_LOG_CONFIG(INFO,
- "reallocate dev from %d to %d node\n",
- oldnode, newnode);
- dev = rte_malloc_socket(NULL, sizeof(*dev), 0, newnode);
- if (!dev) {
- dev = old_dev;
- goto out;
- }
+ vq->batch_copy_elems = bce;
- memcpy(dev, old_dev, sizeof(*dev));
- rte_free(old_dev);
+ if (vq->log_cache) {
+ struct log_cache_entry *lc;
- mem_size = sizeof(struct rte_vhost_memory) +
- sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
- old_mem = dev->mem;
- dev->mem = rte_malloc_socket(NULL, mem_size, 0, newnode);
- if (!dev->mem) {
- dev->mem = old_mem;
- goto out;
+ lc = rte_realloc_socket(vq->log_cache, sizeof(*lc) * VHOST_LOG_CACHE_NR, 0, node);
+ if (!lc) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc log cache on node %d\n", node);
+ return dev;
}
+ vq->log_cache = lc;
+ }
- memcpy(dev->mem, old_mem, mem_size);
- rte_free(old_mem);
+ if (dev->flags & VIRTIO_DEV_RUNNING)
+ return dev;
- gp_size = dev->max_guest_pages * sizeof(*dev->guest_pages);
- old_gp = dev->guest_pages;
- dev->guest_pages = rte_malloc_socket(NULL, gp_size, RTE_CACHE_LINE_SIZE, newnode);
- if (!dev->guest_pages) {
- dev->guest_pages = old_gp;
- goto out;
- }
+ dev = rte_realloc_socket(old_dev, sizeof(*dev), 0, node);
+ if (!dev) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc dev on node %d\n", node);
+ return old_dev;
+ }
- memcpy(dev->guest_pages, old_gp, gp_size);
- rte_free(old_gp);
+ if (dev != old_dev) {
+ VHOST_LOG_CONFIG(INFO, "reallocated device on node %d\n", node);
+ vhost_devices[dev->vid] = dev;
}
-out:
- dev->virtqueue[index] = vq;
- vhost_devices[dev->vid] = dev;
+ mem_size = sizeof(struct rte_vhost_memory) +
+ sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
+ mem = rte_realloc_socket(dev->mem, mem_size, 0, node);
+ if (!mem) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc mem table on node %d\n", node);
+ return dev;
+ }
+ dev->mem = mem;
- if (old_vq != vq)
- vhost_user_iotlb_init(dev, index);
+ gp = rte_realloc_socket(dev->guest_pages, dev->max_guest_pages * sizeof(*gp),
+ RTE_CACHE_LINE_SIZE, node);
+ if (!gp) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc guest pages on node %d\n", node);
+ return dev;
+ }
+ dev->guest_pages = gp;
return dev;
}
--
2.31.1
* [dpdk-dev] [PATCH 6/7] vhost: allocate all data on same node as virtqueue
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
` (4 preceding siblings ...)
2021-06-17 10:58 ` [dpdk-dev] [PATCH 5/7] vhost: improve NUMA reallocation Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
2021-06-17 10:58 ` [dpdk-dev] [PATCH 7/7] vhost: convert inflight data to DPDK allocation API Maxime Coquelin
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin
This patch saves the NUMA node the virtqueue is allocated
on at init time, in order to allocate all other data on the
same node.
While most of the data are allocated before numa_realloc()
is called, and so will be reallocated properly, some data
like the log cache are most likely allocated afterwards.
For the virtio device metadata, we decide to allocate them
on the same node as VQ 0.
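A sketch of the idea, condensed from the hunks below: the node
is queried once at init time, with SOCKET_ID_ANY as a fallback
preserving the previous behaviour, and every later allocation
for the virtqueue targets vq->numa_node:
/* At virtqueue init, record which NUMA node backs the vq struct. */
int numa_node = SOCKET_ID_ANY;
#ifdef RTE_LIBRTE_VHOST_NUMA
if (get_mempolicy(&numa_node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR))
	numa_node = SOCKET_ID_ANY;	/* query failed: any node */
#endif
vq->numa_node = numa_node;
/* Later allocations for this vq can then target the same node: */
vq->batch_copy_elems = rte_malloc_socket(NULL,
		vq->size * sizeof(struct batch_copy_elem),
		RTE_CACHE_LINE_SIZE, vq->numa_node);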
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 34 ++++++++++++++++------------------
lib/vhost/vhost.h | 1 +
lib/vhost/vhost_user.c | 41 ++++++++++++++++++++++++++++-------------
3 files changed, 45 insertions(+), 31 deletions(-)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index c96f6335c8..0000cd3297 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -261,7 +261,7 @@ vhost_alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
uint64_t src, dst;
uint64_t len, remain = desc_len;
- idesc = rte_malloc(__func__, desc_len, 0);
+ idesc = rte_malloc_socket(__func__, desc_len, 0, vq->numa_node);
if (unlikely(!idesc))
return NULL;
@@ -549,6 +549,7 @@ static void
init_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
{
struct vhost_virtqueue *vq;
+ int numa_node = SOCKET_ID_ANY;
if (vring_idx >= VHOST_MAX_VRING) {
VHOST_LOG_CONFIG(ERR,
@@ -570,6 +571,15 @@ init_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
vq->callfd = VIRTIO_UNINITIALIZED_EVENTFD;
vq->notif_enable = VIRTIO_UNINITIALIZED_NOTIF;
+#ifdef RTE_LIBRTE_VHOST_NUMA
+ if (get_mempolicy(&numa_node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR)) {
+ VHOST_LOG_CONFIG(ERR, "(%d) failed to query numa node: %s\n",
+ dev->vid, rte_strerror(errno));
+ numa_node = SOCKET_ID_ANY;
+ }
+#endif
+ vq->numa_node = numa_node;
+
vhost_user_iotlb_init(dev, vring_idx);
}
@@ -1616,7 +1626,6 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
struct vhost_virtqueue *vq;
struct virtio_net *dev = get_device(vid);
struct rte_vhost_async_features f;
- int node;
if (dev == NULL || ops == NULL)
return -1;
@@ -1651,20 +1660,9 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
goto reg_out;
}
-#ifdef RTE_LIBRTE_VHOST_NUMA
- if (get_mempolicy(&node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR)) {
- VHOST_LOG_CONFIG(ERR,
- "unable to get numa information in async register. "
- "allocating async buffer memory on the caller thread node\n");
- node = SOCKET_ID_ANY;
- }
-#else
- node = SOCKET_ID_ANY;
-#endif
-
vq->async_pkts_info = rte_malloc_socket(NULL,
vq->size * sizeof(struct async_inflight_info),
- RTE_CACHE_LINE_SIZE, node);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->async_pkts_info) {
vhost_free_async_mem(vq);
VHOST_LOG_CONFIG(ERR,
@@ -1675,7 +1673,7 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
vq->it_pool = rte_malloc_socket(NULL,
VHOST_MAX_ASYNC_IT * sizeof(struct rte_vhost_iov_iter),
- RTE_CACHE_LINE_SIZE, node);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->it_pool) {
vhost_free_async_mem(vq);
VHOST_LOG_CONFIG(ERR,
@@ -1686,7 +1684,7 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
vq->vec_pool = rte_malloc_socket(NULL,
VHOST_MAX_ASYNC_VEC * sizeof(struct iovec),
- RTE_CACHE_LINE_SIZE, node);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->vec_pool) {
vhost_free_async_mem(vq);
VHOST_LOG_CONFIG(ERR,
@@ -1698,7 +1696,7 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
if (vq_is_packed(dev)) {
vq->async_buffers_packed = rte_malloc_socket(NULL,
vq->size * sizeof(struct vring_used_elem_packed),
- RTE_CACHE_LINE_SIZE, node);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->async_buffers_packed) {
vhost_free_async_mem(vq);
VHOST_LOG_CONFIG(ERR,
@@ -1709,7 +1707,7 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
} else {
vq->async_descs_split = rte_malloc_socket(NULL,
vq->size * sizeof(struct vring_used_elem),
- RTE_CACHE_LINE_SIZE, node);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->async_descs_split) {
vhost_free_async_mem(vq);
VHOST_LOG_CONFIG(ERR,
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 8078ddff79..8ffe387556 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -164,6 +164,7 @@ struct vhost_virtqueue {
uint16_t batch_copy_nb_elems;
struct batch_copy_elem *batch_copy_elems;
+ int numa_node;
bool used_wrap_counter;
bool avail_wrap_counter;
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 50830f0870..81615cc145 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -433,10 +433,10 @@ vhost_user_set_vring_num(struct virtio_net **pdev,
if (vq_is_packed(dev)) {
if (vq->shadow_used_packed)
rte_free(vq->shadow_used_packed);
- vq->shadow_used_packed = rte_malloc(NULL,
+ vq->shadow_used_packed = rte_malloc_socket(NULL,
vq->size *
sizeof(struct vring_used_elem_packed),
- RTE_CACHE_LINE_SIZE);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->shadow_used_packed) {
VHOST_LOG_CONFIG(ERR,
"failed to allocate memory for shadow used ring.\n");
@@ -447,9 +447,9 @@ vhost_user_set_vring_num(struct virtio_net **pdev,
if (vq->shadow_used_split)
rte_free(vq->shadow_used_split);
- vq->shadow_used_split = rte_malloc(NULL,
+ vq->shadow_used_split = rte_malloc_socket(NULL,
vq->size * sizeof(struct vring_used_elem),
- RTE_CACHE_LINE_SIZE);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->shadow_used_split) {
VHOST_LOG_CONFIG(ERR,
@@ -460,9 +460,9 @@ vhost_user_set_vring_num(struct virtio_net **pdev,
if (vq->batch_copy_elems)
rte_free(vq->batch_copy_elems);
- vq->batch_copy_elems = rte_malloc(NULL,
+ vq->batch_copy_elems = rte_malloc_socket(NULL,
vq->size * sizeof(struct batch_copy_elem),
- RTE_CACHE_LINE_SIZE);
+ RTE_CACHE_LINE_SIZE, vq->numa_node);
if (!vq->batch_copy_elems) {
VHOST_LOG_CONFIG(ERR,
"failed to allocate memory for batching copy.\n");
@@ -498,6 +498,9 @@ numa_realloc(struct virtio_net *dev, int index)
return dev;
}
+ if (node == vq->numa_node)
+ goto out_dev_realloc;
+
if (vq->ready) {
vq->ready = false;
vhost_user_notify_queue_state(dev, index, 0);
@@ -558,6 +561,10 @@ numa_realloc(struct virtio_net *dev, int index)
vq->log_cache = lc;
}
+ vq->numa_node = node;
+
+out_dev_realloc:
+
if (dev->flags & VIRTIO_DEV_RUNNING)
return dev;
@@ -1205,7 +1212,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
struct virtio_net *dev = *pdev;
struct VhostUserMemory *memory = &msg->payload.memory;
struct rte_vhost_mem_region *reg;
-
+ int numa_node = SOCKET_ID_ANY;
uint64_t mmap_offset;
uint32_t i;
@@ -1245,13 +1252,21 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
for (i = 0; i < dev->nr_vring; i++)
vhost_user_iotlb_flush_all(dev->virtqueue[i]);
+ /*
+ * If VQ 0 has already been allocated, try to allocate on the same
+ * NUMA node. It can be reallocated later in numa_realloc().
+ */
+ if (dev->nr_vring > 0)
+ numa_node = dev->virtqueue[0]->numa_node;
+
dev->nr_guest_pages = 0;
if (dev->guest_pages == NULL) {
dev->max_guest_pages = 8;
- dev->guest_pages = rte_zmalloc(NULL,
+ dev->guest_pages = rte_zmalloc_socket(NULL,
dev->max_guest_pages *
sizeof(struct guest_page),
- RTE_CACHE_LINE_SIZE);
+ RTE_CACHE_LINE_SIZE,
+ numa_node);
if (dev->guest_pages == NULL) {
VHOST_LOG_CONFIG(ERR,
"(%d) failed to allocate memory "
@@ -1261,8 +1276,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
}
}
- dev->mem = rte_zmalloc("vhost-mem-table", sizeof(struct rte_vhost_memory) +
- sizeof(struct rte_vhost_mem_region) * memory->nregions, 0);
+ dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
+ sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node);
if (dev->mem == NULL) {
VHOST_LOG_CONFIG(ERR,
"(%d) failed to allocate memory for dev->mem\n",
@@ -2185,9 +2200,9 @@ vhost_user_set_log_base(struct virtio_net **pdev, struct VhostUserMsg *msg,
rte_free(vq->log_cache);
vq->log_cache = NULL;
vq->log_cache_nb_elem = 0;
- vq->log_cache = rte_zmalloc("vq log cache",
+ vq->log_cache = rte_malloc_socket("vq log cache",
sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR,
- 0);
+ 0, vq->numa_node);
/*
* If log cache alloc fail, don't fail migration, but no
* caching will be done, which will impact performance
--
2.31.1
* [dpdk-dev] [PATCH 7/7] vhost: convert inflight data to DPDK allocation API
2021-06-17 10:58 [dpdk-dev] [PATCH 0/7] vhost: Fix and improve NUMA reallocation Maxime Coquelin
` (5 preceding siblings ...)
2021-06-17 10:58 ` [dpdk-dev] [PATCH 6/7] vhost: allocate all data on same node as virtqueue Maxime Coquelin
@ 2021-06-17 10:58 ` Maxime Coquelin
6 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2021-06-17 10:58 UTC
To: dev, david.marchand, chenbo.xia; +Cc: Maxime Coquelin
Inflight metadata are allocated using glibc's calloc().
This patch converts these allocations to rte_zmalloc_socket()
to take NUMA affinity into account.
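The conversion pattern, condensed from the hunks below;
rte_zmalloc_socket() returns zeroed memory, so calloc()'s
zeroing is preserved, but every matching free() must become
rte_free():
/* Before: libc allocation, node chosen by first-touch policy. */
resubmit = calloc(1, sizeof(struct rte_vhost_resubmit_info));
/* After: zeroed DPDK allocation pinned to the virtqueue's node. */
resubmit = rte_zmalloc_socket("resubmit",
		sizeof(struct rte_vhost_resubmit_info),
		0, vq->numa_node);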
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 4 +--
lib/vhost/vhost_user.c | 67 +++++++++++++++++++++++++++++++++++-------
2 files changed, 58 insertions(+), 13 deletions(-)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 0000cd3297..53a470f547 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -312,10 +312,10 @@ cleanup_vq_inflight(struct virtio_net *dev, struct vhost_virtqueue *vq)
if (vq->resubmit_inflight) {
if (vq->resubmit_inflight->resubmit_list) {
- free(vq->resubmit_inflight->resubmit_list);
+ rte_free(vq->resubmit_inflight->resubmit_list);
vq->resubmit_inflight->resubmit_list = NULL;
}
- free(vq->resubmit_inflight);
+ rte_free(vq->resubmit_inflight);
vq->resubmit_inflight = NULL;
}
}
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 81615cc145..d972073690 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -188,7 +188,7 @@ vhost_backend_cleanup(struct virtio_net *dev)
dev->inflight_info->fd = -1;
}
- free(dev->inflight_info);
+ rte_free(dev->inflight_info);
dev->inflight_info = NULL;
}
@@ -561,6 +561,31 @@ numa_realloc(struct virtio_net *dev, int index)
vq->log_cache = lc;
}
+ if (vq->resubmit_inflight) {
+ struct rte_vhost_resubmit_info *ri;
+
+ ri = rte_realloc_socket(vq->resubmit_inflight, sizeof(*ri), 0, node);
+ if (!ri) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc resubmit inflight on node %d\n",
+ node);
+ return dev;
+ }
+ vq->resubmit_inflight = ri;
+
+ if (ri->resubmit_list) {
+ struct rte_vhost_resubmit_desc *rd;
+
+ rd = rte_realloc_socket(ri->resubmit_list, sizeof(*rd) * ri->resubmit_num,
+ 0, node);
+ if (!rd) {
+ VHOST_LOG_CONFIG(ERR, "Failed to realloc resubmit list on node %d\n",
+ node);
+ return dev;
+ }
+ ri->resubmit_list = rd;
+ }
+ }
+
vq->numa_node = node;
out_dev_realloc:
@@ -1483,6 +1508,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev,
uint16_t num_queues, queue_size;
struct virtio_net *dev = *pdev;
int fd, i, j;
+ int numa_node = SOCKET_ID_ANY;
void *addr;
if (msg->size != sizeof(msg->payload.inflight)) {
@@ -1492,9 +1518,16 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev,
return RTE_VHOST_MSG_RESULT_ERR;
}
+ /*
+ * If VQ 0 has already been allocated, try to allocate on the same
+ * NUMA node. It can be reallocated later in numa_realloc().
+ */
+ if (dev->nr_vring > 0)
+ numa_node = dev->virtqueue[0]->numa_node;
+
if (dev->inflight_info == NULL) {
- dev->inflight_info = calloc(1,
- sizeof(struct inflight_mem_info));
+ dev->inflight_info = rte_zmalloc_socket("inflight_info",
+ sizeof(struct inflight_mem_info), 0, numa_node);
if (!dev->inflight_info) {
VHOST_LOG_CONFIG(ERR,
"failed to alloc dev inflight area\n");
@@ -1577,6 +1610,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, VhostUserMsg *msg,
struct vhost_virtqueue *vq;
void *addr;
int fd, i;
+ int numa_node = SOCKET_ID_ANY;
fd = msg->fds[0];
if (msg->size != sizeof(msg->payload.inflight) || fd < 0) {
@@ -1610,9 +1644,16 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, VhostUserMsg *msg,
"set_inflight_fd pervq_inflight_size: %d\n",
pervq_inflight_size);
+ /*
+ * If VQ 0 has already been allocated, try to allocate on the same
+ * NUMA node. It can be reallocated later in numa_realloc().
+ */
+ if (dev->nr_vring > 0)
+ numa_node = dev->virtqueue[0]->numa_node;
+
if (!dev->inflight_info) {
- dev->inflight_info = calloc(1,
- sizeof(struct inflight_mem_info));
+ dev->inflight_info = rte_zmalloc_socket("inflight_info",
+ sizeof(struct inflight_mem_info), 0, numa_node);
if (dev->inflight_info == NULL) {
VHOST_LOG_CONFIG(ERR,
"failed to alloc dev inflight area\n");
@@ -1771,15 +1812,17 @@ vhost_check_queue_inflights_split(struct virtio_net *dev,
vq->last_avail_idx += resubmit_num;
if (resubmit_num) {
- resubmit = calloc(1, sizeof(struct rte_vhost_resubmit_info));
+ resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info),
+ 0, vq->numa_node);
if (!resubmit) {
VHOST_LOG_CONFIG(ERR,
"failed to allocate memory for resubmit info.\n");
return RTE_VHOST_MSG_RESULT_ERR;
}
- resubmit->resubmit_list = calloc(resubmit_num,
- sizeof(struct rte_vhost_resubmit_desc));
+ resubmit->resubmit_list = rte_zmalloc_socket("resubmit_list",
+ resubmit_num * sizeof(struct rte_vhost_resubmit_desc),
+ 0, vq->numa_node);
if (!resubmit->resubmit_list) {
VHOST_LOG_CONFIG(ERR,
"failed to allocate memory for inflight desc.\n");
@@ -1865,15 +1908,17 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev,
}
if (resubmit_num) {
- resubmit = calloc(1, sizeof(struct rte_vhost_resubmit_info));
+ resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info),
+ 0, vq->numa_node);
if (resubmit == NULL) {
VHOST_LOG_CONFIG(ERR,
"failed to allocate memory for resubmit info.\n");
return RTE_VHOST_MSG_RESULT_ERR;
}
- resubmit->resubmit_list = calloc(resubmit_num,
- sizeof(struct rte_vhost_resubmit_desc));
+ resubmit->resubmit_list = rte_zmalloc_socket("resubmit_list",
+ resubmit_num * sizeof(struct rte_vhost_resubmit_desc),
+ 0, vq->numa_node);
if (resubmit->resubmit_list == NULL) {
VHOST_LOG_CONFIG(ERR,
"failed to allocate memory for resubmit desc.\n");
--
2.31.1