* [PATCH 0/3] vhost_user: configure memory slots
@ 2025-08-12 2:33 Pravin M Bathija
2025-08-12 2:33 ` [PATCH 1/3] mailmap: add user Pravin M Bathija
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Pravin M Bathija @ 2025-08-12 2:33 UTC (permalink / raw)
To: dev; +Cc: pravin.bathija, pravin.m.bathija.dev
- add protocol feature support for dynamic add and removal of
memory regions/slots.
- this patch set has been tested with multiple instances of fio
connected via libblkio to an spdk target backed by a unix domain socket.
- please apply this patch-set to dpdk mainline as well as
version 25.03.0.
v2:
- fixed warnings with e-mail address
- fixed warnings with typos
- there is a warning about __rte_packed_begin not being paired
with __rte_packed_end, which on reading the code turns out
to be a false positive. It is an anomaly of how git-format-patch
generates the patch and how checkpatch checks it.
Pravin M Bathija (3):
mailmap: add user
vhost_user: configure memory slots
vhost_user: support for memory regions
.mailmap | 1 +
lib/vhost/rte_vhost.h | 4 +
lib/vhost/vhost_user.c | 325 +++++++++++++++++++++++++++++++++++------
lib/vhost/vhost_user.h | 10 ++
4 files changed, 296 insertions(+), 44 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1/3] mailmap: add user
2025-08-12 2:33 [PATCH 0/3] vhost_user: configure memory slots Pravin M Bathija
@ 2025-08-12 2:33 ` Pravin M Bathija
2025-08-19 11:30 ` Thomas Monjalon
2025-08-12 2:33 ` [PATCH 2/3] vhost_user: configure memory slots Pravin M Bathija
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Pravin M Bathija @ 2025-08-12 2:33 UTC (permalink / raw)
To: dev; +Cc: pravin.bathija, pravin.m.bathija.dev
add new user to mailmap
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
.mailmap | 1 +
1 file changed, 1 insertion(+)
diff --git a/.mailmap b/.mailmap
index 34a99f93a1..6b24ee3a30 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1262,6 +1262,7 @@ Prathisna Padmasanan <prathisna.padmasanan@intel.com>
Praveen Kaligineedi <pkaligineedi@google.com>
Praveen Shetty <praveen.shetty@intel.com>
Pravin Pathak <pravin.pathak.dev@gmail.com> <pravin.pathak@intel.com>
+Pravin M Bathija <pravin.bathija@dell.com> <pravin.m.bathija.dev@gmail.com>
Prince Takkar <ptakkar@marvell.com>
Priyalee Kushwaha <priyalee.kushwaha@intel.com>
Priyanka Jain <priyanka.jain@nxp.com>
--
2.43.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 2/3] vhost_user: configure memory slots
2025-08-12 2:33 [PATCH 0/3] vhost_user: configure memory slots Pravin M Bathija
2025-08-12 2:33 ` [PATCH 1/3] mailmap: add user Pravin M Bathija
@ 2025-08-12 2:33 ` Pravin M Bathija
2025-08-12 2:33 ` [PATCH 3/3] vhost_user: support for memory regions Pravin M Bathija
2025-08-19 11:36 ` [PATCH 0/3] vhost_user: configure memory slots Thomas Monjalon
3 siblings, 0 replies; 11+ messages in thread
From: Pravin M Bathija @ 2025-08-12 2:33 UTC (permalink / raw)
To: dev; +Cc: pravin.bathija, pravin.m.bathija.dev
header definition to configure memory slots/regions
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
lib/vhost/rte_vhost.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index 2f7c4c0080..a7f9700538 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -109,6 +109,10 @@ extern "C" {
#define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD 12
#endif
+#ifndef VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
+#define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15
+#endif
+
#ifndef VHOST_USER_PROTOCOL_F_STATUS
#define VHOST_USER_PROTOCOL_F_STATUS 16
#endif
--
2.43.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 3/3] vhost_user: support for memory regions
2025-08-12 2:33 [PATCH 0/3] vhost_user: configure memory slots Pravin M Bathija
2025-08-12 2:33 ` [PATCH 1/3] mailmap: add user Pravin M Bathija
2025-08-12 2:33 ` [PATCH 2/3] vhost_user: configure memory slots Pravin M Bathija
@ 2025-08-12 2:33 ` Pravin M Bathija
2025-08-19 11:36 ` [PATCH 0/3] vhost_user: configure memory slots Thomas Monjalon
3 siblings, 0 replies; 11+ messages in thread
From: Pravin M Bathija @ 2025-08-12 2:33 UTC (permalink / raw)
To: dev; +Cc: pravin.bathija, pravin.m.bathija.dev
- modify data structures and add functions to support
adding and removing memory regions/slots
- define VHOST_MEMORY_MAX_NREGIONS & modify function
vhost_user_set_mem_table accordingly
- dynamically add new memory slots via vhost_user_add_mem_reg
- remove unused memory slots via vhost_user_rem_mem_reg
- define data structure VhostUserSingleMemReg for single
memory region
- modify data structures VhostUserRequest & VhostUserMsg
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
lib/vhost/vhost_user.c | 325 +++++++++++++++++++++++++++++++++++------
lib/vhost/vhost_user.h | 10 ++
2 files changed, 291 insertions(+), 44 deletions(-)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index b73dec6a22..6367f54b97 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -74,6 +74,9 @@ VHOST_MESSAGE_HANDLER(VHOST_USER_SET_FEATURES, vhost_user_set_features, false, t
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_OWNER, vhost_user_set_owner, false, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_RESET_OWNER, vhost_user_reset_owner, false, false) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_MEM_TABLE, vhost_user_set_mem_table, true, true) \
+VHOST_MESSAGE_HANDLER(VHOST_USER_GET_MAX_MEM_SLOTS, vhost_user_get_max_mem_slots, false, false) \
+VHOST_MESSAGE_HANDLER(VHOST_USER_ADD_MEM_REG, vhost_user_add_mem_reg, true, true) \
+VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG, vhost_user_rem_mem_reg, true, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_BASE, vhost_user_set_log_base, true, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_FD, vhost_user_set_log_fd, true, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_VRING_NUM, vhost_user_set_vring_num, false, true) \
@@ -228,7 +231,17 @@ async_dma_map(struct virtio_net *dev, bool do_map)
}
static void
-free_mem_region(struct virtio_net *dev)
+free_mem_region(struct rte_vhost_mem_region *reg)
+{
+ if (reg != NULL && reg->host_user_addr) {
+ munmap(reg->mmap_addr, reg->mmap_size);
+ close(reg->fd);
+ memset(reg, 0, sizeof(struct rte_vhost_mem_region));
+ }
+}
+
+static void
+free_all_mem_regions(struct virtio_net *dev)
{
uint32_t i;
struct rte_vhost_mem_region *reg;
@@ -239,12 +252,10 @@ free_mem_region(struct virtio_net *dev)
if (dev->async_copy && rte_vfio_is_enabled("vfio"))
async_dma_map(dev, false);
- for (i = 0; i < dev->mem->nregions; i++) {
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
reg = &dev->mem->regions[i];
- if (reg->host_user_addr) {
- munmap(reg->mmap_addr, reg->mmap_size);
- close(reg->fd);
- }
+ if (reg->mmap_addr)
+ free_mem_region(reg);
}
}
@@ -258,7 +269,7 @@ vhost_backend_cleanup(struct virtio_net *dev)
vdpa_dev->ops->dev_cleanup(dev->vid);
if (dev->mem) {
- free_mem_region(dev);
+ free_all_mem_regions(dev);
rte_free(dev->mem);
dev->mem = NULL;
}
@@ -707,7 +718,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
vhost_devices[dev->vid] = dev;
mem_size = sizeof(struct rte_vhost_memory) +
- sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
+ sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS;
mem = rte_realloc_socket(dev->mem, mem_size, 0, node);
if (!mem) {
VHOST_CONFIG_LOG(dev->ifname, ERR,
@@ -811,8 +822,10 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
uint32_t i;
uintptr_t hua = (uintptr_t)ptr;
- for (i = 0; i < mem->nregions; i++) {
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
r = &mem->regions[i];
+ if (r->host_user_addr == 0)
+ continue;
if (hua >= r->host_user_addr &&
hua < r->host_user_addr + r->size) {
return get_blk_size(r->fd);
@@ -1250,9 +1263,13 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
* retrieve the region offset when handling userfaults.
*/
memory = &ctx->msg.payload.memory;
- for (i = 0; i < memory->nregions; i++) {
+ int reg_msg_index = 0;
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
reg = &dev->mem->regions[i];
- memory->regions[i].userspace_addr = reg->host_user_addr;
+ if (reg->host_user_addr == 0)
+ continue;
+ memory->regions[reg_msg_index].userspace_addr = reg->host_user_addr;
+ reg_msg_index++;
}
/* Send the addresses back to qemu */
@@ -1279,8 +1296,10 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
}
/* Now userfault register and we can use the memory */
- for (i = 0; i < memory->nregions; i++) {
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
reg = &dev->mem->regions[i];
+ if (reg->host_user_addr == 0)
+ continue;
if (vhost_user_postcopy_region_register(dev, reg) < 0)
return -1;
}
@@ -1385,6 +1404,46 @@ vhost_user_mmap_region(struct virtio_net *dev,
return 0;
}
+static int
+vhost_user_initialize_memory(struct virtio_net **pdev)
+{
+ struct virtio_net *dev = *pdev;
+ int numa_node = SOCKET_ID_ANY;
+
+ /*
+ * If VQ 0 has already been allocated, try to allocate on the same
+ * NUMA node. It can be reallocated later in numa_realloc().
+ */
+ if (dev->nr_vring > 0)
+ numa_node = dev->virtqueue[0]->numa_node;
+
+ dev->nr_guest_pages = 0;
+ if (dev->guest_pages == NULL) {
+ dev->max_guest_pages = 8;
+ dev->guest_pages = rte_zmalloc_socket(NULL,
+ dev->max_guest_pages *
+ sizeof(struct guest_page),
+ RTE_CACHE_LINE_SIZE,
+ numa_node);
+ if (dev->guest_pages == NULL) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "failed to allocate memory for dev->guest_pages");
+ return -1;
+ }
+ }
+
+ dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
+ sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS, 0, numa_node);
+ if (dev->mem == NULL) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
+ rte_free(dev->guest_pages);
+ dev->guest_pages = NULL;
+ return -1;
+ }
+
+ return 0;
+}
+
static int
vhost_user_set_mem_table(struct virtio_net **pdev,
struct vhu_msg_context *ctx,
@@ -1393,7 +1452,6 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
struct virtio_net *dev = *pdev;
struct VhostUserMemory *memory = &ctx->msg.payload.memory;
struct rte_vhost_mem_region *reg;
- int numa_node = SOCKET_ID_ANY;
uint64_t mmap_offset;
uint32_t i;
bool async_notify = false;
@@ -1438,39 +1496,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_flush_all(dev);
- free_mem_region(dev);
+ free_all_mem_regions(dev);
rte_free(dev->mem);
dev->mem = NULL;
}
- /*
- * If VQ 0 has already been allocated, try to allocate on the same
- * NUMA node. It can be reallocated later in numa_realloc().
- */
- if (dev->nr_vring > 0)
- numa_node = dev->virtqueue[0]->numa_node;
-
- dev->nr_guest_pages = 0;
- if (dev->guest_pages == NULL) {
- dev->max_guest_pages = 8;
- dev->guest_pages = rte_zmalloc_socket(NULL,
- dev->max_guest_pages *
- sizeof(struct guest_page),
- RTE_CACHE_LINE_SIZE,
- numa_node);
- if (dev->guest_pages == NULL) {
- VHOST_CONFIG_LOG(dev->ifname, ERR,
- "failed to allocate memory for dev->guest_pages");
- goto close_msg_fds;
- }
- }
-
- dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
- sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node);
- if (dev->mem == NULL) {
- VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
- goto free_guest_pages;
- }
+ if (vhost_user_initialize_memory(pdev) < 0)
+ goto close_msg_fds;
for (i = 0; i < memory->nregions; i++) {
reg = &dev->mem->regions[i];
@@ -1534,11 +1566,182 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
return RTE_VHOST_MSG_RESULT_OK;
free_mem_table:
- free_mem_region(dev);
+ free_all_mem_regions(dev);
rte_free(dev->mem);
dev->mem = NULL;
+ rte_free(dev->guest_pages);
+ dev->guest_pages = NULL;
+close_msg_fds:
+ close_msg_fds(ctx);
+ return RTE_VHOST_MSG_RESULT_ERR;
+}
+
+
+static int
+vhost_user_get_max_mem_slots(struct virtio_net **pdev __rte_unused,
+ struct vhu_msg_context *ctx,
+ int main_fd __rte_unused)
+{
+ uint32_t max_mem_slots = VHOST_MEMORY_MAX_NREGIONS;
+
+ ctx->msg.payload.u64 = (uint64_t)max_mem_slots;
+ ctx->msg.size = sizeof(ctx->msg.payload.u64);
+ ctx->fd_num = 0;
-free_guest_pages:
+ return RTE_VHOST_MSG_RESULT_REPLY;
+}
+
+static int
+vhost_user_add_mem_reg(struct virtio_net **pdev,
+ struct vhu_msg_context *ctx,
+ int main_fd)
+{
+ struct virtio_net *dev = *pdev;
+ struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;
+ uint32_t i;
+
+ /* make sure new region will fit */
+ if (dev->mem != NULL && dev->mem->nregions >= VHOST_MEMORY_MAX_NREGIONS) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "too many memory regions already (%u)",
+ dev->mem->nregions);
+ goto close_msg_fds;
+ }
+
+ /* make sure supplied memory fd present */
+ if (ctx->fd_num != 1) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "fd count makes no sense (%u)",
+ ctx->fd_num);
+ goto close_msg_fds;
+ }
+
+ /* Make sure no overlap in guest virtual address space */
+ if (dev->mem != NULL && dev->mem->nregions > 0) {
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
+ struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
+
+ if (current_region->mmap_size == 0)
+ continue;
+
+ uint64_t current_region_guest_start = current_region->guest_user_addr;
+ uint64_t current_region_guest_end = current_region_guest_start
+ + current_region->mmap_size - 1;
+ uint64_t proposed_region_guest_start = region->userspace_addr;
+ uint64_t proposed_region_guest_end = proposed_region_guest_start
+ + region->memory_size - 1;
+ bool overlap = false;
+
+ bool current_region_guest_start_overlap =
+ current_region_guest_start >= proposed_region_guest_start
+ && current_region_guest_start <= proposed_region_guest_end;
+ bool current_region_guest_end_overlap =
+ current_region_guest_end >= proposed_region_guest_start
+ && current_region_guest_end <= proposed_region_guest_end;
+ bool proposed_region_guest_start_overlap =
+ proposed_region_guest_start >= current_region_guest_start
+ && proposed_region_guest_start <= current_region_guest_end;
+ bool proposed_region_guest_end_overlap =
+ proposed_region_guest_end >= current_region_guest_start
+ && proposed_region_guest_end <= current_region_guest_end;
+
+ overlap = current_region_guest_start_overlap
+ || current_region_guest_end_overlap
+ || proposed_region_guest_start_overlap
+ || proposed_region_guest_end_overlap;
+
+ if (overlap) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "requested memory region overlaps with another region");
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tRequested region address:0x%" PRIx64,
+ region->userspace_addr);
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tRequested region size:0x%" PRIx64,
+ region->memory_size);
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tOverlapping region address:0x%" PRIx64,
+ current_region->guest_user_addr);
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tOverlapping region size:0x%" PRIx64,
+ current_region->mmap_size);
+ goto close_msg_fds;
+ }
+
+ }
+ }
+
+ /* convert first region add to normal memory table set */
+ if (dev->mem == NULL) {
+ if (vhost_user_initialize_memory(pdev) < 0)
+ goto close_msg_fds;
+ }
+
+ /* find a new region and set it like memory table set does */
+ struct rte_vhost_mem_region *reg = NULL;
+ uint64_t mmap_offset;
+
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
+ if (dev->mem->regions[i].guest_user_addr == 0) {
+ reg = &dev->mem->regions[i];
+ break;
+ }
+ }
+ if (reg == NULL) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "no free memory region");
+ goto close_msg_fds;
+ }
+
+ reg->guest_phys_addr = region->guest_phys_addr;
+ reg->guest_user_addr = region->userspace_addr;
+ reg->size = region->memory_size;
+ reg->fd = ctx->fds[0];
+
+ mmap_offset = region->mmap_offset;
+
+ if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region");
+ goto close_msg_fds;
+ }
+
+ dev->mem->nregions++;
+
+ if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+ async_dma_map(dev, true);
+
+ if (vhost_user_postcopy_register(dev, main_fd, ctx) < 0)
+ goto free_mem_table;
+
+ for (i = 0; i < dev->nr_vring; i++) {
+ struct vhost_virtqueue *vq = dev->virtqueue[i];
+
+ if (!vq)
+ continue;
+
+ if (vq->desc || vq->avail || vq->used) {
+ /* vhost_user_lock_all_queue_pairs locked all qps */
+ VHOST_USER_ASSERT_LOCK(dev, vq, VHOST_USER_SET_MEM_TABLE);
+
+ /*
+ * If the memory table got updated, the ring addresses
+ * need to be translated again as virtual addresses have
+ * changed.
+ */
+ vring_invalidate(dev, vq);
+
+ translate_ring_addresses(&dev, &vq);
+ *pdev = dev;
+ }
+ }
+
+ dump_guest_pages(dev);
+
+ return RTE_VHOST_MSG_RESULT_OK;
+
+free_mem_table:
+ free_all_mem_regions(dev);
+ rte_free(dev->mem);
+ dev->mem = NULL;
rte_free(dev->guest_pages);
dev->guest_pages = NULL;
close_msg_fds:
@@ -1546,6 +1749,40 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
return RTE_VHOST_MSG_RESULT_ERR;
}
+static int
+vhost_user_rem_mem_reg(struct virtio_net **pdev,
+ struct vhu_msg_context *ctx,
+ int main_fd __rte_unused)
+{
+ struct virtio_net *dev = *pdev;
+ struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;
+
+ if (dev->mem != NULL && dev->mem->nregions > 0) {
+ for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
+ struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
+
+ if (current_region->guest_user_addr == 0)
+ continue;
+
+ /*
+ * According to the vhost-user specification:
+ * The memory region to be removed is identified by its guest address,
+ * user address and size. The mmap offset is ignored.
+ */
+ if (region->userspace_addr == current_region->guest_user_addr
+ && region->guest_phys_addr == current_region->guest_phys_addr
+ && region->memory_size == current_region->size) {
+ free_mem_region(current_region);
+ dev->mem->nregions--;
+ return RTE_VHOST_MSG_RESULT_OK;
+ }
+ }
+ }
+
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to find region");
+ return RTE_VHOST_MSG_RESULT_ERR;
+}
+
static bool
vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
{
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index ef486545ba..5a0e747b58 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -32,6 +32,7 @@
(1ULL << VHOST_USER_PROTOCOL_F_BACKEND_SEND_FD) | \
(1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT) | \
+ (1ULL << VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS) | \
(1ULL << VHOST_USER_PROTOCOL_F_STATUS))
typedef enum VhostUserRequest {
@@ -67,6 +68,9 @@ typedef enum VhostUserRequest {
VHOST_USER_POSTCOPY_END = 30,
VHOST_USER_GET_INFLIGHT_FD = 31,
VHOST_USER_SET_INFLIGHT_FD = 32,
+ VHOST_USER_GET_MAX_MEM_SLOTS = 36,
+ VHOST_USER_ADD_MEM_REG = 37,
+ VHOST_USER_REM_MEM_REG = 38,
VHOST_USER_SET_STATUS = 39,
VHOST_USER_GET_STATUS = 40,
} VhostUserRequest;
@@ -91,6 +95,11 @@ typedef struct VhostUserMemory {
VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
} VhostUserMemory;
+typedef struct VhostUserSingleMemReg {
+ uint64_t padding;
+ VhostUserMemoryRegion region;
+} VhostUserSingleMemReg;
+
typedef struct VhostUserLog {
uint64_t mmap_size;
uint64_t mmap_offset;
@@ -186,6 +195,7 @@ typedef struct __rte_packed_begin VhostUserMsg {
struct vhost_vring_state state;
struct vhost_vring_addr addr;
VhostUserMemory memory;
+ VhostUserSingleMemReg memory_single;
VhostUserLog log;
struct vhost_iotlb_msg iotlb;
VhostUserCryptoSessionParam crypto_session;
--
2.43.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 1/3] mailmap: add user
2025-08-12 2:33 ` [PATCH 1/3] mailmap: add user Pravin M Bathija
@ 2025-08-19 11:30 ` Thomas Monjalon
0 siblings, 0 replies; 11+ messages in thread
From: Thomas Monjalon @ 2025-08-19 11:30 UTC (permalink / raw)
To: Pravin M Bathija; +Cc: dev, pravin.m.bathija.dev, Maxime Coquelin
12/08/2025 04:33, Pravin M Bathija:
> add new user to mailmap
>
> Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
This patch can be squashed with the next one.
> --- a/.mailmap
> +++ b/.mailmap
> +Pravin M Bathija <pravin.bathija@dell.com> <pravin.m.bathija.dev@gmail.com>
If you intend to use the first address for your contributions,
no need to fill the gmail one here.
We try to keep only a single address if possible.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 0/3] vhost_user: configure memory slots
2025-08-12 2:33 [PATCH 0/3] vhost_user: configure memory slots Pravin M Bathija
` (2 preceding siblings ...)
2025-08-12 2:33 ` [PATCH 3/3] vhost_user: support for memory regions Pravin M Bathija
@ 2025-08-19 11:36 ` Thomas Monjalon
2025-08-22 2:36 ` Bathija, Pravin
2025-08-22 2:48 ` Bathija, Pravin
3 siblings, 2 replies; 11+ messages in thread
From: Thomas Monjalon @ 2025-08-19 11:36 UTC (permalink / raw)
To: Pravin M Bathija
Cc: dev, pravin.m.bathija.dev, Maxime Coquelin, Chenbo Xia,
David Marchand, Bruce Richardson
Comments inline
12/08/2025 04:33, Pravin M Bathija:
> - add protocol feature support add and removal of memory
> regions/slots dynamically.
> - this patch set has been tested with multiple instances of fio
What is "fio"?
> connected via libblkio to spdk target backed unix domain socket.
> - please apply this this patch-set to dpdk mainline as well as
> version 25.03.0.
As mentioned previously, we don't plan to maintain 25.03.
I understand you need it for SPDK,
but I think SPDK should rely only on LTS branches.
Is there a specific reason why 25.03 is required in SPDK?
> v2:
> - fixed warnings with e-mail address
> - fixed warnings with typos
> - there is a warning about __rte_packed_begin not being in
> pair with __rte_packed_end which on reading the code turns
> out to be false. It is an anomaly of how git-format
> generates the patch and how check-patch checks it.
When sending a v2, please use -v2 so it is reflected in the email subject.
> Pravin M Bathija (3):
> mailmap: add user
> vhost_user: configure memory slots
> vhost_user: support for memory regions
>
> .mailmap | 1 +
> lib/vhost/rte_vhost.h | 4 +
> lib/vhost/vhost_user.c | 325 +++++++++++++++++++++++++++++++++++------
> lib/vhost/vhost_user.h | 10 ++
> 4 files changed, 296 insertions(+), 44 deletions(-)
^ permalink raw reply [flat|nested] 11+ messages in thread
* RE: [PATCH 0/3] vhost_user: configure memory slots
2025-08-19 11:36 ` [PATCH 0/3] vhost_user: configure memory slots Thomas Monjalon
@ 2025-08-22 2:36 ` Bathija, Pravin
2025-08-22 2:48 ` Bathija, Pravin
1 sibling, 0 replies; 11+ messages in thread
From: Bathija, Pravin @ 2025-08-22 2:36 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, pravin.m.bathija.dev, Maxime Coquelin, Chenbo Xia,
David Marchand, Bruce Richardson
Comments inline:
Internal Use - Confidential
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Tuesday, August 19, 2025 4:36 AM
To: Bathija, Pravin <Pravin.Bathija@dell.com>
Cc: dev@dpdk.org; pravin.m.bathija.dev@gmail.com; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbox@nvidia.com>; David Marchand <david.marchand@redhat.com>; Bruce Richardson <bruce.richardson@intel.com>
Subject: Re: [PATCH 0/3] vhost_user: configure memory slots
[EXTERNAL EMAIL]
Comments inline
12/08/2025 04:33, Pravin M Bathija:
> - add protocol feature support add and removal of memory regions/slots
> dynamically.
> - this patch set has been tested with multiple instances of fio
What is "fio"?
Fio is an open-source tool to test various I/O workloads on disks and file systems.
https://github.com/axboe/fio
> connected via libblkio to spdk target backed unix domain socket.
> - please apply this this patch-set to dpdk mainline as well as version
> 25.03.0.
As mentionned previously, we don't plan to maintain 25.03.
I understand you need it for SPDK,
but I think SPDK should rely only on LTS branches.
Is there a specific reason why 25.03 is required in SPDK?
> v2:
> - fixed warnings with e-mail address
> - fixed warnings with typos
> - there is a warning about __rte_packed_begin not being in pair with
> __rte_packed_end which on reading the code turns out to be false. It
> is an anomaly of how git-format generates the patch and how
> check-patch checks it.
When sending a v2, please use -v2 so it is reflected in the email subject.
> Pravin M Bathija (3):
> mailmap: add user
> vhost_user: configure memory slots
> vhost_user: support for memory regions
>
> .mailmap | 1 +
> lib/vhost/rte_vhost.h | 4 +
> lib/vhost/vhost_user.c | 325
> +++++++++++++++++++++++++++++++++++------
> lib/vhost/vhost_user.h | 10 ++
> 4 files changed, 296 insertions(+), 44 deletions(-)
^ permalink raw reply [flat|nested] 11+ messages in thread
* RE: [PATCH 0/3] vhost_user: configure memory slots
2025-08-19 11:36 ` [PATCH 0/3] vhost_user: configure memory slots Thomas Monjalon
2025-08-22 2:36 ` Bathija, Pravin
@ 2025-08-22 2:48 ` Bathija, Pravin
2025-08-22 7:33 ` Bathija, Pravin
1 sibling, 1 reply; 11+ messages in thread
From: Bathija, Pravin @ 2025-08-22 2:48 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, pravin.m.bathija.dev, Maxime Coquelin, Chenbo Xia,
David Marchand, Bruce Richardson
Comments inline:
Internal Use - Confidential
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Tuesday, August 19, 2025 4:36 AM
To: Bathija, Pravin <Pravin.Bathija@dell.com>
Cc: dev@dpdk.org; pravin.m.bathija.dev@gmail.com; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbox@nvidia.com>; David Marchand <david.marchand@redhat.com>; Bruce Richardson <bruce.richardson@intel.com>
Subject: Re: [PATCH 0/3] vhost_user: configure memory slots
[EXTERNAL EMAIL]
Comments inline
12/08/2025 04:33, Pravin M Bathija:
> - add protocol feature support add and removal of memory regions/slots
> dynamically.
> - this patch set has been tested with multiple instances of fio
What is "fio"?
Fio is an open-source tool to test I/O workloads on disks and file systems.
> connected via libblkio to spdk target backed unix domain socket.
> - please apply this this patch-set to dpdk mainline as well as version
> 25.03.0.
As mentionned previously, we don't plan to maintain 25.03.
I understand you need it for SPDK,
but I think SPDK should rely only on LTS branches.
Is there a specific reason why 25.03 is required in SPDK?
When we do sub-module init on spdk master, it uses dpdk version 25.03 from its own sub-repo.
Our changes for vhost have already been pushed to spdk master. We have also pushed the same dpdk
changes in the spdk sub-repo "dpdk" which is version 25.03. I was just ensuring that there is uniformity
in the 25.03 codebase, whether it is on the spdk sub-repo or 25.03 within dpdk repo.
> v2:
> - fixed warnings with e-mail address
> - fixed warnings with typos
> - there is a warning about __rte_packed_begin not being in pair with
> __rte_packed_end which on reading the code turns out to be false. It
> is an anomaly of how git-format generates the patch and how
> check-patch checks it.
When sending a v2, please use -v2 so it is reflected in the email subject.
I will remember this for next time. Thank you for pointing that out.
> Pravin M Bathija (3):
> mailmap: add user
> vhost_user: configure memory slots
> vhost_user: support for memory regions
>
> .mailmap | 1 +
> lib/vhost/rte_vhost.h | 4 +
> lib/vhost/vhost_user.c | 325
> +++++++++++++++++++++++++++++++++++------
> lib/vhost/vhost_user.h | 10 ++
> 4 files changed, 296 insertions(+), 44 deletions(-)
^ permalink raw reply [flat|nested] 11+ messages in thread
* RE: [PATCH 0/3] vhost_user: configure memory slots
2025-08-22 2:48 ` Bathija, Pravin
@ 2025-08-22 7:33 ` Bathija, Pravin
2025-08-25 8:49 ` Maxime Coquelin
0 siblings, 1 reply; 11+ messages in thread
From: Bathija, Pravin @ 2025-08-22 7:33 UTC (permalink / raw)
To: Maxime Coquelin, Bruce Richardson, David Marchand, Chenbo Xia
Cc: dev, pravin.m.bathija.dev, Thomas Monjalon
Dear Maxime, Bruce, David, Chenbo,
I would like to follow up on the status of the patches and when we should expect these to be merged into the mainline.
Please let me know if I can help answer any questions you might have.
Regards,
Pravin
Internal Use - Confidential
-----Original Message-----
From: Bathija, Pravin
Sent: Thursday, August 21, 2025 7:48 PM
To: Thomas Monjalon <thomas@monjalon.net>
Cc: dev@dpdk.org; pravin.m.bathija.dev@gmail.com; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbox@nvidia.com>; David Marchand <david.marchand@redhat.com>; Bruce Richardson <bruce.richardson@intel.com>
Subject: RE: [PATCH 0/3] vhost_user: configure memory slots
Comments inline:
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Tuesday, August 19, 2025 4:36 AM
To: Bathija, Pravin <Pravin.Bathija@dell.com>
Cc: dev@dpdk.org; pravin.m.bathija.dev@gmail.com; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbox@nvidia.com>; David Marchand <david.marchand@redhat.com>; Bruce Richardson <bruce.richardson@intel.com>
Subject: Re: [PATCH 0/3] vhost_user: configure memory slots
[EXTERNAL EMAIL]
Comments inline
12/08/2025 04:33, Pravin M Bathija:
> - add protocol feature support add and removal of memory regions/slots
> dynamically.
> - this patch set has been tested with multiple instances of fio
What is "fio"?
Fio is an open-source tool to test I/O workloads on disks and file systems.
> connected via libblkio to spdk target backed unix domain socket.
> - please apply this this patch-set to dpdk mainline as well as version
> 25.03.0.
As mentionned previously, we don't plan to maintain 25.03.
I understand you need it for SPDK,
but I think SPDK should rely only on LTS branches.
Is there a specific reason why 25.03 is required in SPDK?
When we do sub-module init on spdk master, it uses dpdk version 25.03 from its own sub-repo.
Our changes for vhost have already been pushed to spdk master. We have also pushed the same dpdk
changes in the spdk sub-repo "dpdk" which is version 25.03. I was just ensuring that there is uniformity
in the 25.03 codebase, whether it is on the spdk sub-repo or 25.03 within dpdk repo.
> v2:
> - fixed warnings with e-mail address
> - fixed warnings with typos
> - there is a warning about __rte_packed_begin not being in pair with
> __rte_packed_end which on reading the code turns out to be false. It
> is an anomaly of how git-format generates the patch and how
> check-patch checks it.
When sending a v2, please use -v2 so it is reflected in the email subject.
I will remember this for next time. Thank you for pointing that out.
> Pravin M Bathija (3):
> mailmap: add user
> vhost_user: configure memory slots
> vhost_user: support for memory regions
>
> .mailmap | 1 +
> lib/vhost/rte_vhost.h | 4 +
> lib/vhost/vhost_user.c | 325
> +++++++++++++++++++++++++++++++++++------
> lib/vhost/vhost_user.h | 10 ++
> 4 files changed, 296 insertions(+), 44 deletions(-)
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 0/3] vhost_user: configure memory slots
2025-08-22 7:33 ` Bathija, Pravin
@ 2025-08-25 8:49 ` Maxime Coquelin
0 siblings, 0 replies; 11+ messages in thread
From: Maxime Coquelin @ 2025-08-25 8:49 UTC (permalink / raw)
To: Bathija, Pravin, Bruce Richardson, David Marchand, Chenbo Xia
Cc: dev, pravin.m.bathija.dev, Thomas Monjalon
Hi,
Please try to fix your email client so that we can follow the replies
inlined.
On 8/22/25 9:33 AM, Bathija, Pravin wrote:
> Dear Maxime, Bruce, David, Chenbo,
>
> I would like to follow up on the status of the patches and when we should expect these to be merged into the mainline.
>
> Please let me know if I can help answer any questions you might have.
>
> Regards,
> Pravin
>
>
> Internal Use - Confidential
> -----Original Message-----
> From: Bathija, Pravin
> Sent: Thursday, August 21, 2025 7:48 PM
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org; pravin.m.bathija.dev@gmail.com; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbox@nvidia.com>; David Marchand <david.marchand@redhat.com>; Bruce Richardson <bruce.richardson@intel.com>
> Subject: RE: [PATCH 0/3] vhost_user: configure memory slots
>
> Comments inline:
>
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, August 19, 2025 4:36 AM
> To: Bathija, Pravin <Pravin.Bathija@dell.com>
> Cc: dev@dpdk.org; pravin.m.bathija.dev@gmail.com; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbox@nvidia.com>; David Marchand <david.marchand@redhat.com>; Bruce Richardson <bruce.richardson@intel.com>
> Subject: Re: [PATCH 0/3] vhost_user: configure memory slots
>
>
>
> Comments inline
>
> 12/08/2025 04:33, Pravin M Bathija:
>> - add protocol feature support for dynamic addition and removal of memory
>> regions/slots.
>> - this patch set has been tested with multiple instances of fio
>
> What is "fio"?
> Fio is an open-source tool to test I/O workloads on disks and file systems.
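(Not part of the patches: as a sketch of the test setup the cover letter describes, a fio job driving an spdk vhost-user target through fio's libblkio ioengine might look like the fragment below. The socket path is invented, and the option names are assumptions based on fio's libblkio engine; check your fio version before relying on them.)

```ini
; hypothetical fio job: random I/O through libblkio to a vhost-user socket
[global]
ioengine=libblkio                      ; fio's libblkio engine (fio >= 3.31)
libblkio_driver=virtio-blk-vhost-user  ; libblkio driver for vhost-user-blk
libblkio_path=/var/tmp/vhost.0         ; unix socket exported by the spdk target
direct=1
rw=randrw
bs=4k
iodepth=32
runtime=60
time_based=1

[worker]
numjobs=4                              ; multiple fio instances, as in the test
```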
>
>> connected via libblkio to an spdk-target-backed unix domain socket.
>> - please apply this patch-set to the dpdk mainline as well as to
>> version 25.03.0.
>
> As mentioned previously, we don't plan to maintain 25.03.
> I understand you need it for SPDK,
> but I think SPDK should rely only on LTS branches.
> Is there a specific reason why 25.03 is required in SPDK?
>
> When we do sub-module init on spdk master, it uses dpdk version 25.03 from its own sub-repo.
> Our changes for vhost have already been pushed to spdk master. We have also pushed the same dpdk
> changes to the spdk sub-repo "dpdk", which is version 25.03. I was just ensuring that there is uniformity
> in the 25.03 codebase, whether it is in the spdk sub-repo or in the 25.03 tree within the dpdk repo.
Ok, still, it would be better for SPDK to use LTS releases, as we have
no plans to add extra stable branches.
Also, your series may be considered a feature and not a bug fix, so
even if 25.03 were a stable branch (which it is not), the series would
not be backported to it.
>
>
>> v2:
>> - fixed warnings with e-mail address
>> - fixed warnings with typos
>> - there is a warning about __rte_packed_begin not being paired with
>> __rte_packed_end which, on reading the code, turns out to be a false
>> positive. It is an artifact of how git format-patch generates the
>> patch and how checkpatch parses it.
>
> When sending a v2, please use -v2 so it is reflected in the email subject.
> I will remember this for next time. Thank you for pointing that out.
>
>> Pravin M Bathija (3):
>> mailmap: add user
>> vhost_user: configure memory slots
>> vhost_user: support for memory regions
>>
>> .mailmap | 1 +
>> lib/vhost/rte_vhost.h | 4 +
>> lib/vhost/vhost_user.c | 325
>> +++++++++++++++++++++++++++++++++++------
>> lib/vhost/vhost_user.h | 10 ++
>> 4 files changed, 296 insertions(+), 44 deletions(-)
>
>
>
* [PATCH 0/3] vhost_user: configure memory slots
@ 2025-08-08 4:29 Pravin M Bathija
0 siblings, 0 replies; 11+ messages in thread
From: Pravin M Bathija @ 2025-08-08 4:29 UTC (permalink / raw)
To: dev; +Cc: pravin.bathija, pravin.m.bathija.dev, boris.glimcher
add protocol feature support for dynamic addition and removal of memory
regions/slots.
this patch set has been tested with multiple instances of fio
connected via libblkio to an spdk-target-backed unix domain socket.
please apply this patch-set to the dpdk mainline as well as to
version 25.03.0.
Pravin M Bathija (3):
mailmap: add user
vhost_user: configure memory slots
vhost_user: support for memory regions
.mailmap | 1 +
lib/vhost/rte_vhost.h | 4 +
lib/vhost/vhost_user.c | 325 +++++++++++++++++++++++++++++++++++------
lib/vhost/vhost_user.h | 10 ++
4 files changed, 296 insertions(+), 44 deletions(-)
--
2.43.0