From: fengchengwen <fengchengwen@huawei.com>
To: Pravin M Bathija <pravin.bathija@dell.com>, <dev@dpdk.org>
Cc: <pravin.m.bathija.dev@gmail.com>
Subject: Re: [PATCH v3 3/5] vhost_user: Function defs for add/rem mem regions
Date: Tue, 4 Nov 2025 15:48:51 +0800	[thread overview]
Message-ID: <3ae04cac-b80b-4cc4-900e-41e7669ada40@huawei.com> (raw)
In-Reply-To: <20251104042142.2787631-4-pravin.bathija@dell.com>

On 11/4/2025 12:21 PM, Pravin M Bathija wrote:
> These changes cover the function definitions for the add/remove memory
> region calls, which are invoked on receiving a vhost-user message from
> the vhost-user front-end (e.g. QEMU). The vhost-user front-end software
> sends an add-memory-region message that includes the guest physical
> address range (GPA), the memory size, the front-end's virtual address,
> and the offset within the file-descriptor mapping. The back-end (DPDK)
> uses this information to create a mapping within its own address space
> using mmap on the fd + offset. This added memory region serves as
> shared memory to pass data back and forth between the vhost-user front
> and back ends. Similarly, in the case of remove memory region, the
> corresponding memory region is unmapped from the back-end (DPDK). In
> our case, in addition to testing with the QEMU front-end, testing has
> also been performed with a libblkio front-end and an SPDK/DPDK
> back-end. We did I/O, using a libblkio-based device driver, to
> SPDK-based drives.
> There are also changes for set-mem-table and a new definition for
> get-memory-slots. The vhost set-memory-table message is essentially how
> the vhost-user front-end (QEMU or libblkio) tells the vhost-user
> back-end (DPDK) about all of its guest memory regions. This allows the
> back-end to translate guest physical addresses to back-end virtual
> addresses and perform direct I/O to guest memory. Our changes optimize
> the set-memory-table call to use common support functions. The
> get-memory-slots message is how the vhost-user front-end queries the
> vhost-user back-end for the number of memory slots available to be
> registered by the back-end.

The commit log contains too much extraneous information. This feature is a de facto
standard and has been supported since early QEMU versions (6.2, as far as I know),
so I think we could simplify the commit log.

> 
> Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
> ---
>  lib/vhost/vhost_user.c | 253 +++++++++++++++++++++++++++++++++++------
>  1 file changed, 221 insertions(+), 32 deletions(-)
> 
> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index 4bfb13fb98..168432e7d1 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -71,6 +71,9 @@ VHOST_MESSAGE_HANDLER(VHOST_USER_SET_FEATURES, vhost_user_set_features, false, t
>  VHOST_MESSAGE_HANDLER(VHOST_USER_SET_OWNER, vhost_user_set_owner, false, true) \
>  VHOST_MESSAGE_HANDLER(VHOST_USER_RESET_OWNER, vhost_user_reset_owner, false, false) \
>  VHOST_MESSAGE_HANDLER(VHOST_USER_SET_MEM_TABLE, vhost_user_set_mem_table, true, true) \
> +VHOST_MESSAGE_HANDLER(VHOST_USER_GET_MAX_MEM_SLOTS, vhost_user_get_max_mem_slots, false, false) \
> +VHOST_MESSAGE_HANDLER(VHOST_USER_ADD_MEM_REG, vhost_user_add_mem_reg, true, true) \
> +VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG, vhost_user_rem_mem_reg, false, true) \
>  VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_BASE, vhost_user_set_log_base, true, true) \
>  VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_FD, vhost_user_set_log_fd, true, true) \
>  VHOST_MESSAGE_HANDLER(VHOST_USER_SET_VRING_NUM, vhost_user_set_vring_num, false, true) \
> @@ -1390,7 +1393,6 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>  	struct virtio_net *dev = *pdev;
>  	struct VhostUserMemory *memory = &ctx->msg.payload.memory;
>  	struct rte_vhost_mem_region *reg;
> -	int numa_node = SOCKET_ID_ANY;
>  	uint64_t mmap_offset;
>  	uint32_t i;
>  	bool async_notify = false;
> @@ -1435,39 +1437,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>  		if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
>  			vhost_user_iotlb_flush_all(dev);
>  
> -		free_mem_region(dev);
> +		free_all_mem_regions(dev);
>  		rte_free(dev->mem);
>  		dev->mem = NULL;
>  	}
>  
> -	/*
> -	 * If VQ 0 has already been allocated, try to allocate on the same
> -	 * NUMA node. It can be reallocated later in numa_realloc().
> -	 */
> -	if (dev->nr_vring > 0)
> -		numa_node = dev->virtqueue[0]->numa_node;
> -
> -	dev->nr_guest_pages = 0;
> -	if (dev->guest_pages == NULL) {
> -		dev->max_guest_pages = 8;
> -		dev->guest_pages = rte_zmalloc_socket(NULL,
> -					dev->max_guest_pages *
> -					sizeof(struct guest_page),
> -					RTE_CACHE_LINE_SIZE,
> -					numa_node);
> -		if (dev->guest_pages == NULL) {
> -			VHOST_CONFIG_LOG(dev->ifname, ERR,
> -				"failed to allocate memory for dev->guest_pages");
> -			goto close_msg_fds;
> -		}
> -	}
> -
> -	dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
> -		sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node);
> -	if (dev->mem == NULL) {
> -		VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
> -		goto free_guest_pages;
> -	}
> +	if (vhost_user_initialize_memory(pdev) < 0)
> +		goto close_msg_fds;
>  
>  	for (i = 0; i < memory->nregions; i++) {
>  		reg = &dev->mem->regions[i];
> @@ -1531,11 +1507,182 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>  	return RTE_VHOST_MSG_RESULT_OK;
>  
>  free_mem_table:
> -	free_mem_region(dev);
> +	free_all_mem_regions(dev);
>  	rte_free(dev->mem);
>  	dev->mem = NULL;
> +	rte_free(dev->guest_pages);
> +	dev->guest_pages = NULL;
> +close_msg_fds:
> +	close_msg_fds(ctx);
> +	return RTE_VHOST_MSG_RESULT_ERR;
> +}
> +
>  
> -free_guest_pages:
> +static int
> +vhost_user_get_max_mem_slots(struct virtio_net **pdev __rte_unused,
> +			struct vhu_msg_context *ctx,
> +			int main_fd __rte_unused)
> +{
> +	uint32_t max_mem_slots = VHOST_MEMORY_MAX_NREGIONS;
> +
> +	ctx->msg.payload.u64 = (uint64_t)max_mem_slots;
> +	ctx->msg.size = sizeof(ctx->msg.payload.u64);
> +	ctx->fd_num = 0;
> +
> +	return RTE_VHOST_MSG_RESULT_REPLY;
> +}
> +
> +static int
> +vhost_user_add_mem_reg(struct virtio_net **pdev,
> +			struct vhu_msg_context *ctx,
> +			int main_fd __rte_unused)
> +{
> +	struct virtio_net *dev = *pdev;
> +	struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;

Please put the long declaration line first (longest lines first).
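
I.e.:

	struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;
	struct virtio_net *dev = *pdev;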

> +	uint32_t i;
> +
> +	/* make sure new region will fit */
> +	if (dev->mem != NULL && dev->mem->nregions >= VHOST_MEMORY_MAX_NREGIONS) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR,
> +			"too many memory regions already (%u)",
> +			dev->mem->nregions);

DPDK currently allows up to 100 characters per line, so the log message could go
on one line. The same applies to the logs below.
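
E.g.:

	VHOST_CONFIG_LOG(dev->ifname, ERR, "too many memory regions already (%u)", dev->mem->nregions);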

> +		goto close_msg_fds;
> +	}
> +
> +	/* make sure supplied memory fd present */
> +	if (ctx->fd_num != 1) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR,
> +			"fd count makes no sense (%u)",
> +			ctx->fd_num);
> +		goto close_msg_fds;
> +	}
> +
> +	/* Make sure no overlap in guest virtual address space */
> +	if (dev->mem != NULL && dev->mem->nregions > 0)	{
> +		for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> +			struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
> +
> +			if (current_region->mmap_size == 0)
> +				continue;
> +
> +			uint64_t current_region_guest_start = current_region->guest_user_addr;
> +			uint64_t current_region_guest_end = current_region_guest_start
> +								+ current_region->mmap_size - 1;
> +			uint64_t proposed_region_guest_start = region->userspace_addr;
> +			uint64_t proposed_region_guest_end = proposed_region_guest_start
> +								+ region->memory_size - 1;
> +			bool overlap = false;
> +
> +			bool curent_region_guest_start_overlap =
> +				current_region_guest_start >= proposed_region_guest_start
> +				&& current_region_guest_start <= proposed_region_guest_end;

The && should be placed at the end of the line, per the DPDK coding style.
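
E.g.:

	bool curent_region_guest_start_overlap =
		current_region_guest_start >= proposed_region_guest_start &&
		current_region_guest_start <= proposed_region_guest_end;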

> +			bool curent_region_guest_end_overlap =
> +				current_region_guest_end >= proposed_region_guest_start
> +				&& current_region_guest_end <= proposed_region_guest_end;
> +			bool proposed_region_guest_start_overlap =
> +				proposed_region_guest_start >= current_region_guest_start
> +				&& proposed_region_guest_start <= current_region_guest_end;
> +			bool proposed_region_guest_end_overlap =
> +				proposed_region_guest_end >= current_region_guest_start
> +				&& proposed_region_guest_end <= current_region_guest_end;
> +
> +			overlap = curent_region_guest_start_overlap
> +				|| curent_region_guest_end_overlap
> +				|| proposed_region_guest_start_overlap
> +				|| proposed_region_guest_end_overlap;
> +
> +			if (overlap) {
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"requested memory region overlaps with another region");
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tRequested region address:0x%" PRIx64,
> +					region->userspace_addr);
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tRequested region size:0x%" PRIx64,
> +					region->memory_size);
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tOverlapping region address:0x%" PRIx64,
> +					current_region->guest_user_addr);
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tOverlapping region size:0x%" PRIx64,
> +					current_region->mmap_size);
> +				goto close_msg_fds;

How about adding a sub-function to wrap the check?
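
Something like this untested sketch (the helper name is just a suggestion): two
non-empty ranges overlap iff each one starts before the other ends, which also
collapses the four flags into a single expression:

static bool
mem_regions_overlap(const struct rte_vhost_mem_region *cur,
		const struct VhostUserMemoryRegion *req)
{
	uint64_t cur_start = cur->guest_user_addr;
	uint64_t cur_end = cur_start + cur->mmap_size - 1;
	uint64_t req_start = req->userspace_addr;
	uint64_t req_end = req_start + req->memory_size - 1;

	/* two non-empty ranges overlap iff each starts before the other ends */
	return cur_start <= req_end && req_start <= cur_end;
}

The loop body then reduces to a single
'if (mem_regions_overlap(current_region, region))' check before logging.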

> +			}
> +
> +		}
> +	}
> +
> +	/* convert first region add to normal memory table set */
> +	if (dev->mem == NULL) {
> +		if (vhost_user_initialize_memory(pdev) < 0)
> +			goto close_msg_fds;
> +	}

I think we could put this check at the beginning of the function, so that we could
drop the many 'if (dev->mem == NULL)' checks.
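
E.g. right after the fd-count check at the top of the function:

	if (dev->mem == NULL) {
		if (vhost_user_initialize_memory(pdev) < 0)
			goto close_msg_fds;
	}

then the overlap scan no longer needs the 'dev->mem != NULL' guards.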

> +
> +	/* find a new region and set it like memory table set does */
> +	struct rte_vhost_mem_region *reg = NULL;
> +	uint64_t mmap_offset;

I suggest placing these declarations at the beginning of the function.

> +
> +	for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> +		if (dev->mem->regions[i].guest_user_addr == 0) {
> +			reg = &dev->mem->regions[i];
> +			break;
> +		}
> +	}
> +	if (reg == NULL) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR, "no free memory region");
> +		goto close_msg_fds;
> +	}
> +
> +	reg->guest_phys_addr = region->guest_phys_addr;
> +	reg->guest_user_addr = region->userspace_addr;
> +	reg->size            = region->memory_size;
> +	reg->fd              = ctx->fds[0];
> +
> +	mmap_offset = region->mmap_offset;

No need to define mmap_offset; it is only used once.
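
E.g.:

	if (vhost_user_mmap_region(dev, reg, region->mmap_offset) < 0) {
		VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region");
		goto close_msg_fds;
	}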

> +
> +	if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region");
> +		goto close_msg_fds;
> +	}
> +
> +	dev->mem->nregions++;
> +
> +	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
> +		async_dma_map(dev, true);

This will map all regions; we should map only the new region.
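
E.g. by factoring the loop body of async_dma_map() into a per-region helper
(hypothetical name) and calling it for the new region only:

	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
		async_dma_map_region(dev, reg, true); /* hypothetical per-region helper */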

> +
> +	if (vhost_user_postcopy_register(dev, main_fd, ctx) < 0)
> +		goto free_mem_table;
> +
> +	for (i = 0; i < dev->nr_vring; i++) {
> +		struct vhost_virtqueue *vq = dev->virtqueue[i];
> +
> +		if (!vq)
> +			continue;
> +
> +		if (vq->desc || vq->avail || vq->used) {
> +			/* vhost_user_lock_all_queue_pairs locked all qps */
> +			VHOST_USER_ASSERT_LOCK(dev, vq, VHOST_USER_ADD_MEM_REG);
> +
> +			/*
> +			 * If the memory table got updated, the ring addresses
> +			 * need to be translated again as virtual addresses have
> +			 * changed.
> +			 */
> +			vring_invalidate(dev, vq);
> +
> +			translate_ring_addresses(&dev, &vq);
> +			*pdev = dev;
> +		}
> +	}
> +
> +	dump_guest_pages(dev);
> +
> +	return RTE_VHOST_MSG_RESULT_OK;
> +
> +free_mem_table:
> +	free_all_mem_regions(dev);
> +	rte_free(dev->mem);
> +	dev->mem = NULL;
>  	rte_free(dev->guest_pages);
>  	dev->guest_pages = NULL;
>  close_msg_fds:
> @@ -1543,6 +1690,48 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>  	return RTE_VHOST_MSG_RESULT_ERR;
>  }
>  
> +static int
> +vhost_user_rem_mem_reg(struct virtio_net **pdev __rte_unused,
> +			struct vhu_msg_context *ctx __rte_unused,
> +			int main_fd __rte_unused)
> +{
> +	struct virtio_net *dev = *pdev;
> +	struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;
> +
> +	if ((dev->mem) && (dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) {
> +		struct rte_vdpa_device *vdpa_dev = dev->vdpa_dev;
> +
> +		if (vdpa_dev && vdpa_dev->ops->dev_close)
> +			vdpa_dev->ops->dev_close(dev->vid);
> +		dev->flags &= ~VIRTIO_DEV_VDPA_CONFIGURED;

DPDK's vDPA devices will not report the new feature, so I think this will be dead code.

> +	}
> +
> +	if (dev->mem != NULL && dev->mem->nregions > 0) {
> +		for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {

I suggest defining i at the beginning of the function.

> +			struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
> +
> +			if (current_region->guest_user_addr == 0)
> +				continue;
> +
> +			/*
> +			 * According to the vhost-user specification:
> +			 * The memory region to be removed is identified by its guest address,
> +			 * user address and size. The mmap offset is ignored.
> +			 */
> +			if (region->userspace_addr == current_region->guest_user_addr
> +				&& region->guest_phys_addr == current_region->guest_phys_addr
> +				&& region->memory_size == current_region->size) {
> +				free_mem_region(current_region);

I checked commit 4/5; we should do the async DMA unmap of this region here.

Besides, should we invalidate the vrings here?
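
Roughly (untested, reusing the hypothetical per-region helper suggested for the
add path, and unmapping before the region is freed):

	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
		async_dma_map_region(dev, current_region, false);
	free_mem_region(current_region);

The vrings could then be invalidated the same way the add path does, with
vring_invalidate() plus translate_ring_addresses() for each initialized queue.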

> +				dev->mem->nregions--;
> +				return RTE_VHOST_MSG_RESULT_OK;
> +			}
> +		}
> +	}
> +
> +	VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to find region");
> +	return RTE_VHOST_MSG_RESULT_ERR;
> +}
> +
>  static bool
>  vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  {


Thread overview (11+ messages):
2025-11-04  4:21 [PATCH v3 0/5] Support add/remove memory region & get-max-slots Pravin M Bathija
2025-11-04  4:21 ` [PATCH v3 1/5] vhost: add user to mailmap and define to vhost hdr Pravin M Bathija
2025-11-04  7:15   ` fengchengwen
2025-11-04  4:21 ` [PATCH v3 2/5] vhost_user: header defines for add/rem mem region Pravin M Bathija
2025-11-04  7:18   ` fengchengwen
2025-11-04  4:21 ` [PATCH v3 3/5] vhost_user: Function defs for add/rem mem regions Pravin M Bathija
2025-11-04  7:48   ` fengchengwen [this message]
2025-11-04  4:21 ` [PATCH v3 4/5] vhost_user: support function defines for back-end Pravin M Bathija
2025-11-04  8:05   ` fengchengwen
2025-11-04  4:21 ` [PATCH v3 5/5] vhost_user: Increase number of memory regions Pravin M Bathija
2025-11-04  8:12   ` fengchengwen
