DPDK patches and discussions
From: "Bathija, Pravin" <Pravin.Bathija@dell.com>
To: fengchengwen <fengchengwen@huawei.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "pravin.m.bathija.dev@gmail.com" <pravin.m.bathija.dev@gmail.com>
Subject: RE: [PATCH v3 4/5] vhost_user: support function defines for back-end
Date: Tue, 11 Nov 2025 11:31:23 +0000
Message-ID: <SJ0PR19MB4606743990AD1AFE8EC06217F5CFA@SJ0PR19MB4606.namprd19.prod.outlook.com>
In-Reply-To: <051163f2-7b6e-49db-83cf-7f6f366c448a@huawei.com>

Responses inline.


> -----Original Message-----
> From: fengchengwen <fengchengwen@huawei.com>
> Sent: Tuesday, November 4, 2025 12:06 AM
> To: Bathija, Pravin <Pravin.Bathija@dell.com>; dev@dpdk.org
> Cc: pravin.m.bathija.dev@gmail.com
> Subject: Re: [PATCH v3 4/5] vhost_user: support function defines for back-end
>
> On 11/4/2025 12:21 PM, Pravin M Bathija wrote:
> > Here we define support functions which are called from the various
> > vhost-user back-end message functions like set memory table, get
> > memory slots, add memory region, remove memory region.  These are
> > essentially common functions to initialize memory, unmap a set of
> > memory regions, perform region copy and align memory addresses.
> >
> > Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
> > ---
> >  lib/vhost/vhost_user.c | 80 +++++++++++++++++++++++++++++++++++-------
> >  1 file changed, 68 insertions(+), 12 deletions(-)
> >
> > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> > index 168432e7d1..9a85f2fc92 100644
> > --- a/lib/vhost/vhost_user.c
> > +++ b/lib/vhost/vhost_user.c
> > @@ -228,7 +228,17 @@ async_dma_map(struct virtio_net *dev, bool do_map)
> >  }
> >
> >  static void
> > -free_mem_region(struct virtio_net *dev)
> > +free_mem_region(struct rte_vhost_mem_region *reg)
> > +{
> > +   if (reg != NULL && reg->host_user_addr) {
> > +           munmap(reg->mmap_addr, reg->mmap_size);
> > +           close(reg->fd);
> > +           memset(reg, 0, sizeof(struct rte_vhost_mem_region));
> > +   }
> > +}
> > +
> > +static void
> > +free_all_mem_regions(struct virtio_net *dev)
> >  {
> >     uint32_t i;
> >     struct rte_vhost_mem_region *reg;
> > @@ -239,12 +249,10 @@ free_mem_region(struct virtio_net *dev)
> >     if (dev->async_copy && rte_vfio_is_enabled("vfio"))
> >             async_dma_map(dev, false);
> >
> > -   for (i = 0; i < dev->mem->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> >             reg = &dev->mem->regions[i];
> > -           if (reg->host_user_addr) {
> > -                   munmap(reg->mmap_addr, reg->mmap_size);
> > -                   close(reg->fd);
> > -           }
> > +           if (reg->mmap_addr)
> > +                   free_mem_region(reg);
> >     }
> >  }
> >
> > @@ -258,7 +266,7 @@ vhost_backend_cleanup(struct virtio_net *dev)
> >             vdpa_dev->ops->dev_cleanup(dev->vid);
> >
> >     if (dev->mem) {
> > -           free_mem_region(dev);
> > +           free_all_mem_regions(dev);
> >             rte_free(dev->mem);
> >             dev->mem = NULL;
> >     }
> > @@ -707,7 +715,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >     vhost_devices[dev->vid] = dev;
> >
> >     mem_size = sizeof(struct rte_vhost_memory) +
> > -           sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
> > +           sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS;
> >     mem = rte_realloc_socket(dev->mem, mem_size, 0, node);
> >     if (!mem) {
> >             VHOST_CONFIG_LOG(dev->ifname, ERR,
> > @@ -811,8 +819,10 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
> >     uint32_t i;
> >     uintptr_t hua = (uintptr_t)ptr;
> >
> > -   for (i = 0; i < mem->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> >             r = &mem->regions[i];
> > +           if (r->host_user_addr == 0)
> > +                   continue;
> >             if (hua >= r->host_user_addr &&
> >                     hua < r->host_user_addr + r->size) {
> >                     return get_blk_size(r->fd);
> > @@ -1250,9 +1260,13 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
> >      * retrieve the region offset when handling userfaults.
> >      */
> >     memory = &ctx->msg.payload.memory;
> > -   for (i = 0; i < memory->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
>
> I think using MAX_NREGIONS is mostly for convenience, but it will impact
> performance, because rte_vhost_va_from_guest_pa() would have to iterate
> over the entire array.
>

Replaced VHOST_MEMORY_MAX_NREGIONS with memory->nregions. Please review v4.

> I think we should keep the original implementation: make sure the first
> nregions entries of the memory-region array are always valid.
>
> Besides, where is the modification for rte_vhost_va_from_guest_pa()?

Could you please provide more detail? rte_vhost_va_from_guest_pa() was never called from here before.
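
For context, here is a minimal sketch (simplified struct and hypothetical
function names, not the actual DPDK code) of the lookup cost being discussed:
with a dense array the scan stops after nregions entries, while a sparse
fixed-size array forces every slot to be visited and holes skipped on each
guest-physical-address translation.

    #include <stdint.h>

    struct mem_region_sketch {
    	uint64_t guest_phys_addr;
    	uint64_t host_user_addr;
    	uint64_t size;
    };

    /* Dense layout: entries [0, nregions) are always valid. */
    static uint64_t
    gpa_to_vva_dense(struct mem_region_sketch *regions, uint32_t nregions,
    		uint64_t gpa)
    {
    	for (uint32_t i = 0; i < nregions; i++) {
    		struct mem_region_sketch *r = &regions[i];

    		if (gpa >= r->guest_phys_addr && gpa < r->guest_phys_addr + r->size)
    			return gpa - r->guest_phys_addr + r->host_user_addr;
    	}
    	return 0;
    }

    /* Sparse layout: all max_slots entries must be scanned and empty
     * slots skipped, which is the extra data-path cost noted above. */
    static uint64_t
    gpa_to_vva_sparse(struct mem_region_sketch *regions, uint32_t max_slots,
    		uint64_t gpa)
    {
    	for (uint32_t i = 0; i < max_slots; i++) {
    		struct mem_region_sketch *r = &regions[i];

    		if (r->host_user_addr == 0)
    			continue;
    		if (gpa >= r->guest_phys_addr && gpa < r->guest_phys_addr + r->size)
    			return gpa - r->guest_phys_addr + r->host_user_addr;
    	}
    	return 0;
    }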

>
> > +           int reg_msg_index = 0;
> >             reg = &dev->mem->regions[i];
> > -           memory->regions[i].userspace_addr = reg->host_user_addr;
> > +           if (reg->host_user_addr == 0)
> > +                   continue;
> > +           memory->regions[reg_msg_index].userspace_addr = reg->host_user_addr;
> > +           reg_msg_index++;
> >     }
> >
> >     /* Send the addresses back to qemu */
> > @@ -1279,8 +1293,10 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
> >     }
> >
> >     /* Now userfault register and we can use the memory */
> > -   for (i = 0; i < memory->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> >             reg = &dev->mem->regions[i];
> > +           if (reg->host_user_addr == 0)
> > +                   continue;
> >             if (vhost_user_postcopy_region_register(dev, reg) < 0)
> >                     return -1;
> >     }
> > @@ -1385,6 +1401,46 @@ vhost_user_mmap_region(struct virtio_net *dev,
> >     return 0;
> >  }
> >
> > +static int
> > +vhost_user_initialize_memory(struct virtio_net **pdev)
>
> This function should be part of 3/5, otherwise 3/5 will fail to compile.

I have moved the support functions to patch 3. Please review v4.

>
> > +{
> > +   struct virtio_net *dev = *pdev;
> > +   int numa_node = SOCKET_ID_ANY;
> > +
> > +   /*
> > +    * If VQ 0 has already been allocated, try to allocate on the same
> > +    * NUMA node. It can be reallocated later in numa_realloc().
> > +    */
> > +   if (dev->nr_vring > 0)
> > +           numa_node = dev->virtqueue[0]->numa_node;
> > +
> > +   dev->nr_guest_pages = 0;
> > +   if (dev->guest_pages == NULL) {
> > +           dev->max_guest_pages = 8;
>
> It should be VHOST_MEMORY_MAX_NREGIONS

Done. Please review v4.
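
To illustrate why the initial capacity matters: dev->guest_pages grows by a
doubling reallocation as pages are added, so starting from
VHOST_MEMORY_MAX_NREGIONS instead of 8 avoids several early reallocations
when many regions are registered. Below is a sketch of that growth pattern
with a hypothetical helper (not the actual vhost code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct guest_page_sketch {
    	uint64_t guest_phys_addr;
    	uint64_t host_iova;
    	uint64_t size;
    };

    /* Hypothetical append helper modeled on the doubling-growth pattern:
     * when the array is full, capacity doubles and the new tail is zeroed. */
    static int
    add_guest_page_sketch(struct guest_page_sketch **pages, uint32_t *nr,
    		uint32_t *max, const struct guest_page_sketch *pg)
    {
    	if (*nr == *max) {
    		uint32_t new_max = *max * 2;
    		struct guest_page_sketch *tmp;

    		tmp = realloc(*pages, new_max * sizeof(**pages));
    		if (tmp == NULL)
    			return -1;
    		memset(tmp + *max, 0, (new_max - *max) * sizeof(**pages));
    		*pages = tmp;
    		*max = new_max;
    	}
    	(*pages)[(*nr)++] = *pg;
    	return 0;
    }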

>
> > +           dev->guest_pages = rte_zmalloc_socket(NULL,
> > +                                   dev->max_guest_pages *
> > +                                   sizeof(struct guest_page),
> > +                                   RTE_CACHE_LINE_SIZE,
> > +                                   numa_node);
> > +           if (dev->guest_pages == NULL) {
> > +                   VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                           "failed to allocate memory for dev-
> >guest_pages");
> > +                   return -1;
> > +           }
> > +   }
> > +
> > +   dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
> > +           sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS, 0, numa_node);
> > +   if (dev->mem == NULL) {
> > +           VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
> > +           rte_free(dev->guest_pages);
> > +           dev->guest_pages = NULL;
> > +           return -1;
> > +   }
> > +
> > +   return 0;
> > +}
> > +
> >  static int
> >  vhost_user_set_mem_table(struct virtio_net **pdev,
> >                     struct vhu_msg_context *ctx,


