From: David Marchand <david.marchand@redhat.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: dev <dev@dpdk.org>, "Xia, Chenbo" <chenbo.xia@intel.com>,
Adrian Moreno Zapata <amorenoz@redhat.com>,
Olivier Matz <olivier.matz@6wind.com>,
bnemeth@redhat.com
Subject: Re: [dpdk-dev] [PATCH v2 2/3] vhost: move dirty logging cache out of the virtqueue
Date: Tue, 16 Mar 2021 14:13:18 +0100 [thread overview]
Message-ID: <CAJFAV8xkwJQw=sgomxyEeKLHdAK9usTtZ7yuqraMpEi-fSDa3g@mail.gmail.com> (raw)
In-Reply-To: <20210316124153.503928-3-maxime.coquelin@redhat.com>
On Tue, Mar 16, 2021 at 1:42 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch moves the per-virtqueue's dirty logging cache
> out of the virtqueue struct, by allocating it dynamically
> only when live-migration is enabled.
>
> It saves 8 cachelines in vhost_virtqueue struct.
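For readers wondering where the 8 cachelines come from: with the in-tree VHOST_LOG_CACHE_NR of 32 and a log_cache_entry that pads to 16 bytes on a 64-bit target, the embedded array was 512 bytes. A standalone sanity check (mirroring the struct locally, no DPDK headers; the layout assumption is mine):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local mirror of the vhost log cache entry; assumes a 64-bit
 * target where this struct pads to 16 bytes. */
struct log_cache_entry {
	uint32_t offset;
	unsigned long val;
};

#define VHOST_LOG_CACHE_NR 32
#define CACHE_LINE_SIZE 64

/* Size of the per-virtqueue cache array that the patch moves out
 * of struct vhost_virtqueue. */
static size_t log_cache_bytes(void)
{
	return sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR;
}
```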
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/librte_vhost/vhost.c | 14 ++++++++++++++
> lib/librte_vhost/vhost.h | 2 +-
> lib/librte_vhost/vhost_user.c | 25 +++++++++++++++++++++++++
> 3 files changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 5a7c0c6cff..c3490ce897 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -145,6 +145,10 @@ __vhost_log_cache_sync(struct virtio_net *dev, struct vhost_virtqueue *vq)
> if (unlikely(!dev->log_base))
> return;
>
> + /* No cache, nothing to sync */
> + if (unlikely(!vq->log_cache))
> + return;
> +
> rte_atomic_thread_fence(__ATOMIC_RELEASE);
>
> log_base = (unsigned long *)(uintptr_t)dev->log_base;
> @@ -177,6 +181,14 @@ vhost_log_cache_page(struct virtio_net *dev, struct vhost_virtqueue *vq,
> uint32_t offset = page / (sizeof(unsigned long) << 3);
> int i;
>
> + if (unlikely(!vq->log_cache)) {
> + /* No logging cache allocated, write dirty log map directly */
> + rte_smp_wmb();
We try not to reintroduce full barriers; checkpatch caught this one. rte_atomic_thread_fence(__ATOMIC_RELEASE), as already used in __vhost_log_cache_sync() above, would be preferred.
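For illustration only, in plain C11 atomics (toy names, not the vhost code): a release fence is enough to order the data store before the store that publishes it, without paying for a full barrier.

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy sketch: make the data write visible before the "dirty"
 * publication write, using a release fence instead of a full
 * memory barrier. */
static unsigned long page_copy;
static atomic_ulong dirty_page;

void log_dirty(unsigned long page)
{
	page_copy = page;                 /* the data write... */
	atomic_thread_fence(memory_order_release);
	/* ...is ordered before the store that publishes it */
	atomic_store_explicit(&dirty_page, page, memory_order_relaxed);
}
```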
> + vhost_log_page((uint8_t *)(uintptr_t)dev->log_base, page);
> +
> + return;
> + }
> +
> for (i = 0; i < vq->log_cache_nb_elem; i++) {
> struct log_cache_entry *elem = vq->log_cache + i;
>
> @@ -354,6 +366,8 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
> }
> rte_free(vq->batch_copy_elems);
> rte_mempool_free(vq->iotlb_pool);
> + if (vq->log_cache)
> + rte_free(vq->log_cache);
No if() needed: rte_free() is documented to do nothing when passed NULL.
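free() (and rte_free(), per its API doc) is a no-op on NULL, so the guard only adds noise. A toy sketch of the same cleanup without the check (plain libc, illustrative names):

```c
#include <stdlib.h>

/* Illustrative mirror of the free_vq() cleanup: no NULL check is
 * needed before the free call, since free(NULL) does nothing. */
struct vq_toy {
	void *log_cache;
};

void vq_cleanup(struct vq_toy *vq)
{
	free(vq->log_cache);    /* safe even when log_cache == NULL */
	vq->log_cache = NULL;
}
```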
> rte_free(vq);
> }
>
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 717f410548..3a71dfeed9 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -183,7 +183,7 @@ struct vhost_virtqueue {
> bool used_wrap_counter;
> bool avail_wrap_counter;
>
> - struct log_cache_entry log_cache[VHOST_LOG_CACHE_NR];
> + struct log_cache_entry *log_cache;
> uint16_t log_cache_nb_elem;
>
> rte_rwlock_t iotlb_lock;
> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> index a60bb945ad..0f452d6ff3 100644
> --- a/lib/librte_vhost/vhost_user.c
> +++ b/lib/librte_vhost/vhost_user.c
> @@ -2022,6 +2022,11 @@ vhost_user_get_vring_base(struct virtio_net **pdev,
> rte_free(vq->batch_copy_elems);
> vq->batch_copy_elems = NULL;
>
> + if (vq->log_cache) {
> + rte_free(vq->log_cache);
> + vq->log_cache = NULL;
> + }
Idem: the NULL check before rte_free() can go.
> +
> msg->size = sizeof(msg->payload.state);
> msg->fd_num = 0;
>
> @@ -2121,6 +2126,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, struct VhostUserMsg *msg,
> int fd = msg->fds[0];
> uint64_t size, off;
> void *addr;
> + uint32_t i;
>
> if (validate_msg_fds(msg, 1) != 0)
> return RTE_VHOST_MSG_RESULT_ERR;
> @@ -2174,6 +2180,25 @@ vhost_user_set_log_base(struct virtio_net **pdev, struct VhostUserMsg *msg,
> dev->log_base = dev->log_addr + off;
> dev->log_size = size;
>
> + for (i = 0; i < dev->nr_vring; i++) {
> + struct vhost_virtqueue *vq = dev->virtqueue[i];
> +
> + if (vq->log_cache) {
> + rte_free(vq->log_cache);
> + vq->log_cache = NULL;
> + }
Idem, no NULL check needed here either.
> + vq->log_cache_nb_elem = 0;
> + vq->log_cache = rte_zmalloc("vq log cache",
> + sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR,
> + 0);
> + /*
> + * If log cache alloc fail, don't fail migration, but no
> + * caching will be done, which will impact performance
> + */
> + if (!vq->log_cache)
> + VHOST_LOG_CONFIG(ERR, "Failed to allocate VQ logging cache\n");
> + }
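The degrade-gracefully approach looks right to me: a failed cache allocation should not fail migration, only slow the logging path down. For illustration (plain libc, names are mine, not the vhost code):

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy sketch of the pattern used here: try to set up an optional
 * acceleration structure, and fall back to the uncached slow path
 * instead of failing the whole operation. */
struct queue {
	unsigned long *cache;   /* NULL => log pages directly (slow path) */
	unsigned int cache_nb;
};

int enable_logging(struct queue *q, size_t nb_entries)
{
	free(q->cache);         /* drop any stale cache first */
	q->cache = NULL;
	q->cache_nb = 0;

	q->cache = calloc(nb_entries, sizeof(*q->cache));
	if (q->cache == NULL) {
		/* Don't fail: logging still works, only slower. */
		fprintf(stderr, "cache alloc failed, logging uncached\n");
	}
	return 0;               /* success either way */
}
```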
> +
> /*
> * The spec is not clear about it (yet), but QEMU doesn't expect
> * any payload in the reply.
> --
> 2.29.2
>
--
David Marchand
Thread overview: 8+ messages
2021-03-16 12:41 [dpdk-dev] [PATCH v2 0/3] vhost: make virtqueue cache-friendly Maxime Coquelin
2021-03-16 12:41 ` [dpdk-dev] [PATCH v2 1/3] vhost: remove unused Vhost virtqueue field Maxime Coquelin
2021-03-16 12:41 ` [dpdk-dev] [PATCH v2 2/3] vhost: move dirty logging cache out of the virtqueue Maxime Coquelin
2021-03-16 13:13 ` David Marchand [this message]
2021-03-17 10:20 ` Maxime Coquelin
2021-03-16 12:41 ` [dpdk-dev] [PATCH v2 3/3] vhost: optimize vhost virtqueue struct Maxime Coquelin
2021-03-16 13:38 ` David Marchand
2021-03-17 10:26 ` Maxime Coquelin