From: "Hu, Jiayu" <jiayu.hu@intel.com>
To: "Ding, Xuan" <xuan.ding@intel.com>,
"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
"Xia, Chenbo" <chenbo.xia@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"He, Xingguang" <xingguang.he@intel.com>,
"Yang, YvonneX" <yvonnex.yang@intel.com>,
"Jiang, Cheng1" <cheng1.jiang@intel.com>
Subject: RE: [PATCH] vhost: fix unnecessary dirty page logging
Date: Thu, 7 Jul 2022 08:23:36 +0000
Message-ID: <fd595d34f83f4e0b94d444cb1a33d369@intel.com>
In-Reply-To: <20220707065513.66458-1-xuan.ding@intel.com>
> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Thursday, July 7, 2022 2:55 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH] vhost: fix unnecessary dirty page logging
>
> From: Xuan Ding <xuan.ding@intel.com>
>
> Dirty page logging is only required in the vhost enqueue direction for
> live migration. This patch removes the unnecessary dirty page logging
> in the vhost dequeue direction, which otherwise causes a performance
> drop. Some if-else branches are also simplified to improve performance.
>
> Fixes: 6d823bb302c7 ("vhost: prepare sync for descriptor to mbuf refactoring")
> Fixes: b6eee3e83402 ("vhost: fix sync dequeue offload")
>
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
> lib/vhost/virtio_net.c | 31 +++++++++++++------------------
> 1 file changed, 13 insertions(+), 18 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index e842c35fef..12b7fbe7f9 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -1113,27 +1113,27 @@ sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  			rte_memcpy((void *)((uintptr_t)(buf_addr)),
>  				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
>  				cpy_len);
> +			vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
> +			PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
>  		} else {
>  			rte_memcpy(rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
>  				(void *)((uintptr_t)(buf_addr)),
>  				cpy_len);
>  		}
> -		vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
> -		PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
>  	} else {
>  		if (to_desc) {
>  			batch_copy[vq->batch_copy_nb_elems].dst =
>  				(void *)((uintptr_t)(buf_addr));
>  			batch_copy[vq->batch_copy_nb_elems].src =
>  				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
> +			batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
> +			batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
>  		} else {
>  			batch_copy[vq->batch_copy_nb_elems].dst =
>  				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
>  			batch_copy[vq->batch_copy_nb_elems].src =
>  				(void *)((uintptr_t)(buf_addr));
>  		}
> -		batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
> -		batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
>  		vq->batch_copy_nb_elems++;
>  	}
>  }
> @@ -2739,18 +2739,14 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  			if (async_fill_seg(dev, vq, cur, mbuf_offset,
>  					buf_iova + buf_offset, cpy_len, false) < 0)
>  				goto error;
> +		} else if (likely(hdr && cur == m)) {
> +			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
> +				(void *)((uintptr_t)(buf_addr + buf_offset)),
> +				cpy_len);
>  		} else {
> -			if (hdr && cur == m) {
> -				rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
> -					(void *)((uintptr_t)(buf_addr + buf_offset)),
> -					cpy_len);
> -				vhost_log_cache_write_iova(dev, vq, buf_iova + buf_offset, cpy_len);
> -				PRINT_PACKET(dev, (uintptr_t)(buf_addr + buf_offset), cpy_len, 0);
> -			} else {
> -				sync_fill_seg(dev, vq, cur, mbuf_offset,
> -					buf_addr + buf_offset,
> -					buf_iova + buf_offset, cpy_len, false);
> -			}
> +			sync_fill_seg(dev, vq, cur, mbuf_offset,
> +				buf_addr + buf_offset,
> +				buf_iova + buf_offset, cpy_len, false);
>  		}
> 
>  		mbuf_avail -= cpy_len;
> @@ -2804,9 +2800,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		async_iter_finalize(async);
>  		if (hdr)
>  			pkts_info[slot_idx].nethdr = *hdr;
> -	} else {
> -		if (hdr)
> -			vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
> +	} else if (hdr) {
> +		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
>  	}
> 
>  	return 0;
> --
> 2.17.1
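
As a side note for readers of this thread, the rule the patch enforces can be
summarized in a small sketch. This is illustrative only and not part of the
patch: copy_seg() is a hypothetical helper name, and the snippet assumes it
would sit inside lib/vhost/virtio_net.c, where rte_memcpy() and
vhost_log_cache_write_iova() are already visible.

/*
 * Dirty-page logging is only needed when the backend *writes* into guest
 * memory (enqueue, to_desc == true), because live migration must re-send
 * pages the backend has modified. A dequeue only *reads* guest memory,
 * so logging on that path is pure overhead.
 */
static void
copy_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
	 void *guest_buf, uint64_t guest_iova,
	 void *mbuf_data, uint32_t len, bool to_desc)
{
	if (to_desc) {
		/* host -> guest: the guest page is dirtied, so log it */
		rte_memcpy(guest_buf, mbuf_data, len);
		vhost_log_cache_write_iova(dev, vq, guest_iova, len);
	} else {
		/* guest -> host: guest memory is only read, nothing to log */
		rte_memcpy(mbuf_data, guest_buf, len);
	}
}

That is the split the patch applies in both sync_fill_seg() and desc_to_mbuf().
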
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Thanks,
Jiayu