From: Cheng Jiang <cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com,
wenwux.ma@intel.com, yuanx.wang@intel.com,
yvonnex.yang@intel.com, xingguang.he@intel.com,
Cheng Jiang <cheng1.jiang@intel.com>,
stable@dpdk.org
Subject: [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
Date: Tue, 11 Oct 2022 03:08:03 +0000
Message-ID: <20221011030803.16746-3-cheng1.jiang@intel.com>
In-Reply-To: <20221011030803.16746-1-cheng1.jiang@intel.com>
When a packet receive failure and a full DMA ring occur simultaneously
in the asynchronous vhost path, slot_idx needs to be decreased by 1.
For the packed virtqueue, if slot_idx is currently 0 it should wrap to
ring_size - 1, because the ring size is not necessarily a power of 2.
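
A minimal illustrative sketch of the wrap-around decrement, assuming a
ring whose size may not be a power of 2 (the helper name
packed_slot_idx_dec is hypothetical and only mirrors the logic added in
the hunks below; it is not part of the vhost library):

#include <stdint.h>

/*
 * Step slot_idx back by one on a packed ring. Because the packed ring
 * size is not necessarily a power of 2, the usual
 * "(slot_idx - 1) & (ring_size - 1)" mask trick cannot be used and the
 * wrap to ring_size - 1 must be done explicitly.
 */
static inline uint16_t
packed_slot_idx_dec(uint16_t slot_idx, uint16_t ring_size)
{
	if (slot_idx == 0)
		return ring_size - 1;
	return slot_idx - 1;
}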
Fixes: 84d5204310d7 ("vhost: support async dequeue for split ring")
Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
Cc: stable@dpdk.org
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
lib/vhost/virtio_net.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 457ac2e92a..efebd063d7 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3457,6 +3457,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
allocerr_warned = true;
}
dropped = true;
+ slot_idx--;
break;
}
@@ -3647,6 +3648,12 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
if (unlikely(virtio_dev_tx_async_single_packed(dev, vq, mbuf_pool, pkt,
slot_idx, legacy_ol_flags))) {
rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx);
+
+ if (slot_idx == 0)
+ slot_idx = vq->size - 1;
+ else
+ slot_idx--;
+
break;
}
@@ -3674,8 +3681,13 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
async->buffer_idx_packed += vq->size - pkt_err;
while (pkt_err-- > 0) {
- rte_pktmbuf_free(pkts_info[slot_idx % vq->size].mbuf);
- slot_idx--;
+ rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
+ descs_err += pkts_info[slot_idx].descs;
+
+ if (slot_idx == 0)
+ slot_idx = vq->size - 1;
+ else
+ slot_idx--;
}
/* recover available ring */
--
2.35.1