* [PATH 0/2] vhost: fix some async vhost index calculation issues
@ 2022-08-22 4:31 Cheng Jiang
2022-08-22 4:31 ` [PATH 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
From: Cheng Jiang @ 2022-08-22 4:31 UTC
To: maxime.coquelin, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, Cheng Jiang
Fix some async vhost index calculation issues.
Cheng Jiang (2):
vhost: fix descs count in async vhost packed ring
vhost: fix slot index calculation in async vhost
lib/vhost/virtio_net.c | 40 +++++++++++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 11 deletions(-)
--
2.35.1
* [PATH 1/2] vhost: fix descs count in async vhost packed ring
2022-08-22 4:31 [PATH 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
@ 2022-08-22 4:31 ` Cheng Jiang
2022-10-03 10:07 ` Maxime Coquelin
2022-08-22 4:31 ` [PATH 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
2022-10-11 3:08 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
From: Cheng Jiang @ 2022-08-22 4:31 UTC
To: maxime.coquelin, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, Cheng Jiang
When vhost receive packets from the front-end using packed virtqueue, it
might use multiple descriptors for one packet, so we need calculate and
record the descriptor number for each packet to update available
descriptor counter and used descriptor counter, and rollback when DMA
ring is full.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
lib/vhost/virtio_net.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 35fa4670fd..bfc6d65b7c 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3553,14 +3553,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
}
static __rte_always_inline void
-vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
+vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
+ uint16_t buf_id, uint16_t count)
{
struct vhost_async *async = vq->async;
uint16_t idx = async->buffer_idx_packed;
async->buffers_packed[idx].id = buf_id;
async->buffers_packed[idx].len = 0;
- async->buffers_packed[idx].count = 1;
+ async->buffers_packed[idx].count = count;
async->buffer_idx_packed++;
if (async->buffer_idx_packed >= vq->size)
@@ -3581,6 +3582,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
uint16_t nr_vec = 0;
uint32_t buf_len;
struct buf_vector buf_vec[BUF_VECTOR_MAX];
+ struct vhost_async *async = vq->async;
+ struct async_inflight_info *pkts_info = async->pkts_info;
static bool allocerr_warned;
if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
@@ -3609,8 +3612,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
return -1;
}
+ pkts_info[slot_idx].descs = desc_count;
+
/* update async shadow packed ring */
- vhost_async_shadow_dequeue_single_packed(vq, buf_id);
+ vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+
+ vq_inc_last_avail_packed(vq, desc_count);
return err;
}
@@ -3649,9 +3656,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
}
pkts_info[slot_idx].mbuf = pkt;
-
- vq_inc_last_avail_packed(vq, 1);
-
}
n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
@@ -3662,6 +3666,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
pkt_err = pkt_idx - n_xfer;
if (unlikely(pkt_err)) {
+ uint16_t descs_err = 0;
+
pkt_idx -= pkt_err;
/**
@@ -3678,10 +3684,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
}
/* recover available ring */
- if (vq->last_avail_idx >= pkt_err) {
- vq->last_avail_idx -= pkt_err;
+ if (vq->last_avail_idx >= descs_err) {
+ vq->last_avail_idx -= descs_err;
} else {
- vq->last_avail_idx += vq->size - pkt_err;
+ vq->last_avail_idx += vq->size - descs_err;
vq->avail_wrap_counter ^= 1;
}
}
--
2.35.1
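The rollback arithmetic this patch introduces is easier to see in isolation. Below is a minimal, self-contained sketch of the corrected recovery path; the ring size, the per-packet descriptor counts, and the starting index are hypothetical values chosen only for illustration, not taken from the patch.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint16_t ring_size = 256;
    /* Hypothetical: three packets failed to fit on the DMA ring, and
     * each had consumed a different number of ring descriptors. */
    const uint16_t descs[3] = {1, 4, 2};
    uint16_t last_avail_idx = 2;    /* index after the failed packets */
    uint16_t avail_wrap_counter = 1;

    /* Accumulate descriptors, not packets: descs_err becomes 7
     * while pkt_err is only 3. */
    uint16_t descs_err = 0;
    for (int i = 0; i < 3; i++)
        descs_err += descs[i];

    /* Rewind the avail index, flipping the wrap counter when the
     * rewind crosses index 0, as the patched code does. */
    if (last_avail_idx >= descs_err) {
        last_avail_idx -= descs_err;
    } else {
        last_avail_idx += ring_size - descs_err;
        avail_wrap_counter ^= 1;
    }
    printf("last_avail_idx=%u wrap=%u\n",
           (unsigned)last_avail_idx, (unsigned)avail_wrap_counter);
    /* Prints: last_avail_idx=251 wrap=0, i.e. 2 + 256 - 7. */
    return 0;
}

Rewinding by pkt_err (3) instead of descs_err (7), as the pre-fix code did, would leave the avail index four descriptor slots ahead of what the guest actually made available.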
* [PATH 2/2] vhost: fix slot index calculation in async vhost
2022-08-22 4:31 [PATH 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
2022-08-22 4:31 ` [PATH 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
@ 2022-08-22 4:31 ` Cheng Jiang
2022-10-03 10:10 ` Maxime Coquelin
2022-10-11 3:08 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
From: Cheng Jiang @ 2022-08-22 4:31 UTC
To: maxime.coquelin, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, Cheng Jiang
When the packet receiving failure and the DMA ring full occur
simultaneously in the asynchronous vhost, the slot_idx needs to be
reduced by 1. For packed virtqueue, the slot index should be
ring_size - 1, if the slot_idx is currently 0, since the ring size is
not necessarily the power of 2.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
lib/vhost/virtio_net.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index bfc6d65b7c..f804bce0bd 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3462,6 +3462,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
allocerr_warned = true;
}
dropped = true;
+ slot_idx--;
break;
}
@@ -3652,6 +3653,12 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
if (unlikely(virtio_dev_tx_async_single_packed(dev, vq, mbuf_pool, pkt,
slot_idx, legacy_ol_flags))) {
rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx);
+
+ if (slot_idx == 0)
+ slot_idx = vq->size - 1;
+ else
+ slot_idx--;
+
break;
}
@@ -3679,8 +3686,13 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
async->buffer_idx_packed += vq->size - pkt_err;
while (pkt_err-- > 0) {
- rte_pktmbuf_free(pkts_info[slot_idx % vq->size].mbuf);
- slot_idx--;
+ rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
+ descs_err += pkts_info[slot_idx].descs;
+
+ if (slot_idx == 0)
+ slot_idx = vq->size - 1;
+ else
+ slot_idx--;
}
/* recover available ring */
--
2.35.1
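The non-power-of-2 point in this patch is subtle enough to deserve a standalone demonstration. In the pre-fix code, slot_idx is a uint16_t that is decremented past zero and then reduced with % vq->size; underflow plus modulo only lands on the last slot when the size is a power of 2. A minimal sketch follows; the sizes 4096 and 3000 are hypothetical, picked only to contrast the two cases.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t slot_idx = 0;

    /* Power-of-2 ring: underflow to 65535 then modulo happens to
     * give the last slot, 65535 % 4096 == 4095. */
    uint16_t size_pow2 = 4096;
    printf("pow2:     %u (want %u)\n",
           (unsigned)((uint16_t)(slot_idx - 1) % size_pow2),
           (unsigned)(size_pow2 - 1));

    /* Non-power-of-2 ring, as packed virtqueues allow:
     * 65535 % 3000 == 2535, but the last slot is 2999. */
    uint16_t size_odd = 3000;
    printf("non-pow2: %u (want %u)\n",
           (unsigned)((uint16_t)(slot_idx - 1) % size_odd),
           (unsigned)(size_odd - 1));

    /* The fix wraps explicitly instead of relying on underflow: */
    uint16_t fixed = (slot_idx == 0) ? (uint16_t)(size_odd - 1)
                                     : (uint16_t)(slot_idx - 1);
    printf("explicit: %u\n", (unsigned)fixed);   /* 2999 */
    return 0;
}

The split-ring hunk gets away with a plain slot_idx-- because split virtqueue sizes are always powers of 2, so the existing mask-based index arithmetic there stays correct; the packed-ring hunks need the explicit compare-and-wrap shown above.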
* Re: [PATH 1/2] vhost: fix descs count in async vhost packed ring
2022-08-22 4:31 ` [PATH 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
@ 2022-10-03 10:07 ` Maxime Coquelin
From: Maxime Coquelin @ 2022-10-03 10:07 UTC
To: Cheng Jiang, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he
On 8/22/22 06:31, Cheng Jiang wrote:
> When vhost receive packets from the front-end using packed virtqueue, it
receives*
> might use multiple descriptors for one packet, so we need calculate and
so we need to*
> record the descriptor number for each packet to update available
> descriptor counter and used descriptor counter, and rollback when DMA
> ring is full.
This is a fix, so the Fixes tag should be present, and stable@dpdk.org
cc'ed.
> Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> ---
> lib/vhost/virtio_net.c | 24 +++++++++++++++---------
> 1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 35fa4670fd..bfc6d65b7c 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3553,14 +3553,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
> }
>
> static __rte_always_inline void
> -vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
> +vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
> + uint16_t buf_id, uint16_t count)
> {
> struct vhost_async *async = vq->async;
> uint16_t idx = async->buffer_idx_packed;
>
> async->buffers_packed[idx].id = buf_id;
> async->buffers_packed[idx].len = 0;
> - async->buffers_packed[idx].count = 1;
> + async->buffers_packed[idx].count = count;
>
> async->buffer_idx_packed++;
> if (async->buffer_idx_packed >= vq->size)
> @@ -3581,6 +3582,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> uint16_t nr_vec = 0;
> uint32_t buf_len;
> struct buf_vector buf_vec[BUF_VECTOR_MAX];
> + struct vhost_async *async = vq->async;
> + struct async_inflight_info *pkts_info = async->pkts_info;
> static bool allocerr_warned;
>
> if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
> @@ -3609,8 +3612,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> return -1;
> }
>
> + pkts_info[slot_idx].descs = desc_count;
> +
> /* update async shadow packed ring */
> - vhost_async_shadow_dequeue_single_packed(vq, buf_id);
> + vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
> +
> + vq_inc_last_avail_packed(vq, desc_count);
>
> return err;
> }
> @@ -3649,9 +3656,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> }
>
> pkts_info[slot_idx].mbuf = pkt;
> -
> - vq_inc_last_avail_packed(vq, 1);
> -
> }
>
> n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
> @@ -3662,6 +3666,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> pkt_err = pkt_idx - n_xfer;
>
> if (unlikely(pkt_err)) {
> + uint16_t descs_err = 0;
> +
> pkt_idx -= pkt_err;
>
> /**
> @@ -3678,10 +3684,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> }
>
> /* recover available ring */
> - if (vq->last_avail_idx >= pkt_err) {
> - vq->last_avail_idx -= pkt_err;
> + if (vq->last_avail_idx >= descs_err) {
> + vq->last_avail_idx -= descs_err;
> } else {
> - vq->last_avail_idx += vq->size - pkt_err;
> + vq->last_avail_idx += vq->size - descs_err;
> vq->avail_wrap_counter ^= 1;
> }
> }
I'm not sure I understand: isn't descs_err always 0 here?
Maxime
* Re: [PATH 2/2] vhost: fix slot index calculation in async vhost
2022-08-22 4:31 ` [PATH 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
@ 2022-10-03 10:10 ` Maxime Coquelin
2022-10-11 2:31 ` Jiang, Cheng1
From: Maxime Coquelin @ 2022-10-03 10:10 UTC
To: Cheng Jiang, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he
On 8/22/22 06:31, Cheng Jiang wrote:
> When the packet receiving failure and the DMA ring full occur
> simultaneously in the asynchronous vhost, the slot_idx needs to be
> reduced by 1. For packed virtqueue, the slot index should be
s/reduced/decreased/
> ring_size - 1, if the slot_idx is currently 0, since the ring size is
> not necessarily the power of 2.
This is a fix, so a Fixes tag and stable@dpdk.org are required.
> Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> ---
> lib/vhost/virtio_net.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index bfc6d65b7c..f804bce0bd 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3462,6 +3462,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> allocerr_warned = true;
> }
> dropped = true;
> + slot_idx--;
> break;
> }
>
> @@ -3652,6 +3653,12 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> if (unlikely(virtio_dev_tx_async_single_packed(dev, vq, mbuf_pool, pkt,
> slot_idx, legacy_ol_flags))) {
> rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx);
> +
> + if (slot_idx == 0)
> + slot_idx = vq->size - 1;
> + else
> + slot_idx--;
> +
> break;
> }
>
> @@ -3679,8 +3686,13 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> async->buffer_idx_packed += vq->size - pkt_err;
>
> while (pkt_err-- > 0) {
> - rte_pktmbuf_free(pkts_info[slot_idx % vq->size].mbuf);
> - slot_idx--;
> + rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
> + descs_err += pkts_info[slot_idx].descs;
> +
> + if (slot_idx == 0)
> + slot_idx = vq->size - 1;
> + else
> + slot_idx--;
> }
>
> /* recover available ring */
* RE: [PATH 2/2] vhost: fix slot index calculation in async vhost
2022-10-03 10:10 ` Maxime Coquelin
@ 2022-10-11 2:31 ` Jiang, Cheng1
From: Jiang, Cheng1 @ 2022-10-11 2:31 UTC
To: Maxime Coquelin, Xia, Chenbo
Cc: dev, Hu, Jiayu, Ding, Xuan, Ma, WenwuX, Wang, YuanX, Yang,
YvonneX, He, Xingguang
Hi,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Monday, October 3, 2022 6:10 PM
> To: Jiang, Cheng1 <cheng1.jiang@intel.com>; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang,
> YuanX <yuanx.wang@intel.com>; Yang, YvonneX
> <yvonnex.yang@intel.com>; He, Xingguang <xingguang.he@intel.com>
> Subject: Re: [PATH 2/2] vhost: fix slot index calculation in async vhost
>
>
>
> On 8/22/22 06:31, Cheng Jiang wrote:
> > When the packet receiving failure and the DMA ring full occur
> > simultaneously in the asynchronous vhost, the slot_idx needs to be
> > reduced by 1. For packed virtqueue, the slot index should be
>
> s/reduced/decreased/
Sure, thanks.
>
> > ring_size - 1, if the slot_idx is currently 0, since the ring size is
> > not necessarily the power of 2.
>
> This is a fix, so a Fixes tag and stable@dpdk.org are required.
Sorry for missing that. It will be fixed in the next version.
Thanks,
Cheng
>
> > Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> > ---
> > lib/vhost/virtio_net.c | 16 ++++++++++++++--
> > 1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index
> > bfc6d65b7c..f804bce0bd 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -3462,6 +3462,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev,
> struct vhost_virtqueue *vq,
> > allocerr_warned = true;
> > }
> > dropped = true;
> > + slot_idx--;
> > break;
> > }
> >
> > @@ -3652,6 +3653,12 @@ virtio_dev_tx_async_packed(struct virtio_net
> *dev, struct vhost_virtqueue *vq,
> > if (unlikely(virtio_dev_tx_async_single_packed(dev, vq,
> mbuf_pool, pkt,
> > slot_idx, legacy_ol_flags))) {
> > rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx],
> count - pkt_idx);
> > +
> > + if (slot_idx == 0)
> > + slot_idx = vq->size - 1;
> > + else
> > + slot_idx--;
> > +
> > break;
> > }
> >
> > @@ -3679,8 +3686,13 @@ virtio_dev_tx_async_packed(struct virtio_net
> *dev, struct vhost_virtqueue *vq,
> > async->buffer_idx_packed += vq->size - pkt_err;
> >
> > while (pkt_err-- > 0) {
> > - rte_pktmbuf_free(pkts_info[slot_idx % vq-
> >size].mbuf);
> > - slot_idx--;
> > + rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
> > + descs_err += pkts_info[slot_idx].descs;
> > +
> > + if (slot_idx == 0)
> > + slot_idx = vq->size - 1;
> > + else
> > + slot_idx--;
> > }
> >
> > /* recover available ring */
* [PATCH v2 0/2] vhost: fix some async vhost index calculation issues
2022-08-22 4:31 [PATH 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
2022-08-22 4:31 ` [PATH 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
2022-08-22 4:31 ` [PATH 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
@ 2022-10-11 3:08 ` Cheng Jiang
2022-10-11 3:08 ` [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
From: Cheng Jiang @ 2022-10-11 3:08 UTC
To: maxime.coquelin, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, Cheng Jiang
Fix some async vhost index calculation issues.
v2: added the Fixes tags and replaced some words in the commit messages.
Cheng Jiang (2):
vhost: fix descs count in async vhost packed ring
vhost: fix slot index calculation in async vhost
lib/vhost/virtio_net.c | 40 +++++++++++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 11 deletions(-)
--
2.35.1
* [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring
2022-10-11 3:08 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
@ 2022-10-11 3:08 ` Cheng Jiang
2022-10-21 8:16 ` Maxime Coquelin
2022-10-11 3:08 ` [PATCH v2 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
2022-10-26 9:27 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Xia, Chenbo
From: Cheng Jiang @ 2022-10-11 3:08 UTC
To: maxime.coquelin, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, Cheng Jiang, stable
When vhost receive packets from the front-end using packed virtqueue, it
might use multiple descriptors for one packet, so we need calculate and
record the descriptor number for each packet to update available
descriptor counter and used descriptor counter, and rollback when DMA
ring is full.
Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
Cc: stable@dpdk.org
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
lib/vhost/virtio_net.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 8f4d0f0502..457ac2e92a 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3548,14 +3548,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
}
static __rte_always_inline void
-vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
+vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
+ uint16_t buf_id, uint16_t count)
{
struct vhost_async *async = vq->async;
uint16_t idx = async->buffer_idx_packed;
async->buffers_packed[idx].id = buf_id;
async->buffers_packed[idx].len = 0;
- async->buffers_packed[idx].count = 1;
+ async->buffers_packed[idx].count = count;
async->buffer_idx_packed++;
if (async->buffer_idx_packed >= vq->size)
@@ -3576,6 +3577,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
uint16_t nr_vec = 0;
uint32_t buf_len;
struct buf_vector buf_vec[BUF_VECTOR_MAX];
+ struct vhost_async *async = vq->async;
+ struct async_inflight_info *pkts_info = async->pkts_info;
static bool allocerr_warned;
if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
@@ -3604,8 +3607,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
return -1;
}
+ pkts_info[slot_idx].descs = desc_count;
+
/* update async shadow packed ring */
- vhost_async_shadow_dequeue_single_packed(vq, buf_id);
+ vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+
+ vq_inc_last_avail_packed(vq, desc_count);
return err;
}
@@ -3644,9 +3651,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
}
pkts_info[slot_idx].mbuf = pkt;
-
- vq_inc_last_avail_packed(vq, 1);
-
}
n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
@@ -3657,6 +3661,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
pkt_err = pkt_idx - n_xfer;
if (unlikely(pkt_err)) {
+ uint16_t descs_err = 0;
+
pkt_idx -= pkt_err;
/**
@@ -3673,10 +3679,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
}
/* recover available ring */
- if (vq->last_avail_idx >= pkt_err) {
- vq->last_avail_idx -= pkt_err;
+ if (vq->last_avail_idx >= descs_err) {
+ vq->last_avail_idx -= descs_err;
} else {
- vq->last_avail_idx += vq->size - pkt_err;
+ vq->last_avail_idx += vq->size - descs_err;
vq->avail_wrap_counter ^= 1;
}
}
--
2.35.1
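The recorded count also matters on the used side: when the shadow ring filled by vhost_async_shadow_dequeue_single_packed() is later flushed, the used index advances by each entry's count, following the usual packed-ring convention. The sketch below is a simplified model of that flush, not the library code, and the packets, buffer ids, and ring size are hypothetical; it shows what hardcoding count = 1 would do.

#include <stdint.h>
#include <stdio.h>

struct shadow_elem {
    uint16_t id;
    uint16_t count;     /* descriptors consumed by this buffer */
};

int main(void)
{
    const uint16_t ring_size = 256;
    /* Hypothetical: two dequeued packets that consumed 3 and 2
     * descriptors, so the avail index has advanced by 5. */
    struct shadow_elem pre_fix[2]  = { {0, 1}, {3, 1} };  /* count = 1 */
    struct shadow_elem post_fix[2] = { {0, 3}, {3, 2} };  /* real count */

    uint16_t used_pre = 0, used_post = 0;
    for (int i = 0; i < 2; i++) {
        /* Simplified flush: advance the used index by each element's
         * count, wrapping at the ring size. */
        used_pre += pre_fix[i].count;
        if (used_pre >= ring_size)
            used_pre -= ring_size;
        used_post += post_fix[i].count;
        if (used_post >= ring_size)
            used_post -= ring_size;
    }
    /* The avail side moved 5 slots; a used side that moved only 2
     * leaves the two indexes permanently out of step. */
    printf("used index: pre-fix=%u post-fix=%u (avail moved 5)\n",
           (unsigned)used_pre, (unsigned)used_post);
    return 0;
}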
* [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
2022-10-11 3:08 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
2022-10-11 3:08 ` [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
@ 2022-10-11 3:08 ` Cheng Jiang
2022-10-13 9:40 ` Ling, WeiX
2022-10-26 9:27 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Xia, Chenbo
From: Cheng Jiang @ 2022-10-11 3:08 UTC
To: maxime.coquelin, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, Cheng Jiang, stable
When the packet receiving failure and the DMA ring full occur
simultaneously in the asynchronous vhost, the slot_idx needs to be
decreased by 1. For packed virtqueue, the slot index should be
ring_size - 1, if the slot_idx is currently 0, since the ring size is
not necessarily the power of 2.
Fixes: 84d5204310d7 ("vhost: support async dequeue for split ring")
Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
Cc: stable@dpdk.org
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
lib/vhost/virtio_net.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 457ac2e92a..efebd063d7 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3457,6 +3457,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
allocerr_warned = true;
}
dropped = true;
+ slot_idx--;
break;
}
@@ -3647,6 +3648,12 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
if (unlikely(virtio_dev_tx_async_single_packed(dev, vq, mbuf_pool, pkt,
slot_idx, legacy_ol_flags))) {
rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx);
+
+ if (slot_idx == 0)
+ slot_idx = vq->size - 1;
+ else
+ slot_idx--;
+
break;
}
@@ -3674,8 +3681,13 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
async->buffer_idx_packed += vq->size - pkt_err;
while (pkt_err-- > 0) {
- rte_pktmbuf_free(pkts_info[slot_idx % vq->size].mbuf);
- slot_idx--;
+ rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
+ descs_err += pkts_info[slot_idx].descs;
+
+ if (slot_idx == 0)
+ slot_idx = vq->size - 1;
+ else
+ slot_idx--;
}
/* recover available ring */
--
2.35.1
* RE: [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
2022-10-11 3:08 ` [PATCH v2 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
@ 2022-10-13 9:40 ` Ling, WeiX
2022-10-21 8:17 ` Maxime Coquelin
2022-10-24 8:43 ` Xia, Chenbo
From: Ling, WeiX @ 2022-10-13 9:40 UTC
To: Jiang, Cheng1, maxime.coquelin, Xia, Chenbo
Cc: dev, Hu, Jiayu, Ding, Xuan, Ma, WenwuX, Wang, YuanX, Yang,
YvonneX, He, Xingguang, Jiang, Cheng1, stable
> -----Original Message-----
> From: Cheng Jiang <cheng1.jiang@intel.com>
> Sent: Tuesday, October 11, 2022 11:08 AM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang,
> YuanX <yuanx.wang@intel.com>; Yang, YvonneX
> <yvonnex.yang@intel.com>; He, Xingguang <xingguang.he@intel.com>;
> Jiang, Cheng1 <cheng1.jiang@intel.com>; stable@dpdk.org
> Subject: [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
>
> When the packet receiving failure and the DMA ring full occur simultaneously
> in the asynchronous vhost, the slot_idx needs to be decreased by 1. For
> packed virtqueue, the slot index should be ring_size - 1, if the slot_idx is
> currently 0, since the ring size is not necessarily the power of 2.
>
> Fixes: 84d5204310d7 ("vhost: support async dequeue for split ring")
> Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
> Cc: stable@dpdk.org
>
> Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> ---
Tested-by: Wei Ling <weix.ling@intel.com>
* Re: [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring
2022-10-11 3:08 ` [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
@ 2022-10-21 8:16 ` Maxime Coquelin
2022-10-24 1:41 ` Jiang, Cheng1
From: Maxime Coquelin @ 2022-10-21 8:16 UTC
To: Cheng Jiang, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, stable
On 10/11/22 05:08, Cheng Jiang wrote:
> When vhost receive packets from the front-end using packed virtqueue, it
receives
> might use multiple descriptors for one packet, so we need calculate and
to calculate
> record the descriptor number for each packet to update available
> descriptor counter and used descriptor counter, and rollback when DMA
> ring is full.
>
> Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
> Cc: stable@dpdk.org
>
> Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> ---
> lib/vhost/virtio_net.c | 24 +++++++++++++++---------
> 1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 8f4d0f0502..457ac2e92a 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3548,14 +3548,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
> }
>
> static __rte_always_inline void
> -vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
> +vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
> + uint16_t buf_id, uint16_t count)
> {
> struct vhost_async *async = vq->async;
> uint16_t idx = async->buffer_idx_packed;
>
> async->buffers_packed[idx].id = buf_id;
> async->buffers_packed[idx].len = 0;
> - async->buffers_packed[idx].count = 1;
> + async->buffers_packed[idx].count = count;
>
> async->buffer_idx_packed++;
> if (async->buffer_idx_packed >= vq->size)
> @@ -3576,6 +3577,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> uint16_t nr_vec = 0;
> uint32_t buf_len;
> struct buf_vector buf_vec[BUF_VECTOR_MAX];
> + struct vhost_async *async = vq->async;
> + struct async_inflight_info *pkts_info = async->pkts_info;
> static bool allocerr_warned;
>
> if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
> @@ -3604,8 +3607,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> return -1;
> }
>
> + pkts_info[slot_idx].descs = desc_count;
> +
> /* update async shadow packed ring */
> - vhost_async_shadow_dequeue_single_packed(vq, buf_id);
> + vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
> +
> + vq_inc_last_avail_packed(vq, desc_count);
>
> return err;
> }
> @@ -3644,9 +3651,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> }
>
> pkts_info[slot_idx].mbuf = pkt;
> -
> - vq_inc_last_avail_packed(vq, 1);
> -
> }
>
> n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
> @@ -3657,6 +3661,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> pkt_err = pkt_idx - n_xfer;
>
> if (unlikely(pkt_err)) {
> + uint16_t descs_err = 0;
> +
> pkt_idx -= pkt_err;
>
> /**
> @@ -3673,10 +3679,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> }
>
> /* recover available ring */
> - if (vq->last_avail_idx >= pkt_err) {
> - vq->last_avail_idx -= pkt_err;
> + if (vq->last_avail_idx >= descs_err) {
> + vq->last_avail_idx -= descs_err;
> } else {
> - vq->last_avail_idx += vq->size - pkt_err;
> + vq->last_avail_idx += vq->size - descs_err;
> vq->avail_wrap_counter ^= 1;
> }
> }
If only the commit message typos need to be fixed, maybe no need to send
a new version.
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
2022-10-11 3:08 ` [PATCH v2 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
2022-10-13 9:40 ` Ling, WeiX
@ 2022-10-21 8:17 ` Maxime Coquelin
2022-10-24 8:43 ` Xia, Chenbo
From: Maxime Coquelin @ 2022-10-21 8:17 UTC
To: Cheng Jiang, chenbo.xia
Cc: dev, jiayu.hu, xuan.ding, wenwux.ma, yuanx.wang, yvonnex.yang,
xingguang.he, stable
On 10/11/22 05:08, Cheng Jiang wrote:
> When the packet receiving failure and the DMA ring full occur
> simultaneously in the asynchronous vhost, the slot_idx needs to be
> decreased by 1. For packed virtqueue, the slot index should be
> ring_size - 1, if the slot_idx is currently 0, since the ring size is
> not necessarily the power of 2.
>
> Fixes: 84d5204310d7 ("vhost: support async dequeue for split ring")
> Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
> Cc: stable@dpdk.org
>
> Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> ---
> lib/vhost/virtio_net.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 457ac2e92a..efebd063d7 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3457,6 +3457,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> allocerr_warned = true;
> }
> dropped = true;
> + slot_idx--;
> break;
> }
>
> @@ -3647,6 +3648,12 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> if (unlikely(virtio_dev_tx_async_single_packed(dev, vq, mbuf_pool, pkt,
> slot_idx, legacy_ol_flags))) {
> rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx);
> +
> + if (slot_idx == 0)
> + slot_idx = vq->size - 1;
> + else
> + slot_idx--;
> +
> break;
> }
>
> @@ -3674,8 +3681,13 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> async->buffer_idx_packed += vq->size - pkt_err;
>
> while (pkt_err-- > 0) {
> - rte_pktmbuf_free(pkts_info[slot_idx % vq->size].mbuf);
> - slot_idx--;
> + rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
> + descs_err += pkts_info[slot_idx].descs;
> +
> + if (slot_idx == 0)
> + slot_idx = vq->size - 1;
> + else
> + slot_idx--;
> }
>
> /* recover available ring */
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* RE: [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring
2022-10-21 8:16 ` Maxime Coquelin
@ 2022-10-24 1:41 ` Jiang, Cheng1
2022-10-24 8:42 ` Xia, Chenbo
From: Jiang, Cheng1 @ 2022-10-24 1:41 UTC
To: Maxime Coquelin, Xia, Chenbo
Cc: dev, Hu, Jiayu, Ding, Xuan, Ma, WenwuX, Wang, YuanX, Yang,
YvonneX, He, Xingguang, stable
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Friday, October 21, 2022 4:16 PM
> To: Jiang, Cheng1 <cheng1.jiang@intel.com>; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang,
> YuanX <yuanx.wang@intel.com>; Yang, YvonneX
> <yvonnex.yang@intel.com>; He, Xingguang <xingguang.he@intel.com>;
> stable@dpdk.org
> Subject: Re: [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring
>
>
>
> On 10/11/22 05:08, Cheng Jiang wrote:
> > When vhost receive packets from the front-end using packed virtqueue,
> > it
>
> receives
>
> > might use multiple descriptors for one packet, so we need calculate
> > and
>
> to calculate
>
> > record the descriptor number for each packet to update available
> > descriptor counter and used descriptor counter, and rollback when DMA
> > ring is full.
> >
> > Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> > ---
> > lib/vhost/virtio_net.c | 24 +++++++++++++++---------
> > 1 file changed, 15 insertions(+), 9 deletions(-)
> >
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index
> > 8f4d0f0502..457ac2e92a 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -3548,14 +3548,15 @@ virtio_dev_tx_async_split_compliant(struct
> virtio_net *dev,
> > }
> >
> > static __rte_always_inline void
> > -vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue
> *vq,
> > uint16_t buf_id)
> > +vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue
> *vq,
> > + uint16_t buf_id, uint16_t count)
> > {
> > struct vhost_async *async = vq->async;
> > uint16_t idx = async->buffer_idx_packed;
> >
> > async->buffers_packed[idx].id = buf_id;
> > async->buffers_packed[idx].len = 0;
> > - async->buffers_packed[idx].count = 1;
> > + async->buffers_packed[idx].count = count;
> >
> > async->buffer_idx_packed++;
> > if (async->buffer_idx_packed >= vq->size) @@ -3576,6 +3577,8 @@
> > virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> > uint16_t nr_vec = 0;
> > uint32_t buf_len;
> > struct buf_vector buf_vec[BUF_VECTOR_MAX];
> > + struct vhost_async *async = vq->async;
> > + struct async_inflight_info *pkts_info = async->pkts_info;
> > static bool allocerr_warned;
> >
> > if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx,
> > &desc_count, @@ -3604,8 +3607,12 @@
> virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> > return -1;
> > }
> >
> > + pkts_info[slot_idx].descs = desc_count;
> > +
> > /* update async shadow packed ring */
> > - vhost_async_shadow_dequeue_single_packed(vq, buf_id);
> > + vhost_async_shadow_dequeue_single_packed(vq, buf_id,
> desc_count);
> > +
> > + vq_inc_last_avail_packed(vq, desc_count);
> >
> > return err;
> > }
> > @@ -3644,9 +3651,6 @@ virtio_dev_tx_async_packed(struct virtio_net
> *dev, struct vhost_virtqueue *vq,
> > }
> >
> > pkts_info[slot_idx].mbuf = pkt;
> > -
> > - vq_inc_last_avail_packed(vq, 1);
> > -
> > }
> >
> > n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id,
> > async->pkts_idx, @@ -3657,6 +3661,8 @@
> virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue
> *vq,
> > pkt_err = pkt_idx - n_xfer;
> >
> > if (unlikely(pkt_err)) {
> > + uint16_t descs_err = 0;
> > +
> > pkt_idx -= pkt_err;
> >
> > /**
> > @@ -3673,10 +3679,10 @@ virtio_dev_tx_async_packed(struct virtio_net
> *dev, struct vhost_virtqueue *vq,
> > }
> >
> > /* recover available ring */
> > - if (vq->last_avail_idx >= pkt_err) {
> > - vq->last_avail_idx -= pkt_err;
> > + if (vq->last_avail_idx >= descs_err) {
> > + vq->last_avail_idx -= descs_err;
> > } else {
> > - vq->last_avail_idx += vq->size - pkt_err;
> > + vq->last_avail_idx += vq->size - descs_err;
> > vq->avail_wrap_counter ^= 1;
> > }
> > }
>
> If only the commit message typos need to be fixed, maybe no need to send
> a new version.
Sure, thanks a lot!
Cheng
>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> Thanks,
> Maxime
* RE: [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring
2022-10-24 1:41 ` Jiang, Cheng1
@ 2022-10-24 8:42 ` Xia, Chenbo
From: Xia, Chenbo @ 2022-10-24 8:42 UTC
To: Jiang, Cheng1, Maxime Coquelin
Cc: dev, Hu, Jiayu, Ding, Xuan, Ma, WenwuX, Wang, YuanX, Yang,
YvonneX, He, Xingguang, stable
> -----Original Message-----
> From: Jiang, Cheng1 <cheng1.jiang@intel.com>
> Sent: Monday, October 24, 2022 9:42 AM
> To: Maxime Coquelin <maxime.coquelin@redhat.com>; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; He,
> Xingguang <xingguang.he@intel.com>; stable@dpdk.org
> Subject: RE: [PATCH v2 1/2] vhost: fix descs count in async vhost packed
> ring
>
> Hi Maxime,
>
> > -----Original Message-----
> > From: Maxime Coquelin <maxime.coquelin@redhat.com>
> > Sent: Friday, October 21, 2022 4:16 PM
> > To: Jiang, Cheng1 <cheng1.jiang@intel.com>; Xia, Chenbo
> > <chenbo.xia@intel.com>
> > Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> > <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang,
> > YuanX <yuanx.wang@intel.com>; Yang, YvonneX
> > <yvonnex.yang@intel.com>; He, Xingguang <xingguang.he@intel.com>;
> > stable@dpdk.org
> > Subject: Re: [PATCH v2 1/2] vhost: fix descs count in async vhost packed
> ring
> >
> >
> >
> > On 10/11/22 05:08, Cheng Jiang wrote:
> > > When vhost receive packets from the front-end using packed virtqueue,
> > > it
> >
> > receives
> >
> > > might use multiple descriptors for one packet, so we need calculate
> > > and
> >
> > to calculate
> >
> > > record the descriptor number for each packet to update available
> > > descriptor counter and used descriptor counter, and rollback when DMA
> > > ring is full.
> > >
> > > Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> > > ---
> > > lib/vhost/virtio_net.c | 24 +++++++++++++++---------
> > > 1 file changed, 15 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index
> > > 8f4d0f0502..457ac2e92a 100644
> > > --- a/lib/vhost/virtio_net.c
> > > +++ b/lib/vhost/virtio_net.c
> > > @@ -3548,14 +3548,15 @@ virtio_dev_tx_async_split_compliant(struct
> > virtio_net *dev,
> > > }
> > >
> > > static __rte_always_inline void
> > > -vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue
> > *vq,
> > > uint16_t buf_id)
> > > +vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue
> > *vq,
> > > + uint16_t buf_id, uint16_t count)
> > > {
> > > struct vhost_async *async = vq->async;
> > > uint16_t idx = async->buffer_idx_packed;
> > >
> > > async->buffers_packed[idx].id = buf_id;
> > > async->buffers_packed[idx].len = 0;
> > > - async->buffers_packed[idx].count = 1;
> > > + async->buffers_packed[idx].count = count;
> > >
> > > async->buffer_idx_packed++;
> > > if (async->buffer_idx_packed >= vq->size) @@ -3576,6 +3577,8
> @@
> > > virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> > > uint16_t nr_vec = 0;
> > > uint32_t buf_len;
> > > struct buf_vector buf_vec[BUF_VECTOR_MAX];
> > > + struct vhost_async *async = vq->async;
> > > + struct async_inflight_info *pkts_info = async->pkts_info;
> > > static bool allocerr_warned;
> > >
> > > if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx,
> > > &desc_count, @@ -3604,8 +3607,12 @@
> > virtio_dev_tx_async_single_packed(struct virtio_net *dev,
> > > return -1;
> > > }
> > >
> > > + pkts_info[slot_idx].descs = desc_count;
> > > +
> > > /* update async shadow packed ring */
> > > - vhost_async_shadow_dequeue_single_packed(vq, buf_id);
> > > + vhost_async_shadow_dequeue_single_packed(vq, buf_id,
> > desc_count);
> > > +
> > > + vq_inc_last_avail_packed(vq, desc_count);
> > >
> > > return err;
> > > }
> > > @@ -3644,9 +3651,6 @@ virtio_dev_tx_async_packed(struct virtio_net
> > *dev, struct vhost_virtqueue *vq,
> > > }
> > >
> > > pkts_info[slot_idx].mbuf = pkt;
> > > -
> > > - vq_inc_last_avail_packed(vq, 1);
> > > -
> > > }
> > >
> > > n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id,
> > > async->pkts_idx, @@ -3657,6 +3661,8 @@
> > virtio_dev_tx_async_packed(struct virtio_net *dev, struct
> vhost_virtqueue
> > *vq,
> > > pkt_err = pkt_idx - n_xfer;
> > >
> > > if (unlikely(pkt_err)) {
> > > + uint16_t descs_err = 0;
> > > +
> > > pkt_idx -= pkt_err;
> > >
> > > /**
> > > @@ -3673,10 +3679,10 @@ virtio_dev_tx_async_packed(struct virtio_net
> > *dev, struct vhost_virtqueue *vq,
> > > }
> > >
> > > /* recover available ring */
> > > - if (vq->last_avail_idx >= pkt_err) {
> > > - vq->last_avail_idx -= pkt_err;
> > > + if (vq->last_avail_idx >= descs_err) {
> > > + vq->last_avail_idx -= descs_err;
> > > } else {
> > > - vq->last_avail_idx += vq->size - pkt_err;
> > > + vq->last_avail_idx += vq->size - descs_err;
> > > vq->avail_wrap_counter ^= 1;
> > > }
> > > }
> >
> > If only the commit message typos need to be fixed, maybe no need to send
> > a new version.
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Will fix above typos when applying
Thanks,
Chenbo
>
> Sure, thanks a lot!
> Cheng
>
> >
> > Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >
> > Thanks,
> > Maxime
* RE: [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
2022-10-11 3:08 ` [PATCH v2 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
2022-10-13 9:40 ` Ling, WeiX
2022-10-21 8:17 ` Maxime Coquelin
@ 2022-10-24 8:43 ` Xia, Chenbo
From: Xia, Chenbo @ 2022-10-24 8:43 UTC
To: Jiang, Cheng1, maxime.coquelin
Cc: dev, Hu, Jiayu, Ding, Xuan, Ma, WenwuX, Wang, YuanX, Yang,
YvonneX, He, Xingguang, stable
> -----Original Message-----
> From: Jiang, Cheng1 <cheng1.jiang@intel.com>
> Sent: Tuesday, October 11, 2022 11:08 AM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; He,
> Xingguang <xingguang.he@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
> stable@dpdk.org
> Subject: [PATCH v2 2/2] vhost: fix slot index calculation in async vhost
>
> When the packet receiving failure and the DMA ring full occur
> simultaneously in the asynchronous vhost, the slot_idx needs to be
> decreased by 1. For packed virtqueue, the slot index should be
> ring_size - 1, if the slot_idx is currently 0, since the ring size is
> not necessarily the power of 2.
>
> Fixes: 84d5204310d7 ("vhost: support async dequeue for split ring")
> Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
> Cc: stable@dpdk.org
>
> Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> ---
> lib/vhost/virtio_net.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 457ac2e92a..efebd063d7 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3457,6 +3457,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev,
> struct vhost_virtqueue *vq,
> allocerr_warned = true;
> }
> dropped = true;
> + slot_idx--;
> break;
> }
>
> @@ -3647,6 +3648,12 @@ virtio_dev_tx_async_packed(struct virtio_net *dev,
> struct vhost_virtqueue *vq,
> if (unlikely(virtio_dev_tx_async_single_packed(dev, vq,
> mbuf_pool, pkt,
> slot_idx, legacy_ol_flags))) {
> rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count -
> pkt_idx);
> +
> + if (slot_idx == 0)
> + slot_idx = vq->size - 1;
> + else
> + slot_idx--;
> +
> break;
> }
>
> @@ -3674,8 +3681,13 @@ virtio_dev_tx_async_packed(struct virtio_net *dev,
> struct vhost_virtqueue *vq,
> async->buffer_idx_packed += vq->size - pkt_err;
>
> while (pkt_err-- > 0) {
> - rte_pktmbuf_free(pkts_info[slot_idx % vq->size].mbuf);
> - slot_idx--;
> + rte_pktmbuf_free(pkts_info[slot_idx].mbuf);
> + descs_err += pkts_info[slot_idx].descs;
> +
> + if (slot_idx == 0)
> + slot_idx = vq->size - 1;
> + else
> + slot_idx--;
> }
>
> /* recover available ring */
> --
> 2.35.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v2 0/2] vhost: fix some async vhost index calculation issues
2022-10-11 3:08 ` [PATCH v2 0/2] vhost: fix some async vhost index calculation issues Cheng Jiang
2022-10-11 3:08 ` [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring Cheng Jiang
2022-10-11 3:08 ` [PATCH v2 2/2] vhost: fix slot index calculation in async vhost Cheng Jiang
@ 2022-10-26 9:27 ` Xia, Chenbo
From: Xia, Chenbo @ 2022-10-26 9:27 UTC
To: Jiang, Cheng1, maxime.coquelin
Cc: dev, Hu, Jiayu, Ding, Xuan, Ma, WenwuX, Wang, YuanX, Yang,
YvonneX, He, Xingguang
> -----Original Message-----
> From: Jiang, Cheng1 <cheng1.jiang@intel.com>
> Sent: Tuesday, October 11, 2022 11:08 AM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Ma, WenwuX <wenwux.ma@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>; He,
> Xingguang <xingguang.he@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>
> Subject: [PATCH v2 0/2] vhost: fix some async vhost index calculation
> issues
>
> Fix some async vhost index calculation issues.
>
> v2: fixed fixes tag and replaced some words in commit message.
>
> Cheng Jiang (2):
> vhost: fix descs count in async vhost packed ring
> vhost: fix slot index calculation in async vhost
>
> lib/vhost/virtio_net.c | 40 +++++++++++++++++++++++++++++-----------
> 1 file changed, 29 insertions(+), 11 deletions(-)
>
> --
> 2.35.1
Series applied to next-virtio/main, thanks