* [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
@ 2019-09-23 14:05 Marvin Liu
2019-09-23 15:22 ` Stephen Hemminger
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Marvin Liu @ 2019-09-23 14:05 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie, zhihong.wang, stephen; +Cc: dev, stable, Marvin Liu
If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
both the segment data length and the packet length of the mbuf are
increased. The data length then equals the descriptor length, but the
packet length must be decreased again, since the virtio-net header is
not counted as part of the packet. This causes a mismatch in the mbuf
structure. Fix the issue by accessing the mbuf data directly and
increasing the descriptor length where needed.
Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Marvin Liu <yong.liu@intel.com>
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 27ead19fb..822cce06d 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -597,9 +597,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
dxp->cookie = (void *)cookies[i];
dxp->ndescs = 1;
- hdr = (struct virtio_net_hdr *)
- rte_pktmbuf_prepend(cookies[i], head_size);
- cookies[i]->pkt_len -= head_size;
+ hdr = (struct virtio_net_hdr *)(char *)cookies[i]->buf_addr +
+ cookies[i]->data_off - head_size;
/* if offload disabled, hdr is not zeroed yet, do it now */
if (!vq->hw->has_tx_offload)
@@ -608,9 +607,10 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
virtqueue_xmit_offload(hdr, cookies[i], true);
start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
- start_dp[idx].len = cookies[i]->data_len;
+ start_dp[idx].len = cookies[i]->data_len + head_size;
start_dp[idx].flags = 0;
+
vq_update_avail_ring(vq, idx);
idx++;
@@ -644,9 +644,8 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
flags = vq->vq_packed.cached_flags;
/* prepend cannot fail, checked by caller */
- hdr = (struct virtio_net_hdr *)
- rte_pktmbuf_prepend(cookie, head_size);
- cookie->pkt_len -= head_size;
+ hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
+ cookie->data_off - head_size;
/* if offload disabled, hdr is not zeroed yet, do it now */
if (!vq->hw->has_tx_offload)
@@ -655,7 +654,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
virtqueue_xmit_offload(hdr, cookie, true);
dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
- dp->len = cookie->data_len;
+ dp->len = cookie->data_len + head_size;
dp->id = id;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
@@ -687,6 +686,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t head_size = vq->hw->vtnet_hdr_size;
struct virtio_net_hdr *hdr;
uint16_t prev;
+ bool prepend_header = false;
id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
@@ -705,12 +705,9 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
if (can_push) {
/* prepend cannot fail, checked by caller */
- hdr = (struct virtio_net_hdr *)
- rte_pktmbuf_prepend(cookie, head_size);
- /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
- * which is wrong. Below subtract restores correct pkt size.
- */
- cookie->pkt_len -= head_size;
+ hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
+ cookie->data_off - head_size;
+ prepend_header = true;
/* if offload disabled, it is not zeroed below, do it now */
if (!vq->hw->has_tx_offload)
@@ -738,6 +735,11 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
start_dp[idx].len = cookie->data_len;
+ if (prepend_header) {
+ start_dp[idx].len += head_size;
+ prepend_header = false;
+ }
+
if (likely(idx != head_idx)) {
flags = cookie->next ? VRING_DESC_F_NEXT : 0;
flags |= vq->vq_packed.cached_flags;
@@ -779,6 +781,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t seg_num = cookie->nb_segs;
uint16_t head_idx, idx;
uint16_t head_size = vq->hw->vtnet_hdr_size;
+ bool prepend_header = false;
struct virtio_net_hdr *hdr;
head_idx = vq->vq_desc_head_idx;
@@ -794,12 +797,9 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
if (can_push) {
/* prepend cannot fail, checked by caller */
- hdr = (struct virtio_net_hdr *)
- rte_pktmbuf_prepend(cookie, head_size);
- /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
- * which is wrong. Below subtract restores correct pkt size.
- */
- cookie->pkt_len -= head_size;
+ hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
+ cookie->data_off - head_size;
+ prepend_header = true;
/* if offload disabled, it is not zeroed below, do it now */
if (!vq->hw->has_tx_offload)
@@ -838,6 +838,10 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
do {
start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
start_dp[idx].len = cookie->data_len;
+ if (prepend_header) {
+ start_dp[idx].len += head_size;
+ prepend_header = false;
+ }
start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
idx = start_dp[idx].next;
} while ((cookie = cookie->next) != NULL);
--
2.17.1
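The accounting mismatch described in the commit message can be illustrated with a small, self-contained sketch. The struct and functions below are simplified stand-ins for illustration only, not the real rte_mbuf or rte_pktmbuf_prepend():

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct rte_mbuf; only the length accounting
 * relevant here is modeled. */
struct toy_mbuf {
	uint16_t data_off; /* offset of packet data inside the buffer */
	uint16_t data_len; /* length of data in this segment */
	uint32_t pkt_len;  /* total packet length across all segments */
};

/* Models what rte_pktmbuf_prepend() does to the length fields:
 * the data start moves back, and BOTH lengths grow. */
static void toy_prepend(struct toy_mbuf *m, uint16_t len)
{
	m->data_off -= len;
	m->data_len += len;
	m->pkt_len  += len;
}
```

With a 12-byte virtio-net header, prepending makes data_len cover header plus packet (what the Tx descriptor needs), but the compensating `pkt_len -= head_size` then leaves data_len and pkt_len inconsistent for a single-segment mbuf, which is the mismatch the patch targets.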
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-09-23 14:05 [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch Marvin Liu
@ 2019-09-23 15:22 ` Stephen Hemminger
2019-09-24 4:53 ` Liu, Yong
2019-09-27 9:30 ` Maxime Coquelin
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Stephen Hemminger @ 2019-09-23 15:22 UTC (permalink / raw)
To: Marvin Liu; +Cc: maxime.coquelin, tiwei.bie, zhihong.wang, dev, stable
On Mon, 23 Sep 2019 22:05:11 +0800
Marvin Liu <yong.liu@intel.com> wrote:
> If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
> both the segment data length and the packet length of the mbuf are
> increased. The data length then equals the descriptor length, but the
> packet length must be decreased again, since the virtio-net header is
> not counted as part of the packet. This causes a mismatch in the mbuf
> structure. Fix the issue by accessing the mbuf data directly and
> increasing the descriptor length where needed.
>
> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
> Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
> Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
> Cc: stable@dpdk.org
>
> Reported-by: Stephen Hemminger <stephen@networkplumber.org>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
Looks good for the current code.
It won't apply cleanly to 18.11, though. Could you send a version for that as well?
* Re: [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-09-23 15:22 ` Stephen Hemminger
@ 2019-09-24 4:53 ` Liu, Yong
0 siblings, 0 replies; 10+ messages in thread
From: Liu, Yong @ 2019-09-24 4:53 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: maxime.coquelin, Bie, Tiwei, Wang, Zhihong, dev, stable
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Monday, September 23, 2019 11:22 PM
> To: Liu, Yong <yong.liu@intel.com>
> Cc: maxime.coquelin@redhat.com; Bie, Tiwei <tiwei.bie@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>; dev@dpdk.org; stable@dpdk.org
> Subject: Re: [PATCH] net/virtio: fix mbuf data and pkt length mismatch
>
> On Mon, 23 Sep 2019 22:05:11 +0800
> Marvin Liu <yong.liu@intel.com> wrote:
>
> > If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
> > both the segment data length and the packet length of the mbuf are
> > increased. The data length then equals the descriptor length, but the
> > packet length must be decreased again, since the virtio-net header is
> > not counted as part of the packet. This causes a mismatch in the mbuf
> > structure. Fix the issue by accessing the mbuf data directly and
> > increasing the descriptor length where needed.
> >
> > Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> > Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
> > Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
> > Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
> > Cc: stable@dpdk.org
> >
> > Reported-by: Stephen Hemminger <stephen@networkplumber.org>
> > Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
> Looks good for the current code.
> It won't apply cleanly to 18.11, though. Could you send a version for that as well?
Thanks for reviewing. A version for 18.11 has been sent to the stable mailing list.
Regards,
Marvin
* Re: [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-09-23 14:05 [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch Marvin Liu
2019-09-23 15:22 ` Stephen Hemminger
@ 2019-09-27 9:30 ` Maxime Coquelin
2019-09-27 9:50 ` Maxime Coquelin
2019-10-14 15:15 ` Andrew Rybchenko
3 siblings, 0 replies; 10+ messages in thread
From: Maxime Coquelin @ 2019-09-27 9:30 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie, zhihong.wang, stephen; +Cc: dev, stable
On 9/23/19 4:05 PM, Marvin Liu wrote:
> If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
> both the segment data length and the packet length of the mbuf are
> increased. The data length then equals the descriptor length, but the
> packet length must be decreased again, since the virtio-net header is
> not counted as part of the packet. This causes a mismatch in the mbuf
> structure. Fix the issue by accessing the mbuf data directly and
> increasing the descriptor length where needed.
>
> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
> Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
> Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
> Cc: stable@dpdk.org
>
> Reported-by: Stephen Hemminger <stephen@networkplumber.org>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-09-23 14:05 [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch Marvin Liu
2019-09-23 15:22 ` Stephen Hemminger
2019-09-27 9:30 ` Maxime Coquelin
@ 2019-09-27 9:50 ` Maxime Coquelin
2019-10-07 7:17 ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
2019-10-14 15:15 ` Andrew Rybchenko
3 siblings, 1 reply; 10+ messages in thread
From: Maxime Coquelin @ 2019-09-27 9:50 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie, zhihong.wang, stephen; +Cc: dev, stable
On 9/23/19 4:05 PM, Marvin Liu wrote:
> If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
> both the segment data length and the packet length of the mbuf are
> increased. The data length then equals the descriptor length, but the
> packet length must be decreased again, since the virtio-net header is
> not counted as part of the packet. This causes a mismatch in the mbuf
> structure. Fix the issue by accessing the mbuf data directly and
> increasing the descriptor length where needed.
>
> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
> Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
> Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
> Cc: stable@dpdk.org
>
> Reported-by: Stephen Hemminger <stephen@networkplumber.org>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
Applied to dpdk-next-virtio/master.
Thanks,
Maxime
* Re: [dpdk-stable] [dpdk-dev] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-09-27 9:50 ` Maxime Coquelin
@ 2019-10-07 7:17 ` Andrew Rybchenko
0 siblings, 0 replies; 10+ messages in thread
From: Andrew Rybchenko @ 2019-10-07 7:17 UTC (permalink / raw)
To: Maxime Coquelin, Marvin Liu, tiwei.bie, zhihong.wang, stephen; +Cc: dev, stable
On 9/27/19 12:50 PM, Maxime Coquelin wrote:
> On 9/23/19 4:05 PM, Marvin Liu wrote:
>> If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
>> both the segment data length and the packet length of the mbuf are
>> increased. The data length then equals the descriptor length, but the
>> packet length must be decreased again, since the virtio-net header is
>> not counted as part of the packet. This causes a mismatch in the mbuf
>> structure. Fix the issue by accessing the mbuf data directly and
>> increasing the descriptor length where needed.
>>
>> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
>> Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
>> Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
>> Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
>> Cc: stable@dpdk.org
>>
>> Reported-by: Stephen Hemminger <stephen@networkplumber.org>
>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>>
> Applied to dpdk-next-virtio/master.
We are observing regressions in virtio with this patch applied.
We have not found the root cause yet.
Andrew.
* Re: [dpdk-stable] [dpdk-dev] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-09-23 14:05 [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch Marvin Liu
` (2 preceding siblings ...)
2019-09-27 9:50 ` Maxime Coquelin
@ 2019-10-14 15:15 ` Andrew Rybchenko
2019-10-14 15:28 ` Kevin Traynor
2019-10-15 5:33 ` Tiwei Bie
3 siblings, 2 replies; 10+ messages in thread
From: Andrew Rybchenko @ 2019-10-14 15:15 UTC (permalink / raw)
To: Marvin Liu, maxime.coquelin, tiwei.bie, zhihong.wang, stephen
Cc: dev, stable, Kevin Traynor, Luca Boccassi
Hi,
as far as I can see, the patch introduces regressions.
CCing Kevin and Luca so they can be careful with patches on the stable branches.
See details below.
Andrew.
On 9/23/19 5:05 PM, Marvin Liu wrote:
> If the virtio-net header room is reserved with rte_pktmbuf_prepend(),
> both the segment data length and the packet length of the mbuf are
> increased. The data length then equals the descriptor length, but the
> packet length must be decreased again, since the virtio-net header is
> not counted as part of the packet. This causes a mismatch in the mbuf
> structure. Fix the issue by accessing the mbuf data directly and
> increasing the descriptor length where needed.
>
> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
> Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
> Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
> Cc: stable@dpdk.org
>
> Reported-by: Stephen Hemminger <stephen@networkplumber.org>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 27ead19fb..822cce06d 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -597,9 +597,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> dxp->cookie = (void *)cookies[i];
> dxp->ndescs = 1;
>
> - hdr = (struct virtio_net_hdr *)
> - rte_pktmbuf_prepend(cookies[i], head_size);
> - cookies[i]->pkt_len -= head_size;
> + hdr = (struct virtio_net_hdr *)(char *)cookies[i]->buf_addr +
> + cookies[i]->data_off - head_size;
>
> /* if offload disabled, hdr is not zeroed yet, do it now */
> if (!vq->hw->has_tx_offload)
> @@ -608,9 +607,10 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> virtqueue_xmit_offload(hdr, cookies[i], true);
>
> start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
As I understand it, the problem is here. The address points to the start
of the packet (the Ethernet header), since data_off is no longer adjusted
above, but it should point to the virtio_net_hdr placed before the packet.
I think the patch fixes the bug in the wrong direction. It looks better
to simply remove

cookies[i]->pkt_len -= head_size;

above and account for the real packet length difference in
virtio_update_packet_stats(), or where it is called from the Tx path.
If it is OK with the maintainers, I'm ready to send patches to roll back
this one and fix it as described above.
> - start_dp[idx].len = cookies[i]->data_len;
> + start_dp[idx].len = cookies[i]->data_len + head_size;
> start_dp[idx].flags = 0;
>
> +
> vq_update_avail_ring(vq, idx);
>
> idx++;
> @@ -644,9 +644,8 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
> flags = vq->vq_packed.cached_flags;
>
> /* prepend cannot fail, checked by caller */
> - hdr = (struct virtio_net_hdr *)
> - rte_pktmbuf_prepend(cookie, head_size);
> - cookie->pkt_len -= head_size;
> + hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
> + cookie->data_off - head_size;
>
> /* if offload disabled, hdr is not zeroed yet, do it now */
> if (!vq->hw->has_tx_offload)
> @@ -655,7 +654,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
> virtqueue_xmit_offload(hdr, cookie, true);
>
> dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> - dp->len = cookie->data_len;
> + dp->len = cookie->data_len + head_size;
> dp->id = id;
>
> if (++vq->vq_avail_idx >= vq->vq_nentries) {
> @@ -687,6 +686,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> uint16_t head_size = vq->hw->vtnet_hdr_size;
> struct virtio_net_hdr *hdr;
> uint16_t prev;
> + bool prepend_header = false;
>
> id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
>
> @@ -705,12 +705,9 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>
> if (can_push) {
> /* prepend cannot fail, checked by caller */
> - hdr = (struct virtio_net_hdr *)
> - rte_pktmbuf_prepend(cookie, head_size);
> - /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> - * which is wrong. Below subtract restores correct pkt size.
> - */
> - cookie->pkt_len -= head_size;
> + hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
> + cookie->data_off - head_size;
> + prepend_header = true;
>
> /* if offload disabled, it is not zeroed below, do it now */
> if (!vq->hw->has_tx_offload)
> @@ -738,6 +735,11 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>
> start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> start_dp[idx].len = cookie->data_len;
> + if (prepend_header) {
> + start_dp[idx].len += head_size;
> + prepend_header = false;
> + }
> +
> if (likely(idx != head_idx)) {
> flags = cookie->next ? VRING_DESC_F_NEXT : 0;
> flags |= vq->vq_packed.cached_flags;
> @@ -779,6 +781,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> uint16_t seg_num = cookie->nb_segs;
> uint16_t head_idx, idx;
> uint16_t head_size = vq->hw->vtnet_hdr_size;
> + bool prepend_header = false;
> struct virtio_net_hdr *hdr;
>
> head_idx = vq->vq_desc_head_idx;
> @@ -794,12 +797,9 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>
> if (can_push) {
> /* prepend cannot fail, checked by caller */
> - hdr = (struct virtio_net_hdr *)
> - rte_pktmbuf_prepend(cookie, head_size);
> - /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> - * which is wrong. Below subtract restores correct pkt size.
> - */
> - cookie->pkt_len -= head_size;
> + hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
> + cookie->data_off - head_size;
> + prepend_header = true;
>
> /* if offload disabled, it is not zeroed below, do it now */
> if (!vq->hw->has_tx_offload)
> @@ -838,6 +838,10 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> do {
> start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> start_dp[idx].len = cookie->data_len;
> + if (prepend_header) {
> + start_dp[idx].len += head_size;
> + prepend_header = false;
> + }
> start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
> idx = start_dp[idx].next;
> } while ((cookie = cookie->next) != NULL);
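The alternative suggested above, keeping pkt_len untouched and accounting for the header size in the stats path, can be sketched as follows. This is a toy model under assumed names (toy_update_stats stands in for virtio_update_packet_stats(); nothing here is the real DPDK code):

```c
#include <assert.h>
#include <stdint.h>

struct toy_mbuf {
	uint32_t pkt_len; /* with the subtract removed, includes head_size */
};

struct toy_stats {
	uint64_t bytes;
};

/* Toy model of the suggestion: the Tx path tells the stats helper how
 * large the virtio-net header is, so stats count the wire packet only. */
static void toy_update_stats(struct toy_stats *s, const struct toy_mbuf *m,
			     uint16_t head_size)
{
	s->bytes += m->pkt_len - head_size;
}
```

This way the mbuf length fields stay internally consistent, and only the byte counters compensate for the prepended header.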
* Re: [dpdk-stable] [dpdk-dev] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-10-14 15:15 ` Andrew Rybchenko
@ 2019-10-14 15:28 ` Kevin Traynor
2019-10-15 5:33 ` Tiwei Bie
1 sibling, 0 replies; 10+ messages in thread
From: Kevin Traynor @ 2019-10-14 15:28 UTC (permalink / raw)
To: Andrew Rybchenko, Marvin Liu, maxime.coquelin, tiwei.bie,
zhihong.wang, stephen
Cc: dev, stable, Luca Boccassi
On 14/10/2019 16:15, Andrew Rybchenko wrote:
> Hi,
>
> as far as I can see, the patch introduces regressions.
>
> CCing Kevin and Luca so they can be careful with patches on the stable branches.
>
Thanks Andrew. Just to confirm, it's not on any stable branches yet. I'll
make a note not to backport it until this is resolved on master.
Kevin.
> See details below.
>
> Andrew.
* Re: [dpdk-stable] [dpdk-dev] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-10-14 15:15 ` Andrew Rybchenko
2019-10-14 15:28 ` Kevin Traynor
@ 2019-10-15 5:33 ` Tiwei Bie
2019-10-15 8:14 ` Andrew Rybchenko
1 sibling, 1 reply; 10+ messages in thread
From: Tiwei Bie @ 2019-10-15 5:33 UTC (permalink / raw)
To: Andrew Rybchenko
Cc: Marvin Liu, maxime.coquelin, zhihong.wang, stephen, dev, stable,
Kevin Traynor, Luca Boccassi
On Mon, Oct 14, 2019 at 06:15:42PM +0300, Andrew Rybchenko wrote:
> > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> > index 27ead19fb..822cce06d 100644
> > --- a/drivers/net/virtio/virtio_rxtx.c
> > +++ b/drivers/net/virtio/virtio_rxtx.c
> > @@ -597,9 +597,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> > dxp->cookie = (void *)cookies[i];
> > dxp->ndescs = 1;
> > - hdr = (struct virtio_net_hdr *)
> > - rte_pktmbuf_prepend(cookies[i], head_size);
> > - cookies[i]->pkt_len -= head_size;
> > + hdr = (struct virtio_net_hdr *)(char *)cookies[i]->buf_addr +
> > + cookies[i]->data_off - head_size;
> > /* if offload disabled, hdr is not zeroed yet, do it now */
> > if (!vq->hw->has_tx_offload)
> > @@ -608,9 +607,10 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> > virtqueue_xmit_offload(hdr, cookies[i], true);
> > start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
>
> As I understand it, the problem is here. The address points to the start
> of the packet (the Ethernet header), since data_off is no longer adjusted
> above, but it should point to the virtio_net_hdr placed before the packet.
>
> I think the patch fixes the bug in the wrong direction. It looks better
> to simply remove
>
> cookies[i]->pkt_len -= head_size;
>
> above and account for the real packet length difference in
> virtio_update_packet_stats(), or where it is called from the Tx path.
>
> If it is OK with the maintainers, I'm ready to send patches to roll back
> this one and fix it as described above.
Hi Andrew,
Thanks for looking into this! Feel free to send your fix.
PS. Another thing worth noting is that, after the net hdr is
prepended with rte_pktmbuf_prepend(), the code below in
virtio_update_packet_stats() won't be able to access the
Ethernet header as expected:
https://github.com/DPDK/dpdk/blob/31b798a6f08e9b333b94b8bb26910209aa810b73/drivers/net/virtio/virtio_rxtx.c#L134-L140
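The offset problem can be shown with a simplified model of rte_pktmbuf_mtod() (toy types for illustration; not the real mbuf layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct toy_mbuf {
	uint8_t  buf[256];
	uint16_t data_off;
};

/* Toy equivalent of rte_pktmbuf_mtod(): a pointer to where the packet
 * data currently starts. */
static uint8_t *toy_mtod(struct toy_mbuf *m)
{
	return m->buf + m->data_off;
}
```

After a prepend moves data_off back by head_size, mtod no longer points at the Ethernet header but at the virtio-net header in front of it, which is why the stats code linked above reads the wrong bytes.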
Thanks,
Tiwei
* Re: [dpdk-stable] [dpdk-dev] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
2019-10-15 5:33 ` Tiwei Bie
@ 2019-10-15 8:14 ` Andrew Rybchenko
0 siblings, 0 replies; 10+ messages in thread
From: Andrew Rybchenko @ 2019-10-15 8:14 UTC (permalink / raw)
To: Tiwei Bie
Cc: Marvin Liu, maxime.coquelin, zhihong.wang, stephen, dev, stable,
Kevin Traynor, Luca Boccassi
Hi Tiwei,
On 10/15/19 8:33 AM, Tiwei Bie wrote:
> On Mon, Oct 14, 2019 at 06:15:42PM +0300, Andrew Rybchenko wrote:
>>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>>> index 27ead19fb..822cce06d 100644
>>> --- a/drivers/net/virtio/virtio_rxtx.c
>>> +++ b/drivers/net/virtio/virtio_rxtx.c
>>> @@ -597,9 +597,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
>>> dxp->cookie = (void *)cookies[i];
>>> dxp->ndescs = 1;
>>> - hdr = (struct virtio_net_hdr *)
>>> - rte_pktmbuf_prepend(cookies[i], head_size);
>>> - cookies[i]->pkt_len -= head_size;
>>> + hdr = (struct virtio_net_hdr *)(char *)cookies[i]->buf_addr +
>>> + cookies[i]->data_off - head_size;
>>> /* if offload disabled, hdr is not zeroed yet, do it now */
>>> if (!vq->hw->has_tx_offload)
>>> @@ -608,9 +607,10 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
>>> virtqueue_xmit_offload(hdr, cookies[i], true);
>>> start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
>> As I understand it, the problem is here. The address points to the start
>> of the packet (the Ethernet header), since data_off is no longer adjusted
>> above, but it should point to the virtio_net_hdr placed before the packet.
>>
>> I think the patch fixes the bug in the wrong direction. It looks better
>> to simply remove
>>
>> cookies[i]->pkt_len -= head_size;
>>
>> above and account for the real packet length difference in
>> virtio_update_packet_stats(), or where it is called from the Tx path.
>>
>> If it is OK with the maintainers, I'm ready to send patches to roll back
>> this one and fix it as described above.
> Hi Andrew,
>
> Thanks for looking into this! Feel free to send your fix.
> PS. Another thing also needs to be noticed is that, after
> prepending the net hdr with rte_pktmbuf_prepend(), below
> code in virtio_update_packet_stats() won't be able to
> access the ether header as expected:
>
> https://github.com/DPDK/dpdk/blob/31b798a6f08e9b333b94b8bb26910209aa810b73/drivers/net/virtio/virtio_rxtx.c#L134-L140
Thanks for the information. Looking closer, I've changed my mind.
I will send a patch shortly.
Andrew.
end of thread, other threads:[~2019-10-15 8:14 UTC | newest]
Thread overview: 10+ messages
2019-09-23 14:05 [dpdk-stable] [PATCH] net/virtio: fix mbuf data and pkt length mismatch Marvin Liu
2019-09-23 15:22 ` Stephen Hemminger
2019-09-24 4:53 ` Liu, Yong
2019-09-27 9:30 ` Maxime Coquelin
2019-09-27 9:50 ` Maxime Coquelin
2019-10-07 7:17 ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
2019-10-14 15:15 ` Andrew Rybchenko
2019-10-14 15:28 ` Kevin Traynor
2019-10-15 5:33 ` Tiwei Bie
2019-10-15 8:14 ` Andrew Rybchenko