* [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation @ 2019-06-03 18:50 Andrew Rybchenko 2019-06-05 1:41 ` Tiwei Bie ` (2 more replies) 0 siblings, 3 replies; 14+ messages in thread From: Andrew Rybchenko @ 2019-06-03 18:50 UTC (permalink / raw) To: Maxime Coquelin, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Virtio requires pseudo-header checksum in TCP/UDP checksum to do offload, but it was lost when Tx prepare is introduced. Also rte_validate_tx_offload() should be used to validate Tx offloads. Also it is incorrect to do virtio_tso_fix_cksum() after prepend to mbuf without taking prepended size into account, since layer 2/3/4 lengths provide incorrect offsets after prepend. Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") Cc: stable@dpdk.org Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> --- v2: - fix typo in E-mail address drivers/net/virtio/virtio_ethdev.c | 1 + drivers/net/virtio/virtio_ethdev.h | 3 +++ drivers/net/virtio/virtio_rxtx.c | 32 +++++++++++++++++++++++++++++++- 3 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index c4570bb..97d3c29 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -1473,6 +1473,7 @@ static int virtio_dev_xstats_get_names(struct rte_eth_dev *dev, { struct virtio_hw *hw = eth_dev->data->dev_private; + eth_dev->tx_pkt_prepare = virtio_xmit_pkts_prepare; if (vtpci_packed_queue(hw)) { PMD_INIT_LOG(INFO, "virtio: using packed ring %s Tx path on port %u", diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h index 45e96f3..20d3317 100644 --- a/drivers/net/virtio/virtio_ethdev.h +++ b/drivers/net/virtio/virtio_ethdev.h @@ -89,6 +89,9 @@ uint16_t virtio_recv_mergeable_pkts_packed(void *rx_queue, uint16_t virtio_recv_pkts_inorder(void *rx_queue, 
struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t virtio_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 1de2854..f2ca085 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -559,7 +559,6 @@ /* TCP Segmentation Offload */ if (cookie->ol_flags & PKT_TX_TCP_SEG) { - virtio_tso_fix_cksum(cookie); hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ? VIRTIO_NET_HDR_GSO_TCPV6 : VIRTIO_NET_HDR_GSO_TCPV4; @@ -1970,6 +1969,37 @@ } uint16_t +virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + int error; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + + if (m->ol_flags & PKT_TX_TCP_SEG) + virtio_tso_fix_cksum(m); + } + + return nb_tx; +} + +uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { -- 1.8.3.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
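The pseudo-header requirement the commit message refers to can be illustrated standalone: before handing a packet to the device for checksum/TSO offload, the L4 checksum field must be seeded with the (non-complemented) checksum of the IP pseudo-header. The sketch below is a simplified illustration under that assumption, not the DPDK helper (the real one is rte_ipv4_phdr_cksum(), which works on network-order packet bytes and also handles IPv6):

```c
#include <assert.h>
#include <stdint.h>

/* Checksum of the IPv4 pseudo-header (src addr, dst addr, zero+proto,
 * L4 length), folded to 16 bits and NOT complemented -- this is the
 * seed a checksum-offload path writes into the TCP/UDP checksum field
 * so the device can finish the sum.  Simplified sketch only. */
static uint16_t ipv4_phdr_cksum(uint32_t src, uint32_t dst,
				uint8_t proto, uint16_t l4_len)
{
	uint32_t sum = 0;

	sum += (src >> 16) & 0xffff;
	sum += src & 0xffff;
	sum += (dst >> 16) & 0xffff;
	sum += dst & 0xffff;
	sum += proto;		/* high byte of "zero, protocol" is 0 */
	sum += l4_len;

	/* Fold any carries back into the low 16 bits. */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)sum;
}
```

Without this seed, the host side of virtio cannot complete the checksum, which is why removing virtio_tso_fix_cksum() from the burst path requires reinstating it in Tx prepare.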
* Re: [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation 2019-06-03 18:50 [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation Andrew Rybchenko @ 2019-06-05 1:41 ` Tiwei Bie 2019-06-05 7:28 ` Andrew Rybchenko 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 1/2] " Andrew Rybchenko 2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 " Andrew Rybchenko 2 siblings, 1 reply; 14+ messages in thread From: Tiwei Bie @ 2019-06-05 1:41 UTC (permalink / raw) To: Andrew Rybchenko Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable Hi, Thanks for the patch! On Mon, Jun 03, 2019 at 07:50:05PM +0100, Andrew Rybchenko wrote: [...] > uint16_t > +virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, > + uint16_t nb_pkts) > +{ > + uint16_t nb_tx; > + int error; > + > + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { > + struct rte_mbuf *m = tx_pkts[nb_tx]; > + > +#ifdef RTE_LIBRTE_ETHDEV_DEBUG > + error = rte_validate_tx_offload(m); > + if (unlikely(error)) { > + rte_errno = -error; > + break; > + } > +#endif > + > + error = rte_net_intel_cksum_prepare(m); > + if (unlikely(error)) { > + rte_errno = -error; It's a bit confusing here. 
Based on the API doc of rte_eth_tx_prepare():

https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/lib/librte_ethdev/rte_ethdev.h#L4360-L4362

It should set a negative value to rte_errno when an error happens,
and that's also what some other PMDs do, e.g.:

https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/drivers/net/iavf/iavf_rxtx.c#L1701
https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/drivers/net/iavf/iavf_rxtx.c#L1725
https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/drivers/net/vmxnet3/vmxnet3_rxtx.c#L364

But some PMDs and rte_eth_tx_prepare() itself don't do this:

https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/lib/librte_ethdev/rte_ethdev.h#L4377
https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/lib/librte_ethdev/rte_ethdev.h#L4387

> +			break;
> +		}
> +
> +		if (m->ol_flags & PKT_TX_TCP_SEG)
> +			virtio_tso_fix_cksum(m);
> +	}
> +
> +	return nb_tx;
> +}
> +
> +uint16_t
>  virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
>  			uint16_t nb_pkts)
>  {
> --
> 1.8.3.1
>

^ permalink raw reply	[flat|nested] 14+ messages in thread
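The sign convention being debated can be pinned down with a toy stand-in (hypothetical names; in DPDK the prepare helpers return 0 or a negative errno value, and the driver negates it so rte_errno ends up positive, errno(3)-style):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for rte_errno (per-lcore in real DPDK). */
static int fake_errno;

/* Stand-in for a helper such as rte_net_intel_cksum_prepare():
 * returns 0 on success or a negative errno value on failure. */
static int prepare_helper(int bad)
{
	return bad ? -EINVAL : 0;
}

/* The pattern from the patch: negate the helper's negative return so
 * the errno-style variable is set to a positive value. */
static int do_prepare(int bad)
{
	int error = prepare_helper(bad);

	if (error) {
		fake_errno = -error;	/* positive, e.g. EINVAL */
		return -1;
	}
	return 0;
}
```

The question in the thread is only whether the documented contract matches this positive-rte_errno convention, not whether the negation itself is right.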
* Re: [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation
  2019-06-05  1:41 ` Tiwei Bie
@ 2019-06-05  7:28   ` Andrew Rybchenko
  2019-06-05  9:37     ` Tiwei Bie
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Rybchenko @ 2019-06-05 7:28 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable

Hi,

On 6/5/19 4:41 AM, Tiwei Bie wrote:
> Hi,
>
> Thanks for the patch!

More will follow. At least Tx checksum offload is broken when used
together with VLAN insertion, since the latter prepends to the mbuf
but does nothing with l2_len/outer_l2_len. We'll send a patch to move
rte_vlan_insert() to Tx prepare and suggest a separate patch to update
l2_len or outer_l2_len when software VLAN insertion is done.

> On Mon, Jun 03, 2019 at 07:50:05PM +0100, Andrew Rybchenko wrote:
> [...]
>> uint16_t
>> +virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
>> +		uint16_t nb_pkts)
>> +{
>> +	uint16_t nb_tx;
>> +	int error;
>> +
>> +	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
>> +		struct rte_mbuf *m = tx_pkts[nb_tx];
>> +
>> +#ifdef RTE_LIBRTE_ETHDEV_DEBUG
>> +		error = rte_validate_tx_offload(m);
>> +		if (unlikely(error)) {
>> +			rte_errno = -error;
>> +			break;
>> +		}
>> +#endif
>> +
>> +		error = rte_net_intel_cksum_prepare(m);
>> +		if (unlikely(error)) {
>> +			rte_errno = -error;
> It's a bit confusing here.
> Based on the API doc of rte_eth_tx_prepare():
>
> https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/lib/librte_ethdev/rte_ethdev.h#L4360-L4362
>
> It should set a negative value to rte_errno when an error happens,

I'm pretty sure that it is a bug in the documentation. rte_errno must
be positive. I'll send a patch to fix it. Even the code just below
sets a positive rte_errno.
Simple cases are very easy to find:
$ git grep 'rte_errno = E' | wc -l
557
$ git grep 'rte_errno = -E' | wc -l
50

A bit more complex cases which require careful review:
$ git grep -e 'rte_errno = -[a-z]' | wc -l
37
$ git grep -e 'rte_errno = [a-z]' | wc -l
150

The cases which look right at first sight outnumber the wrong ones
three to one, but that is still too many cases which are potentially
wrong.

Andrew.

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation 2019-06-05 7:28 ` Andrew Rybchenko @ 2019-06-05 9:37 ` Tiwei Bie 0 siblings, 0 replies; 14+ messages in thread From: Tiwei Bie @ 2019-06-05 9:37 UTC (permalink / raw) To: Andrew Rybchenko Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable On Wed, Jun 05, 2019 at 10:28:14AM +0300, Andrew Rybchenko wrote: > Hi, > > On 6/5/19 4:41 AM, Tiwei Bie wrote: > > Hi, > > Thanks for the patch! > > > More will follow. At least Tx checksum offload is broken when used together > with VLAN insertion since the later prepend to mbuf, but do nothing with > l2_len/outer_l2_len. We'll send a patch to move rte_vlan_insert() > to Tx prepare and suggest separate patch to update l2_len or outer_l2_len > when software VLAN insertion is done. > Thanks :) > > On Mon, Jun 03, 2019 at 07:50:05PM +0100, Andrew Rybchenko wrote: > [...] > > uint16_t > +virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, > + uint16_t nb_pkts) > +{ > + uint16_t nb_tx; > + int error; > + > + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { > + struct rte_mbuf *m = tx_pkts[nb_tx]; > + > +#ifdef RTE_LIBRTE_ETHDEV_DEBUG > + error = rte_validate_tx_offload(m); > + if (unlikely(error)) { > + rte_errno = -error; > + break; > + } > +#endif > + > + error = rte_net_intel_cksum_prepare(m); > + if (unlikely(error)) { > + rte_errno = -error; > > It's a bit confusing here. > Based on the API doc of rte_eth_tx_prepare(): > > https://github.com/DPDK/dpdk/blob/7f9f46d6cef5b03681a3935b9a18378e08ca6f62/lib/librte_ethdev/rte_ethdev.h#L4360-L4362 > > It should set negative value to rte_errno when error happens, > > > I'm pretty sure that it is a bug in documentation. rte_errno must be positive. Yeah. Negative rte_errno looks confusing. > I'll send a patch to fix it. > Even the code just below sets positive rte_errno. 
> Simple cases are very easy to find:
> $ git grep 'rte_errno = E' | wc -l
> 557
> $ git grep 'rte_errno = -E' | wc -l
> 50
>
> A bit more complex cases which require careful review:
> $ git grep -e 'rte_errno = -[a-z]' | wc -l
> 37
> $ git grep -e 'rte_errno = [a-z]' | wc -l
> 150
>
> The cases which look right at first sight outnumber the wrong ones
> three to one, but that is still too many cases which are potentially
> wrong.
>
> Andrew.

Thanks,
Tiwei

^ permalink raw reply	[flat|nested] 14+ messages in thread
* [dpdk-dev] [PATCH v3 1/2] net/virtio: add Tx preparation 2019-06-03 18:50 [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation Andrew Rybchenko 2019-06-05 1:41 ` Tiwei Bie @ 2019-06-16 10:17 ` Andrew Rybchenko 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko 2019-06-17 8:52 ` [dpdk-dev] [PATCH v3 1/2] net/virtio: add Tx preparation Tiwei Bie 2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 " Andrew Rybchenko 2 siblings, 2 replies; 14+ messages in thread From: Andrew Rybchenko @ 2019-06-16 10:17 UTC (permalink / raw) To: Maxime Coquelin, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Virtio requires pseudo-header checksum in TCP/UDP checksum to do offload, but it was lost when Tx prepare is introduced. Also rte_validate_tx_offload() should be used to validate Tx offloads. Also it is incorrect to do virtio_tso_fix_cksum() after prepend to mbuf without taking prepended size into account, since layer 2/3/4 lengths provide incorrect offsets after prepend. 
Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") Cc: stable@dpdk.org Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> --- v3: - add a patch to move soft VLAN insertion to Tx prepare drivers/net/virtio/virtio_ethdev.c | 1 + drivers/net/virtio/virtio_ethdev.h | 3 +++ drivers/net/virtio/virtio_rxtx.c | 32 +++++++++++++++++++++++++++++- 3 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index c4570bbf8..97d3c293e 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -1473,6 +1473,7 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev) { struct virtio_hw *hw = eth_dev->data->dev_private; + eth_dev->tx_pkt_prepare = virtio_xmit_pkts_prepare; if (vtpci_packed_queue(hw)) { PMD_INIT_LOG(INFO, "virtio: using packed ring %s Tx path on port %u", diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h index 45e96f32b..20d331795 100644 --- a/drivers/net/virtio/virtio_ethdev.h +++ b/drivers/net/virtio/virtio_ethdev.h @@ -89,6 +89,9 @@ uint16_t virtio_recv_mergeable_pkts_packed(void *rx_queue, uint16_t virtio_recv_pkts_inorder(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t virtio_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 1f1178467..07f8f47de 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -559,7 +559,6 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, /* TCP Segmentation Offload */ if (cookie->ol_flags & PKT_TX_TCP_SEG) { - virtio_tso_fix_cksum(cookie); hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ? 
VIRTIO_NET_HDR_GSO_TCPV6 : VIRTIO_NET_HDR_GSO_TCPV4; @@ -1949,6 +1948,37 @@ virtio_recv_mergeable_pkts_packed(void *rx_queue, return nb_rx; } +uint16_t +virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + int error; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + + if (m->ol_flags & PKT_TX_TCP_SEG) + virtio_tso_fix_cksum(m); + } + + return nb_tx; +} + uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -- 2.17.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 1/2] " Andrew Rybchenko @ 2019-06-16 10:17 ` Andrew Rybchenko 2019-06-17 8:54 ` Tiwei Bie 2019-06-17 8:52 ` [dpdk-dev] [PATCH v3 1/2] net/virtio: add Tx preparation Tiwei Bie 1 sibling, 1 reply; 14+ messages in thread From: Andrew Rybchenko @ 2019-06-16 10:17 UTC (permalink / raw) To: Maxime Coquelin, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> VLAN tag insertion should be in Tx prepare, not in Tx burst functions. Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") Cc: stable@dpdk.org Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> --- drivers/net/virtio/virtio_rxtx.c | 50 +++++++++----------------------- 1 file changed, 14 insertions(+), 36 deletions(-) diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 07f8f47de..dcce39e8c 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -1966,6 +1966,20 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, } #endif + /* Do VLAN tag insertion */ + if (unlikely(m->ol_flags & PKT_TX_VLAN_PKT)) { + error = rte_vlan_insert(&m); + /* rte_vlan_insert() may change pointer + * even in the case of failure + */ + tx_pkts[nb_tx] = m; + + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + error = rte_net_intel_cksum_prepare(m); if (unlikely(error)) { rte_errno = -error; @@ -1989,7 +2003,6 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t hdr_size = hw->vtnet_hdr_size; uint16_t nb_tx = 0; bool in_order = hw->use_inorder_tx; - int error; if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) return nb_tx; @@ -2007,17 +2020,6 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, struct rte_mbuf *txm = tx_pkts[nb_tx]; int 
can_push = 0, slots, need; - /* Do VLAN tag insertion */ - if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) { - error = rte_vlan_insert(&txm); - if (unlikely(error)) { - rte_pktmbuf_free(txm); - continue; - } - /* vlan_insert may add a header mbuf */ - tx_pkts[nb_tx] = txm; - } - /* optimize ring usage */ if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) || vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) && @@ -2077,7 +2079,6 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct virtio_hw *hw = vq->hw; uint16_t hdr_size = hw->vtnet_hdr_size; uint16_t nb_used, nb_tx = 0; - int error; if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) return nb_tx; @@ -2096,17 +2097,6 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct rte_mbuf *txm = tx_pkts[nb_tx]; int can_push = 0, use_indirect = 0, slots, need; - /* Do VLAN tag insertion */ - if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) { - error = rte_vlan_insert(&txm); - if (unlikely(error)) { - rte_pktmbuf_free(txm); - continue; - } - /* vlan_insert may add a header mbuf */ - tx_pkts[nb_tx] = txm; - } - /* optimize ring usage */ if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) || vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) && @@ -2176,7 +2166,6 @@ virtio_xmit_pkts_inorder(void *tx_queue, uint16_t hdr_size = hw->vtnet_hdr_size; uint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0; struct rte_mbuf *inorder_pkts[nb_pkts]; - int error; if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) return nb_tx; @@ -2201,17 +2190,6 @@ virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf *txm = tx_pkts[nb_tx]; int slots, need; - /* Do VLAN tag insertion */ - if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) { - error = rte_vlan_insert(&txm); - if (unlikely(error)) { - rte_pktmbuf_free(txm); - continue; - } - /* vlan_insert may add a header mbuf */ - tx_pkts[nb_tx] = txm; - } - /* optimize ring usage */ if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) 
|| vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) && -- 2.17.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
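The failure mode that makes VLAN insertion belong in Tx prepare shows up in a plain-buffer sketch of what rte_vlan_insert() does (hypothetical minimal helper; the real function works on an mbuf, adjusts its metadata, and may even change the mbuf pointer): with too little headroom the 4-byte 802.1Q tag cannot be spliced in, and Tx prepare can report that per packet instead of the burst function silently dropping it.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VLAN_TAG_LEN 4

/* Hypothetical minimal software VLAN insertion.  The frame starts at
 * buf[*off]; on success the 12 bytes of destination + source MAC
 * address move VLAN_TAG_LEN bytes toward the headroom and TPID 0x8100
 * plus the TCI are written right after them.  Returns -1 when the
 * headroom is too small to grow the frame. */
static int vlan_insert(uint8_t *buf, size_t *off, uint16_t tci)
{
	if (*off < VLAN_TAG_LEN)
		return -1;

	/* Shift the Ethernet addresses into the headroom. */
	memmove(buf + *off - VLAN_TAG_LEN, buf + *off, 12);
	*off -= VLAN_TAG_LEN;

	/* TPID 0x8100 followed by the 16-bit TCI, big endian. */
	buf[*off + 12] = 0x81;
	buf[*off + 13] = 0x00;
	buf[*off + 14] = (uint8_t)(tci >> 8);
	buf[*off + 15] = (uint8_t)(tci & 0xff);
	return 0;
}
```

Note that after this shift the L2 header is 4 bytes longer, which is exactly why l2_len/outer_l2_len must be updated for checksum fix-ups done afterwards.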
* Re: [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko @ 2019-06-17 8:54 ` Tiwei Bie 2019-06-17 11:35 ` Andrew Rybchenko 0 siblings, 1 reply; 14+ messages in thread From: Tiwei Bie @ 2019-06-17 8:54 UTC (permalink / raw) To: Andrew Rybchenko Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable On Sun, Jun 16, 2019 at 11:17:09AM +0100, Andrew Rybchenko wrote: > From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> > > VLAN tag insertion should be in Tx prepare, not in Tx burst functions. Please add some details in the commit log. Thanks, Tiwei > > Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") > Cc: stable@dpdk.org > > Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> > Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> > --- > drivers/net/virtio/virtio_rxtx.c | 50 +++++++++----------------------- > 1 file changed, 14 insertions(+), 36 deletions(-) > > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c > index 07f8f47de..dcce39e8c 100644 > --- a/drivers/net/virtio/virtio_rxtx.c > +++ b/drivers/net/virtio/virtio_rxtx.c > @@ -1966,6 +1966,20 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, > } > #endif > > + /* Do VLAN tag insertion */ > + if (unlikely(m->ol_flags & PKT_TX_VLAN_PKT)) { > + error = rte_vlan_insert(&m); > + /* rte_vlan_insert() may change pointer > + * even in the case of failure > + */ > + tx_pkts[nb_tx] = m; > + > + if (unlikely(error)) { > + rte_errno = -error; > + break; > + } > + } > + > error = rte_net_intel_cksum_prepare(m); > if (unlikely(error)) { > rte_errno = -error; > @@ -1989,7 +2003,6 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, > uint16_t hdr_size = hw->vtnet_hdr_size; > uint16_t nb_tx = 0; > bool in_order = hw->use_inorder_tx; > - int error; > > if (unlikely(hw->started == 0 && 
tx_pkts != hw->inject_pkts)) > return nb_tx; > @@ -2007,17 +2020,6 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, > struct rte_mbuf *txm = tx_pkts[nb_tx]; > int can_push = 0, slots, need; > > - /* Do VLAN tag insertion */ > - if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) { > - error = rte_vlan_insert(&txm); > - if (unlikely(error)) { > - rte_pktmbuf_free(txm); > - continue; > - } > - /* vlan_insert may add a header mbuf */ > - tx_pkts[nb_tx] = txm; > - } > - > /* optimize ring usage */ > if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) || > vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) && > @@ -2077,7 +2079,6 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) > struct virtio_hw *hw = vq->hw; > uint16_t hdr_size = hw->vtnet_hdr_size; > uint16_t nb_used, nb_tx = 0; > - int error; > > if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) > return nb_tx; > @@ -2096,17 +2097,6 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) > struct rte_mbuf *txm = tx_pkts[nb_tx]; > int can_push = 0, use_indirect = 0, slots, need; > > - /* Do VLAN tag insertion */ > - if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) { > - error = rte_vlan_insert(&txm); > - if (unlikely(error)) { > - rte_pktmbuf_free(txm); > - continue; > - } > - /* vlan_insert may add a header mbuf */ > - tx_pkts[nb_tx] = txm; > - } > - > /* optimize ring usage */ > if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) || > vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) && > @@ -2176,7 +2166,6 @@ virtio_xmit_pkts_inorder(void *tx_queue, > uint16_t hdr_size = hw->vtnet_hdr_size; > uint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0; > struct rte_mbuf *inorder_pkts[nb_pkts]; > - int error; > > if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) > return nb_tx; > @@ -2201,17 +2190,6 @@ virtio_xmit_pkts_inorder(void *tx_queue, > struct rte_mbuf *txm = tx_pkts[nb_tx]; > int slots, need; > > - /* Do VLAN tag insertion */ > - if 
(unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) { > - error = rte_vlan_insert(&txm); > - if (unlikely(error)) { > - rte_pktmbuf_free(txm); > - continue; > - } > - /* vlan_insert may add a header mbuf */ > - tx_pkts[nb_tx] = txm; > - } > - > /* optimize ring usage */ > if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) || > vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) && > -- > 2.17.1 > ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare
  2019-06-17  8:54     ` Tiwei Bie
@ 2019-06-17 11:35       ` Andrew Rybchenko
  0 siblings, 0 replies; 14+ messages in thread
From: Andrew Rybchenko @ 2019-06-17 11:35 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable

On 6/17/19 11:54 AM, Tiwei Bie wrote:
> On Sun, Jun 16, 2019 at 11:17:09AM +0100, Andrew Rybchenko wrote:
>> From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
>>
>> VLAN tag insertion should be in Tx prepare, not in Tx burst functions.
> Please add some details in the commit log.

Done, please see v4.

Also, please note that the following patch [1] fixes the problem with
Tx checksum offload after VLAN insertion.

Andrew.

[1] https://patches.dpdk.org/patch/54830/

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/2] net/virtio: add Tx preparation 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 1/2] " Andrew Rybchenko 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko @ 2019-06-17 8:52 ` Tiwei Bie 1 sibling, 0 replies; 14+ messages in thread From: Tiwei Bie @ 2019-06-17 8:52 UTC (permalink / raw) To: Andrew Rybchenko Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable On Sun, Jun 16, 2019 at 11:17:08AM +0100, Andrew Rybchenko wrote: > From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> > > Virtio requires pseudo-header checksum in TCP/UDP checksum to do > offload, but it was lost when Tx prepare is introduced. Also > rte_validate_tx_offload() should be used to validate Tx offloads. > > Also it is incorrect to do virtio_tso_fix_cksum() after prepend > to mbuf without taking prepended size into account, since layer 2/3/4 > lengths provide incorrect offsets after prepend. > > Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") > Cc: stable@dpdk.org > > Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> > Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> > --- Reviewed-by: Tiwei Bie <tiwei.bie@intel.com> Thanks, Tiwei ^ permalink raw reply [flat|nested] 14+ messages in thread
* [dpdk-dev] [PATCH v4 1/2] net/virtio: add Tx preparation 2019-06-03 18:50 [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation Andrew Rybchenko 2019-06-05 1:41 ` Tiwei Bie 2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 1/2] " Andrew Rybchenko @ 2019-06-17 11:31 ` Andrew Rybchenko 2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko 2019-06-20 9:58 ` [dpdk-dev] [PATCH v4 1/2] net/virtio: add Tx preparation Maxime Coquelin 2 siblings, 2 replies; 14+ messages in thread From: Andrew Rybchenko @ 2019-06-17 11:31 UTC (permalink / raw) To: Maxime Coquelin, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Virtio requires pseudo-header checksum in TCP/UDP checksum to do offload, but it was lost when Tx prepare is introduced. Also rte_validate_tx_offload() should be used to validate Tx offloads. Also it is incorrect to do virtio_tso_fix_cksum() after prepend to mbuf without taking prepended size into account, since layer 2/3/4 lengths provide incorrect offsets after prepend. 
Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") Cc: stable@dpdk.org Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> Reviewed-by: Tiwei Bie <tiwei.bie@intel.com> --- drivers/net/virtio/virtio_ethdev.c | 1 + drivers/net/virtio/virtio_ethdev.h | 3 +++ drivers/net/virtio/virtio_rxtx.c | 32 +++++++++++++++++++++++++++++- 3 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index c4570bbf8..97d3c293e 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -1473,6 +1473,7 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev) { struct virtio_hw *hw = eth_dev->data->dev_private; + eth_dev->tx_pkt_prepare = virtio_xmit_pkts_prepare; if (vtpci_packed_queue(hw)) { PMD_INIT_LOG(INFO, "virtio: using packed ring %s Tx path on port %u", diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h index 45e96f32b..20d331795 100644 --- a/drivers/net/virtio/virtio_ethdev.h +++ b/drivers/net/virtio/virtio_ethdev.h @@ -89,6 +89,9 @@ uint16_t virtio_recv_mergeable_pkts_packed(void *rx_queue, uint16_t virtio_recv_pkts_inorder(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t virtio_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 1f1178467..07f8f47de 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -559,7 +559,6 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, /* TCP Segmentation Offload */ if (cookie->ol_flags & PKT_TX_TCP_SEG) { - virtio_tso_fix_cksum(cookie); hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ? 
VIRTIO_NET_HDR_GSO_TCPV6 : VIRTIO_NET_HDR_GSO_TCPV4; @@ -1949,6 +1948,37 @@ virtio_recv_mergeable_pkts_packed(void *rx_queue, return nb_rx; } +uint16_t +virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + int error; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + + if (m->ol_flags & PKT_TX_TCP_SEG) + virtio_tso_fix_cksum(m); + } + + return nb_tx; +} + uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -- 2.17.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
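The calling convention this tx_pkt_prepare callback plugs into can be mocked without DPDK (hypothetical names and data layout; the real pair is rte_eth_tx_prepare() followed by rte_eth_tx_burst() on a port/queue, and a packet that fails preparation also sets rte_errno):

```c
#include <assert.h>

#define BAD 1	/* flag marking a packet that fails the offload checks */

/* Stand-in for a PMD tx_pkt_prepare callback such as
 * virtio_xmit_pkts_prepare(): walk the burst, stop at the first
 * packet failing the checks, return how many were prepared. */
static unsigned int mock_tx_prepare(const int *flags, unsigned int nb)
{
	unsigned int i;

	for (i = 0; i < nb; i++) {
		if (flags[i] & BAD)
			break;	/* real code also sets rte_errno here */
	}
	return i;
}

/* Callers transmit only the successfully prepared prefix; this is why
 * preparation can run in advance, even on a different core. */
static unsigned int mock_send(const int *flags, unsigned int nb)
{
	unsigned int nb_prep = mock_tx_prepare(flags, nb);

	/* rte_eth_tx_burst(port, queue, pkts, nb_prep) would go here. */
	return nb_prep;
}
```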
* [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare 2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 " Andrew Rybchenko @ 2019-06-17 11:31 ` Andrew Rybchenko 2019-06-18 4:42 ` Tiwei Bie 2019-06-20 9:58 ` Maxime Coquelin 2019-06-20 9:58 ` [dpdk-dev] [PATCH v4 1/2] net/virtio: add Tx preparation Maxime Coquelin 1 sibling, 2 replies; 14+ messages in thread From: Andrew Rybchenko @ 2019-06-17 11:31 UTC (permalink / raw) To: Maxime Coquelin, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> VLAN tag insertion should be in Tx prepare, not in Tx burst functions. One of Tx prepare goals is to be able to do preparations in advance (possibliy on different CPU core) and then transmit it fast. Also Tx prepare can report that a packet does not pass Tx offloads check. E.g. has no enough headroom to insert VLAN header. Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation") Cc: stable@dpdk.org Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com> --- v4: - add more details to commit log drivers/net/virtio/virtio_rxtx.c | 50 +++++++++----------------------- 1 file changed, 14 insertions(+), 36 deletions(-) diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 07f8f47de..dcce39e8c 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -1966,6 +1966,20 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, } #endif + /* Do VLAN tag insertion */ + if (unlikely(m->ol_flags & PKT_TX_VLAN_PKT)) { + error = rte_vlan_insert(&m); + /* rte_vlan_insert() may change pointer + * even in the case of failure + */ + tx_pkts[nb_tx] = m; + + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + error = rte_net_intel_cksum_prepare(m); if (unlikely(error)) { rte_errno = -error; @@ -1989,7 +2003,6 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf 
 **tx_pkts,
 	uint16_t hdr_size = hw->vtnet_hdr_size;
 	uint16_t nb_tx = 0;
 	bool in_order = hw->use_inorder_tx;
-	int error;
 
 	if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
 		return nb_tx;
@@ -2007,17 +2020,6 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		int can_push = 0, slots, need;
 
-		/* Do VLAN tag insertion */
-		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
-			error = rte_vlan_insert(&txm);
-			if (unlikely(error)) {
-				rte_pktmbuf_free(txm);
-				continue;
-			}
-			/* vlan_insert may add a header mbuf */
-			tx_pkts[nb_tx] = txm;
-		}
-
 		/* optimize ring usage */
 		if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
 		     vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
@@ -2077,7 +2079,6 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct virtio_hw *hw = vq->hw;
 	uint16_t hdr_size = hw->vtnet_hdr_size;
 	uint16_t nb_used, nb_tx = 0;
-	int error;
 
 	if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
 		return nb_tx;
@@ -2096,17 +2097,6 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		int can_push = 0, use_indirect = 0, slots, need;
 
-		/* Do VLAN tag insertion */
-		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
-			error = rte_vlan_insert(&txm);
-			if (unlikely(error)) {
-				rte_pktmbuf_free(txm);
-				continue;
-			}
-			/* vlan_insert may add a header mbuf */
-			tx_pkts[nb_tx] = txm;
-		}
-
 		/* optimize ring usage */
 		if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
 		     vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
@@ -2176,7 +2166,6 @@ virtio_xmit_pkts_inorder(void *tx_queue,
 	uint16_t hdr_size = hw->vtnet_hdr_size;
 	uint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0;
 	struct rte_mbuf *inorder_pkts[nb_pkts];
-	int error;
 
 	if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
 		return nb_tx;
@@ -2201,17 +2190,6 @@ virtio_xmit_pkts_inorder(void *tx_queue,
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		int slots, need;
 
-		/* Do VLAN tag insertion */
-		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
-			error = rte_vlan_insert(&txm);
-			if (unlikely(error)) {
-				rte_pktmbuf_free(txm);
-				continue;
-			}
-			/* vlan_insert may add a header mbuf */
-			tx_pkts[nb_tx] = txm;
-		}
-
 		/* optimize ring usage */
 		if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
 		     vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
-- 
2.17.1

^ permalink raw reply	[flat|nested] 14+ messages in thread
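The pattern the series converges on — do all checks and mbuf rewrites in a prepare step, so the burst path stays a pure transmit loop — can be modelled in plain C. This is a toy sketch with invented types (`toy_mbuf`, `OL_TX_TCP_SEG`), not the real rte_mbuf layout or the actual rte_eth_tx_prepare() semantics:

```c
#include <stddef.h>
#include <stdint.h>

#define OL_TX_TCP_SEG (1u << 0)   /* TSO requested (toy flag) */

struct toy_mbuf {
	uint32_t ol_flags;
	uint16_t l2_len, l3_len, l4_len;
	uint16_t tso_segsz;
};

/* Toy stand-in for rte_validate_tx_offload(): reject inconsistent
 * offload requests before they reach the ring. */
int validate_tx_offload(const struct toy_mbuf *m)
{
	if ((m->ol_flags & OL_TX_TCP_SEG) &&
	    (m->tso_segsz == 0 || m->l2_len == 0 ||
	     m->l3_len == 0 || m->l4_len == 0))
		return -1;	/* TSO needs seg size and header lengths */
	return 0;
}

/* Prepare stage: returns how many leading packets passed; only that
 * prefix is handed to the burst function, mirroring how
 * rte_eth_tx_prepare() stops at the first failing packet. */
size_t tx_prepare(struct toy_mbuf *pkts[], size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (validate_tx_offload(pkts[i]) != 0)
			break;
	return i;
}
```

The application calls the prepare stage (possibly on another core, as the commit log notes) and transmits only the validated prefix.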
* Re: [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare
  2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko
@ 2019-06-18  4:42 ` Tiwei Bie
  2019-06-20  9:58 ` Maxime Coquelin
  1 sibling, 0 replies; 14+ messages in thread
From: Tiwei Bie @ 2019-06-18 4:42 UTC (permalink / raw)
To: Andrew Rybchenko
Cc: Maxime Coquelin, Zhihong Wang, dev, Dilshod Urazov, stable

On Mon, Jun 17, 2019 at 12:31:38PM +0100, Andrew Rybchenko wrote:
> From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
>
> VLAN tag insertion should be in Tx prepare, not in the Tx burst functions.
> One of the Tx prepare goals is to be able to do preparations in advance
> (possibly on a different CPU core) and then transmit fast.
> Also Tx prepare can report that a packet does not pass the Tx offloads
> check, e.g. does not have enough headroom to insert a VLAN header.
>
> Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
> Cc: stable@dpdk.org
>
> Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> v4:
>     - add more details to commit log
>
>  drivers/net/virtio/virtio_rxtx.c | 50 +++++++++-----------------------
>  1 file changed, 14 insertions(+), 36 deletions(-)

Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>

Thanks,
Tiwei
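The headroom failure the commit log mentions is the reason the rte_vlan_insert() call belongs in Tx prepare: growing the frame by the 4-byte 802.1Q tag needs free space in front of the data. A minimal, self-contained sketch of that insertion on a toy buffer (invented `struct pkt`, not the DPDK mbuf API, which may also chain a new header mbuf):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy packet buffer: the frame starts at buf + off, so `off` bytes of
 * headroom are available in front of it. */
struct pkt {
	uint8_t buf[128];
	size_t off;	/* headroom in front of the frame */
	size_t len;	/* frame length */
};

/* Insert an 802.1Q tag after the two MAC addresses (12 bytes).
 * Returns -1 when there is not enough headroom -- exactly the
 * condition a Tx prepare step can report back to the caller. */
int vlan_insert(struct pkt *p, uint16_t tci)
{
	uint8_t *old = p->buf + p->off;
	uint8_t *dst = old - 4;

	if (p->off < 4)
		return -1;		/* no room to grow the frame */
	memmove(dst, old, 12);		/* shift dst+src MAC forward */
	dst[12] = 0x81;			/* TPID 0x8100 */
	dst[13] = 0x00;
	dst[14] = tci >> 8;		/* TCI, big endian */
	dst[15] = tci & 0xff;
	p->off -= 4;
	p->len += 4;
	return 0;
}
```

On success the original EtherType ends up right after the tag, 4 bytes later than before; on failure the packet is left untouched so the caller can drop or fix it.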
* Re: [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare
  2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko
  2019-06-18  4:42 ` Tiwei Bie
@ 2019-06-20  9:58 ` Maxime Coquelin
  1 sibling, 0 replies; 14+ messages in thread
From: Maxime Coquelin @ 2019-06-20 9:58 UTC (permalink / raw)
To: Andrew Rybchenko, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable

On 6/17/19 1:31 PM, Andrew Rybchenko wrote:
> From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
>
> VLAN tag insertion should be in Tx prepare, not in the Tx burst functions.
> One of the Tx prepare goals is to be able to do preparations in advance
> (possibly on a different CPU core) and then transmit fast.
> Also Tx prepare can report that a packet does not pass the Tx offloads
> check, e.g. does not have enough headroom to insert a VLAN header.
>
> Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
> Cc: stable@dpdk.org
>
> Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> v4:
>     - add more details to commit log
>
>  drivers/net/virtio/virtio_rxtx.c | 50 +++++++++-----------------------
>  1 file changed, 14 insertions(+), 36 deletions(-)

Applied to dpdk-next-virtio/master.

Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v4 1/2] net/virtio: add Tx preparation
  2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 " Andrew Rybchenko
  2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko
@ 2019-06-20  9:58 ` Maxime Coquelin
  1 sibling, 0 replies; 14+ messages in thread
From: Maxime Coquelin @ 2019-06-20 9:58 UTC (permalink / raw)
To: Andrew Rybchenko, Tiwei Bie, Zhihong Wang; +Cc: dev, Dilshod Urazov, stable

On 6/17/19 1:31 PM, Andrew Rybchenko wrote:
> From: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
>
> Virtio requires the pseudo-header checksum in the TCP/UDP checksum field
> to do offload, but it was lost when Tx prepare was introduced. Also
> rte_validate_tx_offload() should be used to validate Tx offloads.
>
> Also it is incorrect to do virtio_tso_fix_cksum() after the prepend to
> the mbuf without taking the prepended size into account, since the
> layer 2/3/4 lengths provide incorrect offsets after the prepend.
>
> Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
> Cc: stable@dpdk.org
>
> Signed-off-by: Dilshod Urazov <Dilshod.Urazov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c |  1 +
>  drivers/net/virtio/virtio_ethdev.h |  3 +++
>  drivers/net/virtio/virtio_rxtx.c   | 32 +++++++++++++++++++++++++++++-
>  3 files changed, 35 insertions(+), 1 deletion(-)

Applied to dpdk-next-virtio/master.

Thanks,
Maxime
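The pseudo-header checksum the commit log refers to is the ones'-complement sum of the IP pseudo header (source/destination address, protocol, L4 length) that must be seeded into the TCP/UDP checksum field before the device finishes the sum. A self-contained sketch of the computation for IPv4 — a simplified model working on host-order values for clarity, whereas DPDK's rte_ipv4_phdr_cksum() operates on the network-order header in place:

```c
#include <stdint.h>

/* Fold a 32-bit ones'-complement accumulator into 16 bits. */
uint16_t csum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Ones'-complement sum of the IPv4 pseudo header: src IP, dst IP,
 * zero byte + protocol, L4 length. The result is written into the
 * TCP/UDP checksum field so the device can complete the checksum. */
uint16_t phdr_cksum(uint32_t src_ip, uint32_t dst_ip,
		    uint8_t proto, uint16_t l4_len)
{
	uint32_t sum = 0;

	sum += src_ip >> 16;		/* high 16 bits of src address */
	sum += src_ip & 0xffff;		/* low 16 bits of src address */
	sum += dst_ip >> 16;
	sum += dst_ip & 0xffff;
	sum += proto;			/* zero byte + protocol number */
	sum += l4_len;
	return csum_fold(sum);
}
```

Note the sum is not complemented: the device continues accumulating over the L4 payload, which is why dropping this seeding step (as happened when Tx prepare was introduced) silently breaks checksum offload.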
end of thread, other threads:[~2019-06-20  9:58 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-03 18:50 [dpdk-dev] [PATCH v2] net/virtio: add Tx preparation Andrew Rybchenko
2019-06-05  1:41 ` Tiwei Bie
2019-06-05  7:28   ` Andrew Rybchenko
2019-06-05  9:37     ` Tiwei Bie
2019-06-16 10:17 ` [dpdk-dev] [PATCH v3 1/2] " Andrew Rybchenko
2019-06-16 10:17   ` [dpdk-dev] [PATCH v3 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko
2019-06-17  8:54     ` Tiwei Bie
2019-06-17 11:35       ` Andrew Rybchenko
2019-06-17  8:52   ` [dpdk-dev] [PATCH v3 1/2] net/virtio: add Tx preparation Tiwei Bie
2019-06-17 11:31 ` [dpdk-dev] [PATCH v4 " Andrew Rybchenko
2019-06-17 11:31   ` [dpdk-dev] [PATCH v4 2/2] net/virtio: move VLAN tag insertion to Tx prepare Andrew Rybchenko
2019-06-18  4:42     ` Tiwei Bie
2019-06-20  9:58     ` Maxime Coquelin
2019-06-20  9:58   ` [dpdk-dev] [PATCH v4 1/2] net/virtio: add Tx preparation Maxime Coquelin