DPDK patches and discussions
* [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO
@ 2019-04-02  9:28 Andrew Rybchenko
  2019-04-02  9:28 ` Andrew Rybchenko
                   ` (13 more replies)
  0 siblings, 14 replies; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev

Move existing Tx offload checks to the Tx prepare stage and add the
missing ones. Keep only the absolutely required checks in Tx burst to
avoid memory corruption and segmentation faults.

There are a few checkpatches.sh warnings since positive errno values are
used inside the driver.

The patch series depends on [1] and should be applied only after it.
[1] is acked by Olivier and was acked by Konstantin Ananyev at the RFC
stage, with the note that more testing is required.

[1] https://patches.dpdk.org/patch/51908/

Igor Romanov (9):
  net/sfc: improve TSO header length check in EFX datapath
  net/sfc: improve TSO header length check in EF10 datapath
  net/sfc: make TSO descriptor numbers EF10-specific
  net/sfc: support Tx preparation in EFX datapath
  net/sfc: support Tx preparation in EF10 datapath
  net/sfc: support Tx preparation in EF10 simple datapath
  net/sfc: move TSO header checks from Tx burst to Tx prepare
  net/sfc: introduce descriptor space check in Tx prepare
  net/sfc: add TSO header length check to Tx prepare

Ivan Malov (3):
  net/sfc: factor out function to get IPv4 packet ID for TSO
  net/sfc: improve log message about missing HW TSO support
  net/sfc: support tunnel TSO on EF10 native Tx datapath

 doc/guides/nics/sfc_efx.rst            |   2 +-
 doc/guides/rel_notes/release_19_05.rst |   2 +
 drivers/net/sfc/sfc.c                  |   9 +-
 drivers/net/sfc/sfc.h                  |   1 +
 drivers/net/sfc/sfc_dp_tx.h            |  84 ++++++++++++
 drivers/net/sfc/sfc_ef10_tx.c          | 172 ++++++++++++++++++++-----
 drivers/net/sfc/sfc_ethdev.c           |   4 +
 drivers/net/sfc/sfc_tso.c              |  46 +++----
 drivers/net/sfc/sfc_tso.h              |  16 ++-
 drivers/net/sfc/sfc_tx.c               |  59 +++++++--
 10 files changed, 322 insertions(+), 73 deletions(-)

-- 
2.17.1


* [dpdk-dev] [PATCH 01/12] net/sfc: improve TSO header length check in EFX datapath
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
  2019-04-02  9:28 ` Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 02/12] net/sfc: improve TSO header length check in EF10 datapath Andrew Rybchenko
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, stable

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header
is not fragmented a bit faster.

Fixes: fec33d5bb3eb ("net/sfc: support firmware-assisted TSO")
Cc: stable@dpdk.org

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_tso.c | 11 +++++++----
 drivers/net/sfc/sfc_tx.c  |  3 ++-
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 076a25d44..a28af0e78 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -107,10 +107,6 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 
 	idx += SFC_TSO_OPT_DESCS_NUM;
 
-	/* Packets which have too big headers should be discarded */
-	if (unlikely(header_len > SFC_TSOH_STD_LEN))
-		return EMSGSIZE;
-
 	/*
 	 * The TCP header must start at most 208 bytes into the frame.
 	 * If it starts later than this then the NIC won't realise
@@ -129,6 +125,13 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	 * limitations on address boundaries crossing by DMA descriptor data.
 	 */
 	if (m->data_len < header_len) {
+		/*
+		 * Discard a packet if header linearization is needed but
+		 * the header is too big.
+		 */
+		if (unlikely(header_len > SFC_TSOH_STD_LEN))
+			return EMSGSIZE;
+
 		tsoh = txq->sw_ring[idx & txq->ptr_mask].tsoh;
 		sfc_tso_prepare_header(tsoh, header_len, in_seg, in_off);
 
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index c3e0936cc..4b1f94ce8 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -760,7 +760,8 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				/* We may have reached this place for
 				 * one of the following reasons:
 				 *
-				 * 1) Packet header length is greater
+				 * 1) Packet header linearization is needed
+				 *    and the header length is greater
 				 *    than SFC_TSOH_STD_LEN
 				 * 2) TCP header starts at more then
 				 *    208 bytes into the frame
-- 
2.17.1


* [dpdk-dev] [PATCH 02/12] net/sfc: improve TSO header length check in EF10 datapath
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
  2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 01/12] net/sfc: improve TSO header length check in EFX datapath Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 03/12] net/sfc: make TSO descriptor numbers EF10-specific Andrew Rybchenko
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, stable

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header
is not fragmented a bit faster.

Fixes: 6bc985e41155 ("net/sfc: support TSO in EF10 Tx datapath")
Cc: stable@dpdk.org

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_tx.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 0711c1136..97b1b6252 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -340,9 +340,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	struct rte_mbuf *m_seg_to_free_up_to = first_m_seg;
 	bool eop;
 
-	/* Both checks may be done, so use bit OR to have only one branching */
-	if (unlikely((header_len > SFC_TSOH_STD_LEN) |
-		     (tcph_off > txq->tso_tcp_header_offset_limit)))
+	if (unlikely(tcph_off > txq->tso_tcp_header_offset_limit))
 		return EMSGSIZE;
 
 	/*
@@ -407,6 +405,13 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		unsigned int hdr_addr_off = (*added & txq->ptr_mask) *
 				SFC_TSOH_STD_LEN;
 
+		/*
+		 * Discard a packet if header linearization is needed but
+		 * the header is too big.
+		 */
+		if (unlikely(header_len > SFC_TSOH_STD_LEN))
+			return EMSGSIZE;
+
 		hdr_addr = txq->tsoh + hdr_addr_off;
 		hdr_iova = txq->tsoh_iova + hdr_addr_off;
 		copied_segs = sfc_tso_prepare_header(hdr_addr, header_len,
-- 
2.17.1


* [dpdk-dev] [PATCH 03/12] net/sfc: make TSO descriptor numbers EF10-specific
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (2 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 02/12] net/sfc: improve TSO header length check in EF10 datapath Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 04/12] net/sfc: support Tx preparation in EFX datapath Andrew Rybchenko
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The numbers of extra descriptors required for TSO are in fact
EF10-specific. Reflect this in the define names.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_tx.c | 12 ++++++------
 drivers/net/sfc/sfc_tso.c     |  2 +-
 drivers/net/sfc/sfc_tso.h     |  4 ++--
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 97b1b6252..999dabd12 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -351,8 +351,8 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 * several descriptors.
 	 */
 	needed_desc = m_seg->nb_segs +
-			(unsigned int)SFC_TSO_OPT_DESCS_NUM +
-			(unsigned int)SFC_TSO_HDR_DESCS_NUM;
+			(unsigned int)SFC_EF10_TSO_OPT_DESCS_NUM +
+			(unsigned int)SFC_EF10_TSO_HDR_DESCS_NUM;
 
 	if (needed_desc > *dma_desc_space &&
 	    !sfc_ef10_try_reap(txq, pkt_start, needed_desc,
@@ -369,8 +369,8 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		 * descriptors, header descriptor and at least 1
 		 * segment descriptor.
 		 */
-		if (*dma_desc_space < SFC_TSO_OPT_DESCS_NUM +
-				SFC_TSO_HDR_DESCS_NUM + 1)
+		if (*dma_desc_space < SFC_EF10_TSO_OPT_DESCS_NUM +
+				SFC_EF10_TSO_HDR_DESCS_NUM + 1)
 			return EMSGSIZE;
 	}
 
@@ -386,7 +386,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 			 * Associate header mbuf with header descriptor
 			 * which is located after TSO descriptors.
 			 */
-			txq->sw_ring[(pkt_start + SFC_TSO_OPT_DESCS_NUM) &
+			txq->sw_ring[(pkt_start + SFC_EF10_TSO_OPT_DESCS_NUM) &
 				     txq->ptr_mask].mbuf = m_seg;
 			m_seg = m_seg->next;
 			in_off = 0;
@@ -455,7 +455,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 
 	sfc_ef10_tx_qdesc_tso2_create(txq, *added, packet_id, 0, sent_seq,
 			first_m_seg->tso_segsz);
-	(*added) += SFC_TSO_OPT_DESCS_NUM;
+	(*added) += SFC_EF10_TSO_OPT_DESCS_NUM;
 
 	sfc_ef10_tx_qdesc_dma_create(hdr_iova, header_len, false,
 			&txq->txq_hw_ring[(*added) & txq->ptr_mask]);
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index a28af0e78..1ce787f3c 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -105,7 +105,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	size_t header_len = m->l2_len + m->l3_len + m->l4_len;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(txq->evq->sa->nic);
 
-	idx += SFC_TSO_OPT_DESCS_NUM;
+	idx += SFC_EF10_TSO_OPT_DESCS_NUM;
 
 	/*
 	 * The TCP header must start at most 208 bytes into the frame.
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index f89aef07c..cd151782f 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -18,13 +18,13 @@ extern "C" {
 #define SFC_TSOH_STD_LEN	256
 
 /** The number of TSO option descriptors that precede the packet descriptors */
-#define SFC_TSO_OPT_DESCS_NUM	2
+#define SFC_EF10_TSO_OPT_DESCS_NUM	2
 
 /**
  * The number of DMA descriptors for TSO header that may or may not precede the
  * packet's payload descriptors
  */
-#define SFC_TSO_HDR_DESCS_NUM	1
+#define SFC_EF10_TSO_HDR_DESCS_NUM	1
 
 unsigned int sfc_tso_prepare_header(uint8_t *tsoh, size_t header_len,
 				    struct rte_mbuf **in_seg, size_t *in_off);
-- 
2.17.1


* [dpdk-dev] [PATCH 04/12] net/sfc: support Tx preparation in EFX datapath
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (3 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 03/12] net/sfc: make TSO descriptor numbers EF10-specific Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 05/12] net/sfc: support Tx preparation in EF10 datapath Andrew Rybchenko
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Implement generic checks in the Tx prepare function and update the
Tx burst function accordingly.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_19_05.rst |  1 +
 drivers/net/sfc/sfc_dp_tx.h            | 24 ++++++++++++++++++++++++
 drivers/net/sfc/sfc_ethdev.c           |  4 ++++
 drivers/net/sfc/sfc_tso.c              | 13 +++++++------
 drivers/net/sfc/sfc_tx.c               | 20 ++++++++++++++++++++
 5 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..173c852c8 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -73,6 +73,7 @@ New Features
   * Added support for RSS RETA and hash configuration get API in a secondary
     process.
   * Added support for Rx packet types list in a secondary process.
+  * Added Tx prepare to do Tx offloads checks.
 
 * **Updated Mellanox drivers.**
 
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 9cb2198e2..885094b67 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -13,6 +13,7 @@
 #include <rte_ethdev_driver.h>
 
 #include "sfc_dp.h"
+#include "sfc_debug.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -170,6 +171,7 @@ struct sfc_dp_tx {
 	sfc_dp_tx_qtx_ev_t		*qtx_ev;
 	sfc_dp_tx_qreap_t		*qreap;
 	sfc_dp_tx_qdesc_status_t	*qdesc_status;
+	eth_tx_prep_t			pkt_prepare;
 	eth_tx_burst_t			pkt_burst;
 };
 
@@ -192,6 +194,28 @@ sfc_dp_find_tx_by_caps(struct sfc_dp_list *head, unsigned int avail_caps)
 /** Get Tx datapath ops by the datapath TxQ handle */
 const struct sfc_dp_tx *sfc_dp_tx_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 
+static inline int
+sfc_dp_tx_prepare_pkt(struct rte_mbuf *m)
+{
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+	int ret;
+
+	ret = rte_validate_tx_offload(m);
+	if (ret != 0) {
+		/*
+		 * Negative error code is returned by rte_validate_tx_offload(),
+		 * but positive are used inside net/sfc PMD.
+		 */
+		SFC_ASSERT(ret < 0);
+		return -ret;
+	}
+#else
+	RTE_SET_USED(m);
+#endif
+
+	return 0;
+}
+
 extern struct sfc_dp_tx sfc_efx_tx;
 extern struct sfc_dp_tx sfc_ef10_tx;
 extern struct sfc_dp_tx sfc_ef10_simple_tx;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2675d4a8c..6c33601e7 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1854,6 +1854,7 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	sa->priv.dp_tx = dp_tx;
 
 	dev->rx_pkt_burst = dp_rx->pkt_burst;
+	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 
 	dev->dev_ops = &sfc_eth_dev_ops;
@@ -1881,6 +1882,7 @@ sfc_eth_dev_clear_ops(struct rte_eth_dev *dev)
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 
 	dev->dev_ops = NULL;
+	dev->tx_pkt_prepare = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
 
@@ -1961,6 +1963,7 @@ sfc_eth_dev_secondary_init(struct rte_eth_dev *dev, uint32_t logtype_main)
 
 	dev->process_private = sap;
 	dev->rx_pkt_burst = dp_rx->pkt_burst;
+	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 	dev->dev_ops = &sfc_eth_dev_secondary_ops;
 
@@ -1982,6 +1985,7 @@ sfc_eth_dev_secondary_clear_ops(struct rte_eth_dev *dev)
 	free(dev->process_private);
 	dev->process_private = NULL;
 	dev->dev_ops = NULL;
+	dev->tx_pkt_prepare = NULL;
 	dev->tx_pkt_burst = NULL;
 	dev->rx_pkt_burst = NULL;
 }
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 1ce787f3c..2c03c0837 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -97,7 +97,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	uint8_t *tsoh;
 	const struct tcp_hdr *th;
 	efsys_dma_addr_t header_paddr;
-	uint16_t packet_id;
+	uint16_t packet_id = 0;
 	uint32_t sent_seq;
 	struct rte_mbuf *m = *in_seg;
 	size_t nh_off = m->l2_len; /* IP header offset */
@@ -147,17 +147,18 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 		tsoh = rte_pktmbuf_mtod(m, uint8_t *);
 	}
 
-	/* Handle IP header */
+	/*
+	 * Handle IP header. Tx prepare has debug-only checks that offload flags
+	 * are correctly filled in in TSO mbuf. Use zero IPID if there is no
+	 * IPv4 flag. If the packet is still IPv4, HW will simply start from
+	 * zero IPID.
+	 */
 	if (m->ol_flags & PKT_TX_IPV4) {
 		const struct ipv4_hdr *iphe4;
 
 		iphe4 = (const struct ipv4_hdr *)(tsoh + nh_off);
 		rte_memcpy(&packet_id, &iphe4->packet_id, sizeof(uint16_t));
 		packet_id = rte_be_to_cpu_16(packet_id);
-	} else if (m->ol_flags & PKT_TX_IPV6) {
-		packet_id = 0;
-	} else {
-		return EINVAL;
 	}
 
 	/* Handle TCP header */
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 4b1f94ce8..16fd220bf 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -697,6 +697,25 @@ sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 	return 1;
 }
 
+static uint16_t
+sfc_efx_prepare_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		int ret;
+
+		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i]);
+		if (unlikely(ret != 0)) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	return i;
+}
+
 static uint16_t
 sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -1122,5 +1141,6 @@ struct sfc_dp_tx sfc_efx_tx = {
 	.qstop			= sfc_efx_tx_qstop,
 	.qreap			= sfc_efx_tx_qreap,
 	.qdesc_status		= sfc_efx_tx_qdesc_status,
+	.pkt_prepare		= sfc_efx_prepare_pkts,
 	.pkt_burst		= sfc_efx_xmit_pkts,
 };
-- 
2.17.1


* [dpdk-dev] [PATCH 04/12] net/sfc: support Tx preparation in EFX datapath
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 04/12] net/sfc: support Tx preparation in EFX datapath Andrew Rybchenko
@ 2019-04-02  9:28   ` Andrew Rybchenko
  0 siblings, 0 replies; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Implement generic checks in Tx prepare function and update Tx burst
function accordingly.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_19_05.rst |  1 +
 drivers/net/sfc/sfc_dp_tx.h            | 24 ++++++++++++++++++++++++
 drivers/net/sfc/sfc_ethdev.c           |  4 ++++
 drivers/net/sfc/sfc_tso.c              | 13 +++++++------
 drivers/net/sfc/sfc_tx.c               | 20 ++++++++++++++++++++
 5 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..173c852c8 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -73,6 +73,7 @@ New Features
   * Added support for RSS RETA and hash configuration get API in a secondary
     process.
   * Added support for Rx packet types list in a secondary process.
+  * Added Tx prepare to do Tx offloads checks.
 
 * **Updated Mellanox drivers.**
 
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 9cb2198e2..885094b67 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -13,6 +13,7 @@
 #include <rte_ethdev_driver.h>
 
 #include "sfc_dp.h"
+#include "sfc_debug.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -170,6 +171,7 @@ struct sfc_dp_tx {
 	sfc_dp_tx_qtx_ev_t		*qtx_ev;
 	sfc_dp_tx_qreap_t		*qreap;
 	sfc_dp_tx_qdesc_status_t	*qdesc_status;
+	eth_tx_prep_t			pkt_prepare;
 	eth_tx_burst_t			pkt_burst;
 };
 
@@ -192,6 +194,28 @@ sfc_dp_find_tx_by_caps(struct sfc_dp_list *head, unsigned int avail_caps)
 /** Get Tx datapath ops by the datapath TxQ handle */
 const struct sfc_dp_tx *sfc_dp_tx_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 
+static inline int
+sfc_dp_tx_prepare_pkt(struct rte_mbuf *m)
+{
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+	int ret;
+
+	ret = rte_validate_tx_offload(m);
+	if (ret != 0) {
+		/*
+		 * Negative error code is returned by rte_validate_tx_offload(),
+		 * but positive are used inside net/sfc PMD.
+		 */
+		SFC_ASSERT(ret < 0);
+		return -ret;
+	}
+#else
+	RTE_SET_USED(m);
+#endif
+
+	return 0;
+}
+
 extern struct sfc_dp_tx sfc_efx_tx;
 extern struct sfc_dp_tx sfc_ef10_tx;
 extern struct sfc_dp_tx sfc_ef10_simple_tx;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2675d4a8c..6c33601e7 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1854,6 +1854,7 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	sa->priv.dp_tx = dp_tx;
 
 	dev->rx_pkt_burst = dp_rx->pkt_burst;
+	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 
 	dev->dev_ops = &sfc_eth_dev_ops;
@@ -1881,6 +1882,7 @@ sfc_eth_dev_clear_ops(struct rte_eth_dev *dev)
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 
 	dev->dev_ops = NULL;
+	dev->tx_pkt_prepare = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
 
@@ -1961,6 +1963,7 @@ sfc_eth_dev_secondary_init(struct rte_eth_dev *dev, uint32_t logtype_main)
 
 	dev->process_private = sap;
 	dev->rx_pkt_burst = dp_rx->pkt_burst;
+	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 	dev->dev_ops = &sfc_eth_dev_secondary_ops;
 
@@ -1982,6 +1985,7 @@ sfc_eth_dev_secondary_clear_ops(struct rte_eth_dev *dev)
 	free(dev->process_private);
 	dev->process_private = NULL;
 	dev->dev_ops = NULL;
+	dev->tx_pkt_prepare = NULL;
 	dev->tx_pkt_burst = NULL;
 	dev->rx_pkt_burst = NULL;
 }
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 1ce787f3c..2c03c0837 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -97,7 +97,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	uint8_t *tsoh;
 	const struct tcp_hdr *th;
 	efsys_dma_addr_t header_paddr;
-	uint16_t packet_id;
+	uint16_t packet_id = 0;
 	uint32_t sent_seq;
 	struct rte_mbuf *m = *in_seg;
 	size_t nh_off = m->l2_len; /* IP header offset */
@@ -147,17 +147,18 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 		tsoh = rte_pktmbuf_mtod(m, uint8_t *);
 	}
 
-	/* Handle IP header */
+	/*
+	 * Handle IP header. Tx prepare has debug-only checks that offload
+	 * flags are correctly filled in for a TSO mbuf. Use zero IPID if
+	 * there is no IPv4 flag. If the packet is still IPv4, HW will simply
+	 * start from zero IPID.
+	 */
 	if (m->ol_flags & PKT_TX_IPV4) {
 		const struct ipv4_hdr *iphe4;
 
 		iphe4 = (const struct ipv4_hdr *)(tsoh + nh_off);
 		rte_memcpy(&packet_id, &iphe4->packet_id, sizeof(uint16_t));
 		packet_id = rte_be_to_cpu_16(packet_id);
-	} else if (m->ol_flags & PKT_TX_IPV6) {
-		packet_id = 0;
-	} else {
-		return EINVAL;
 	}
 
 	/* Handle TCP header */
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 4b1f94ce8..16fd220bf 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -697,6 +697,25 @@ sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 	return 1;
 }
 
+static uint16_t
+sfc_efx_prepare_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		int ret;
+
+		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i]);
+		if (unlikely(ret != 0)) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	return i;
+}
+
 static uint16_t
 sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -1122,5 +1141,6 @@ struct sfc_dp_tx sfc_efx_tx = {
 	.qstop			= sfc_efx_tx_qstop,
 	.qreap			= sfc_efx_tx_qreap,
 	.qdesc_status		= sfc_efx_tx_qdesc_status,
+	.pkt_prepare		= sfc_efx_prepare_pkts,
 	.pkt_burst		= sfc_efx_xmit_pkts,
 };
-- 
2.17.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH 05/12] net/sfc: support Tx preparation in EF10 datapath
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (4 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 04/12] net/sfc: support Tx preparation in EFX datapath Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 06/12] net/sfc: support Tx preparation in EF10 simple datapath Andrew Rybchenko
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <Igor.Romanov@oktetlabs.ru>

Implement tx_prepare callback and update Tx burst function accordingly.

Signed-off-by: Igor Romanov <Igor.Romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_tx.c | 56 ++++++++++++++++++++++++++++-------
 1 file changed, 46 insertions(+), 10 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 999dabd12..05f30cb2e 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -319,6 +319,44 @@ sfc_ef10_try_reap(struct sfc_ef10_txq * const txq, unsigned int added,
 	return (needed_desc <= *dma_desc_space);
 }
 
+static uint16_t
+sfc_ef10_prepare_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		      uint16_t nb_pkts)
+{
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		struct rte_mbuf *m = tx_pkts[i];
+		int ret;
+
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+		/*
+		 * In the non-TSO case, check that packet segments do not
+		 * exceed the size limit. Perform the check in debug mode only
+		 * since MTU above 9K is not supported, but the limit is 16K-1.
+		 */
+		if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+			struct rte_mbuf *m_seg;
+
+			for (m_seg = m; m_seg != NULL; m_seg = m_seg->next) {
+				if (m_seg->data_len >
+				    SFC_EF10_TX_DMA_DESC_LEN_MAX) {
+					rte_errno = EINVAL;
+					break;
+				}
+			}
+		}
+#endif
+		ret = sfc_dp_tx_prepare_pkt(m);
+		if (unlikely(ret != 0)) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	return i;
+}
+
 static int
 sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
@@ -330,7 +368,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	/* Offset of the payload in the last segment that contains the header */
 	size_t in_off = 0;
 	const struct tcp_hdr *th;
-	uint16_t packet_id;
+	uint16_t packet_id = 0;
 	uint32_t sent_seq;
 	uint8_t *hdr_addr;
 	rte_iova_t hdr_iova;
@@ -433,20 +471,17 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 			needed_desc--;
 	}
 
-	switch (first_m_seg->ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) {
-	case PKT_TX_IPV4: {
+	/*
+	 * Tx prepare has debug-only checks that offload flags are correctly
+	 * filled in for a TSO mbuf. Use zero IPID if there is no IPv4 flag.
+	 * If the packet is still IPv4, HW will simply start from zero IPID.
+	 */
+	if (first_m_seg->ol_flags & PKT_TX_IPV4) {
 		const struct ipv4_hdr *iphe4;
 
 		iphe4 = (const struct ipv4_hdr *)(hdr_addr + iph_off);
 		rte_memcpy(&packet_id, &iphe4->packet_id, sizeof(uint16_t));
 		packet_id = rte_be_to_cpu_16(packet_id);
-		break;
-	}
-	case PKT_TX_IPV6:
-		packet_id = 0;
-		break;
-	default:
-		return EINVAL;
 	}
 
 	th = (const struct tcp_hdr *)(hdr_addr + tcph_off);
@@ -1014,6 +1049,7 @@ struct sfc_dp_tx sfc_ef10_tx = {
 	.qstop			= sfc_ef10_tx_qstop,
 	.qreap			= sfc_ef10_tx_qreap,
 	.qdesc_status		= sfc_ef10_tx_qdesc_status,
+	.pkt_prepare		= sfc_ef10_prepare_pkts,
 	.pkt_burst		= sfc_ef10_xmit_pkts,
 };
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH 06/12] net/sfc: support Tx preparation in EF10 simple datapath
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (5 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 05/12] net/sfc: support Tx preparation in EF10 datapath Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 07/12] net/sfc: move TSO header checks from Tx burst to Tx prepare Andrew Rybchenko
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Implement the tx_prepare callback. The implementation performs checks
only in RTE debug mode; no checks are done otherwise because the EF10
simple datapath ignores Tx offloads.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_tx.c | 59 +++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 05f30cb2e..b317997ca 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -750,6 +750,62 @@ sfc_ef10_simple_tx_reap(struct sfc_ef10_txq *txq)
 			   txq->evq_read_ptr);
 }
 
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+static uint16_t
+sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
+			     struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		struct rte_mbuf *m = tx_pkts[i];
+		int ret;
+
+		ret = rte_validate_tx_offload(m);
+		if (unlikely(ret != 0)) {
+			/*
+			 * Negative error code is returned by
+			 * rte_validate_tx_offload(), but positive are used
+			 * inside net/sfc PMD.
+			 */
+			SFC_ASSERT(ret < 0);
+			rte_errno = -ret;
+			break;
+		}
+
+		/* ef10_simple does not support TSO or VLAN insertion */
+		if (unlikely(m->ol_flags &
+			     (PKT_TX_TCP_SEG | PKT_TX_VLAN_PKT))) {
+			rte_errno = ENOTSUP;
+			break;
+		}
+
+		/* ef10_simple does not support scattered packets */
+		if (unlikely(m->nb_segs != 1)) {
+			rte_errno = ENOTSUP;
+			break;
+		}
+
+		/*
+		 * ef10_simple requires fast-free which ignores reference
+		 * counters
+		 */
+		if (unlikely(rte_mbuf_refcnt_read(m) != 1)) {
+			rte_errno = ENOTSUP;
+			break;
+		}
+
+		/* ef10_simple requires single pool for all packets */
+		if (unlikely(m->pool != tx_pkts[0]->pool)) {
+			rte_errno = ENOTSUP;
+			break;
+		}
+	}
+
+	return i;
+}
+#endif
 
 static uint16_t
 sfc_ef10_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1068,5 +1124,8 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 	.qstop			= sfc_ef10_tx_qstop,
 	.qreap			= sfc_ef10_tx_qreap,
 	.qdesc_status		= sfc_ef10_tx_qdesc_status,
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+	.pkt_prepare		= sfc_ef10_simple_prepare_pkts,
+#endif
 	.pkt_burst		= sfc_ef10_simple_xmit_pkts,
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH 07/12] net/sfc: move TSO header checks from Tx burst to Tx prepare
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (6 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 06/12] net/sfc: support Tx preparation in EF10 simple datapath Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 08/12] net/sfc: introduce descriptor space check in " Andrew Rybchenko
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Tx offload checks should be done in Tx prepare.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp_tx.h   | 12 +++++++++---
 drivers/net/sfc/sfc_ef10_tx.c |  9 ++++-----
 drivers/net/sfc/sfc_tso.c     |  9 ---------
 drivers/net/sfc/sfc_tx.c      | 20 ++++++++++----------
 4 files changed, 23 insertions(+), 27 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 885094b67..c42d0d01f 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -195,7 +195,8 @@ sfc_dp_find_tx_by_caps(struct sfc_dp_list *head, unsigned int avail_caps)
 const struct sfc_dp_tx *sfc_dp_tx_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 
 static inline int
-sfc_dp_tx_prepare_pkt(struct rte_mbuf *m)
+sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
+			   uint32_t tso_tcp_header_offset_limit)
 {
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
 	int ret;
@@ -209,10 +210,15 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m)
 		SFC_ASSERT(ret < 0);
 		return -ret;
 	}
-#else
-	RTE_SET_USED(m);
 #endif
 
+	if (m->ol_flags & PKT_TX_TCP_SEG) {
+		unsigned int tcph_off = m->l2_len + m->l3_len;
+
+		if (unlikely(tcph_off > tso_tcp_header_offset_limit))
+			return EINVAL;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index b317997ca..3d6ba4292 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -320,9 +320,10 @@ sfc_ef10_try_reap(struct sfc_ef10_txq * const txq, unsigned int added,
 }
 
 static uint16_t
-sfc_ef10_prepare_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		      uint16_t nb_pkts)
 {
+	struct sfc_ef10_txq * const txq = sfc_ef10_txq_by_dp_txq(tx_queue);
 	uint16_t i;
 
 	for (i = 0; i < nb_pkts; i++) {
@@ -347,7 +348,8 @@ sfc_ef10_prepare_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 			}
 		}
 #endif
-		ret = sfc_dp_tx_prepare_pkt(m);
+		ret = sfc_dp_tx_prepare_pkt(m,
+				txq->tso_tcp_header_offset_limit);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
@@ -378,9 +380,6 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	struct rte_mbuf *m_seg_to_free_up_to = first_m_seg;
 	bool eop;
 
-	if (unlikely(tcph_off > txq->tso_tcp_header_offset_limit))
-		return EMSGSIZE;
-
 	/*
 	 * Preliminary estimation of required DMA descriptors, including extra
 	 * descriptor for TSO header that is needed when the header is
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 2c03c0837..f46c0e912 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -103,18 +103,9 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	size_t nh_off = m->l2_len; /* IP header offset */
 	size_t tcph_off = m->l2_len + m->l3_len; /* TCP header offset */
 	size_t header_len = m->l2_len + m->l3_len + m->l4_len;
-	const efx_nic_cfg_t *encp = efx_nic_cfg_get(txq->evq->sa->nic);
 
 	idx += SFC_EF10_TSO_OPT_DESCS_NUM;
 
-	/*
-	 * The TCP header must start at most 208 bytes into the frame.
-	 * If it starts later than this then the NIC won't realise
-	 * it's a TCP packet and TSO edits won't be applied
-	 */
-	if (unlikely(tcph_off > encp->enc_tx_tso_tcp_header_offset_limit))
-		return EMSGSIZE;
-
 	header_paddr = rte_pktmbuf_iova(m);
 
 	/*
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 16fd220bf..e128bff90 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -698,15 +698,19 @@ sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 }
 
 static uint16_t
-sfc_efx_prepare_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+sfc_efx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		     uint16_t nb_pkts)
 {
+	struct sfc_dp_txq *dp_txq = tx_queue;
+	struct sfc_efx_txq *txq = sfc_efx_txq_by_dp_txq(dp_txq);
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(txq->evq->sa->nic);
 	uint16_t i;
 
 	for (i = 0; i < nb_pkts; i++) {
 		int ret;
 
-		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i]);
+		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i],
+				encp->enc_tx_tso_tcp_header_offset_limit);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
@@ -776,14 +780,10 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 */
 			if (sfc_efx_tso_do(txq, added, &m_seg, &in_off, &pend,
 					   &pkt_descs, &pkt_len) != 0) {
-				/* We may have reached this place for
-				 * one of the following reasons:
-				 *
-				 * 1) Packet header linearization is needed
-				 *    and the header length is greater
-				 *    than SFC_TSOH_STD_LEN
-				 * 2) TCP header starts at more then
-				 *    208 bytes into the frame
+				/* We may have reached this place if packet
+				 * header linearization is needed but the
+				 * header length is greater than
+				 * SFC_TSOH_STD_LEN
 				 *
 				 * We will deceive RTE saying that we have sent
 				 * the packet, but we will actually drop it.
-- 
2.17.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH 08/12] net/sfc: introduce descriptor space check in Tx prepare
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (7 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 07/12] net/sfc: move TSO header checks from Tx burst to Tx prepare Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 09/12] net/sfc: add TSO header length check to " Andrew Rybchenko
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a descriptor space check to the Tx prepare function to inform the
caller that a packet which needs more Tx descriptors than the queue can
hold cannot be sent.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp_tx.h   | 31 ++++++++++++++++++++++++++++++-
 drivers/net/sfc/sfc_ef10_tx.c |  4 +++-
 drivers/net/sfc/sfc_tx.c      |  9 ++++++++-
 3 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index c42d0d01f..ebc941857 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -196,8 +196,13 @@ const struct sfc_dp_tx *sfc_dp_tx_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 
 static inline int
 sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
-			   uint32_t tso_tcp_header_offset_limit)
+			   uint32_t tso_tcp_header_offset_limit,
+			   unsigned int max_fill_level,
+			   unsigned int nb_tso_descs,
+			   unsigned int nb_vlan_descs)
 {
+	unsigned int descs_required = m->nb_segs;
+
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
 	int ret;
 
@@ -214,11 +219,35 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 
 	if (m->ol_flags & PKT_TX_TCP_SEG) {
 		unsigned int tcph_off = m->l2_len + m->l3_len;
+		unsigned int header_len = tcph_off + m->l4_len;
 
 		if (unlikely(tcph_off > tso_tcp_header_offset_limit))
 			return EINVAL;
+
+		descs_required += nb_tso_descs;
+
+		/*
+		 * Extra descriptor that is required when a packet header
+		 * is separated from the remaining content of the first segment.
+		 */
+		if (rte_pktmbuf_data_len(m) > header_len)
+			descs_required++;
 	}
 
+	/*
+	 * The number of VLAN descriptors is added regardless of the requested
+	 * VLAN offload since VLAN is sticky and sending a packet without VLAN
+	 * insertion may require a VLAN descriptor to reset the sticky to 0.
+	 */
+	descs_required += nb_vlan_descs;
+
+	/*
+	 * Max fill level must be sufficient to hold all required descriptors
+	 * to send the packet entirely.
+	 */
+	if (descs_required > max_fill_level)
+		return ENOBUFS;
+
 	return 0;
 }
 
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 3d6ba4292..e7ab993dd 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -349,7 +349,9 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 #endif
 		ret = sfc_dp_tx_prepare_pkt(m,
-				txq->tso_tcp_header_offset_limit);
+				txq->tso_tcp_header_offset_limit,
+				txq->max_fill_level,
+				SFC_EF10_TSO_OPT_DESCS_NUM, 0);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index e128bff90..4037802e6 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -709,8 +709,15 @@ sfc_efx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		int ret;
 
+		/*
+		 * The EFX Tx datapath may require an extra VLAN descriptor
+		 * for VLAN insertion, regardless of whether the offload is
+		 * requested or supported.
+		 */
 		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i],
-				encp->enc_tx_tso_tcp_header_offset_limit);
+				encp->enc_tx_tso_tcp_header_offset_limit,
+				txq->max_fill_level, EFX_TX_FATSOV2_OPT_NDESCS,
+				1);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 28+ messages in thread


* [dpdk-dev] [PATCH 09/12] net/sfc: add TSO header length check to Tx prepare
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (8 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 08/12] net/sfc: introduce descriptor space check in " Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 10/12] net/sfc: factor out function to get IPv4 packet ID for TSO Andrew Rybchenko
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the Tx prepare function able to detect packets with an invalid
header size when header linearization is required.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp_tx.h   | 11 ++++++++++-
 drivers/net/sfc/sfc_ef10_tx.c |  2 ++
 drivers/net/sfc/sfc_tso.c     |  2 ++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index ebc941857..ae5524f24 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -14,6 +14,7 @@
 
 #include "sfc_dp.h"
 #include "sfc_debug.h"
+#include "sfc_tso.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -230,8 +231,16 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 		 * Extra descriptor that is required when a packet header
 		 * is separated from remaining content of the first segment.
 		 */
-		if (rte_pktmbuf_data_len(m) > header_len)
+		if (rte_pktmbuf_data_len(m) > header_len) {
 			descs_required++;
+		} else if (rte_pktmbuf_data_len(m) < header_len &&
+			 unlikely(header_len > SFC_TSOH_STD_LEN)) {
+			/*
+			 * Header linearization is required and
+			 * the header is too big to be linearized
+			 */
+			return EINVAL;
+		}
 	}
 
 	/*
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index e7ab993dd..959408449 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -447,6 +447,8 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		/*
 		 * Discard a packet if header linearization is needed but
 		 * the header is too big.
+		 * The Tx prepare check is duplicated here to avoid
+		 * memory corruption if Tx prepare is skipped.
 		 */
 		if (unlikely(header_len > SFC_TSOH_STD_LEN))
 			return EMSGSIZE;
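
The discard condition shared by the prepare stage and both datapaths can be expressed as a tiny predicate. The sketch below uses illustrative names, with a hypothetical 256-byte bounce buffer standing in for SFC_TSOH_STD_LEN; it fails only when the header spans segments (so linearization is needed) and is too big to fit the buffer:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define TSOH_BUF_LEN 256	/* illustrative stand-in for SFC_TSOH_STD_LEN */

/*
 * Return 0 when the TSO header can be handled, EINVAL when header
 * linearization would be needed but the header exceeds the buffer.
 */
static int
tso_header_check(size_t first_seg_len, size_t header_len)
{
	/* Header split across segments and too big to linearize */
	if (first_seg_len < header_len && header_len > TSOH_BUF_LEN)
		return EINVAL;
	return 0;
}
```

Running this check at prepare time lets the application learn about the bad packet up front, while the duplicated datapath check protects memory when prepare is skipped.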
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index f46c0e912..a882e64dd 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -119,6 +119,8 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 		/*
 		 * Discard a packet if header linearization is needed but
 		 * the header is too big.
+		 * The Tx prepare check is duplicated here to avoid
+		 * memory corruption if Tx prepare is skipped.
 		 */
 		if (unlikely(header_len > SFC_TSOH_STD_LEN))
 			return EMSGSIZE;
-- 
2.17.1



* [dpdk-dev] [PATCH 10/12] net/sfc: factor out function to get IPv4 packet ID for TSO
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (9 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 09/12] net/sfc: add TSO header length check to " Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 11/12] net/sfc: improve log message about missing HW TSO support Andrew Rybchenko
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

This avoids code duplication in the current TSO implementations
(EFX and EF10 native). A future patch adding tunnel TSO support
will also reuse the new function.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_tx.c |  9 ++-------
 drivers/net/sfc/sfc_tso.c     |  9 ++-------
 drivers/net/sfc/sfc_tso.h     | 12 ++++++++++++
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 959408449..bcbd15d55 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -479,13 +479,8 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 * filled in in TSO mbuf. Use zero IPID if there is no IPv4 flag.
 	 * If the packet is still IPv4, HW will simply start from zero IPID.
 	 */
-	if (first_m_seg->ol_flags & PKT_TX_IPV4) {
-		const struct ipv4_hdr *iphe4;
-
-		iphe4 = (const struct ipv4_hdr *)(hdr_addr + iph_off);
-		rte_memcpy(&packet_id, &iphe4->packet_id, sizeof(uint16_t));
-		packet_id = rte_be_to_cpu_16(packet_id);
-	}
+	if (first_m_seg->ol_flags & PKT_TX_IPV4)
+		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
 	th = (const struct tcp_hdr *)(hdr_addr + tcph_off);
 	rte_memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index a882e64dd..1374aceaa 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -146,13 +146,8 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	 * IPv4 flag. If the packet is still IPv4, HW will simply start from
 	 * zero IPID.
 	 */
-	if (m->ol_flags & PKT_TX_IPV4) {
-		const struct ipv4_hdr *iphe4;
-
-		iphe4 = (const struct ipv4_hdr *)(tsoh + nh_off);
-		rte_memcpy(&packet_id, &iphe4->packet_id, sizeof(uint16_t));
-		packet_id = rte_be_to_cpu_16(packet_id);
-	}
+	if (m->ol_flags & PKT_TX_IPV4)
+		packet_id = sfc_tso_ip4_get_ipid(tsoh, nh_off);
 
 	/* Handle TCP header */
 	th = (const struct tcp_hdr *)(tsoh + tcph_off);
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index cd151782f..8ecefdfd2 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -26,6 +26,18 @@ extern "C" {
  */
 #define SFC_EF10_TSO_HDR_DESCS_NUM	1
 
+static inline uint16_t
+sfc_tso_ip4_get_ipid(const uint8_t *pkt_hdrp, size_t ip_hdr_off)
+{
+	const struct ipv4_hdr *ip_hdrp;
+	uint16_t ipid;
+
+	ip_hdrp = (const struct ipv4_hdr *)(pkt_hdrp + ip_hdr_off);
+	rte_memcpy(&ipid, &ip_hdrp->packet_id, sizeof(ipid));
+
+	return rte_be_to_cpu_16(ipid);
+}
+
 unsigned int sfc_tso_prepare_header(uint8_t *tsoh, size_t header_len,
 				    struct rte_mbuf **in_seg, size_t *in_off);
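
The new helper reads the Identification field through rte_memcpy because the IPv4 header may sit at an unaligned offset within the packet buffer. An analogous standalone version, using plain memcpy and an explicit byte swap instead of the DPDK helpers (names illustrative), could look like:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Read the big-endian IPv4 Identification field (byte offset 4 of the
 * IPv4 header) without assuming alignment; illustrative analogue of
 * sfc_tso_ip4_get_ipid().
 */
static uint16_t
ip4_get_ipid(const uint8_t *pkt, size_t ip_hdr_off)
{
	uint8_t b[2];

	/* memcpy avoids a potentially unaligned 16-bit load */
	memcpy(b, pkt + ip_hdr_off + 4, sizeof(b));

	/* convert from network (big-endian) to host byte order */
	return (uint16_t)((b[0] << 8) | b[1]);
}
```

A direct `*(const uint16_t *)` cast would be undefined behaviour on strict-alignment targets, which is the design reason for the copy.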
 
-- 
2.17.1



* [dpdk-dev] [PATCH 11/12] net/sfc: improve log message about missing HW TSO support
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (10 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 10/12] net/sfc: factor out function to get IPv4 packet ID for TSO Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 12/12] net/sfc: support tunnel TSO on EF10 native Tx datapath Andrew Rybchenko
  2019-04-03 18:03 ` [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Ferruh Yigit
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

This message cannot be considered a warning since the PMD
reports the available offload capabilities via the device info
interface anyway. Make the log message informational and
improve its formatting by placing the text on the same line.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index fd4156f78..dee468f89 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -747,8 +747,7 @@ sfc_attach(struct sfc_adapter *sa)
 	if (sa->priv.dp_tx->features & SFC_DP_TX_FEAT_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled;
 		if (!sa->tso)
-			sfc_warn(sa,
-				 "TSO support isn't available on this adapter");
+			sfc_info(sa, "TSO support isn't available on this adapter");
 	}
 
 	sfc_log_init(sa, "estimate resource limits");
-- 
2.17.1



* [dpdk-dev] [PATCH 12/12] net/sfc: support tunnel TSO on EF10 native Tx datapath
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (11 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 11/12] net/sfc: improve log message about missing HW TSO support Andrew Rybchenko
@ 2019-04-02  9:28 ` Andrew Rybchenko
  2019-04-02  9:28   ` Andrew Rybchenko
  2019-04-03 18:03 ` [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Ferruh Yigit
  13 siblings, 1 reply; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

Handle VXLAN and GENEVE TSO on EF10 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst            |  2 +-
 doc/guides/rel_notes/release_19_05.rst |  1 +
 drivers/net/sfc/sfc.c                  |  6 ++++++
 drivers/net/sfc/sfc.h                  |  1 +
 drivers/net/sfc/sfc_dp_tx.h            | 18 +++++++++++++++++-
 drivers/net/sfc/sfc_ef10_tx.c          | 22 ++++++++++++++++------
 drivers/net/sfc/sfc_tx.c               | 17 +++++++++++++++--
 7 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 028c92cc3..eb47f25e3 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -66,7 +66,7 @@ SFC EFX PMD has support for:
 
 - Allmulticast mode
 
-- TCP segmentation offload (TSO)
+- TCP segmentation offload (TSO), including VXLAN and GENEVE encapsulated packets
 
 - Multicast MAC filter
 
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 173c852c8..f434c4823 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -74,6 +74,7 @@ New Features
     process.
   * Added support for Rx packet types list in a secondary process.
   * Added Tx prepare to do Tx offloads checks.
+  * Added support for VXLAN and GENEVE encapsulated TSO.
 
 * **Updated Mellanox drivers.**
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index dee468f89..406386a8c 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -750,6 +750,12 @@ sfc_attach(struct sfc_adapter *sa)
 			sfc_info(sa, "TSO support isn't available on this adapter");
 	}
 
+	if (sa->tso && sa->priv.dp_tx->features & SFC_DP_TX_FEAT_TSO_ENCAP) {
+		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled;
+		if (!sa->tso_encap)
+			sfc_info(sa, "Encapsulated TSO support isn't available on this adapter");
+	}
+
 	sfc_log_init(sa, "estimate resource limits");
 	rc = sfc_estimate_resource_limits(sa);
 	if (rc != 0)
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index a4b9a3f33..ecd20e546 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -286,6 +286,7 @@ struct sfc_adapter {
 	struct sfc_txq			*txq_ctrl;
 
 	boolean_t			tso;
+	boolean_t			tso_encap;
 
 	uint32_t			rxd_wait_timeout_ns;
 };
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index ae5524f24..72a69149b 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -163,6 +163,7 @@ struct sfc_dp_tx {
 #define SFC_DP_TX_FEAT_MULTI_PROCESS	0x8
 #define SFC_DP_TX_FEAT_MULTI_POOL	0x10
 #define SFC_DP_TX_FEAT_REFCNT		0x20
+#define SFC_DP_TX_FEAT_TSO_ENCAP	0x40
 	sfc_dp_tx_get_dev_info_t	*get_dev_info;
 	sfc_dp_tx_qsize_up_rings_t	*qsize_up_rings;
 	sfc_dp_tx_qcreate_t		*qcreate;
@@ -220,7 +221,22 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 
 	if (m->ol_flags & PKT_TX_TCP_SEG) {
 		unsigned int tcph_off = m->l2_len + m->l3_len;
-		unsigned int header_len = tcph_off + m->l4_len;
+		unsigned int header_len;
+
+		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+		case 0:
+			break;
+		case PKT_TX_TUNNEL_VXLAN:
+			/* FALLTHROUGH */
+		case PKT_TX_TUNNEL_GENEVE:
+			if (!(m->ol_flags &
+			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+				return EINVAL;
+
+			tcph_off += m->outer_l2_len + m->outer_l3_len;
+		}
+
+		header_len = tcph_off + m->l4_len;
 
 		if (unlikely(tcph_off > tso_tcp_header_offset_limit))
 			return EINVAL;
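
The offset arithmetic for encapsulated packets — the inner headers start only after the outer L2 and L3 headers — can be checked in isolation. In the sketch below the struct fields mirror the mbuf length fields the driver reads, but the types and names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative bundle of the mbuf header-length fields used */
struct hdr_lens {
	size_t outer_l2_len;	/* outer headers (tunnel case only) */
	size_t outer_l3_len;
	size_t l2_len;		/* inner (or only) headers */
	size_t l3_len;
	size_t l4_len;
};

/* Offset of the (inner) TCP header from the start of the packet */
static size_t
tso_tcph_off(const struct hdr_lens *h, bool tunnel)
{
	/* For tunnel TSO the inner IP header follows the outer headers */
	size_t iph_off = (tunnel ? h->outer_l2_len + h->outer_l3_len : 0) +
			 h->l2_len;

	return iph_off + h->l3_len;
}
```

The prepared header length is then this offset plus l4_len, and the prepare stage rejects the packet when the offset exceeds the hardware's TCP header offset limit.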
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index bcbd15d55..055389efe 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -366,13 +366,16 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
 		      bool *reap_done)
 {
-	size_t iph_off = m_seg->l2_len;
-	size_t tcph_off = m_seg->l2_len + m_seg->l3_len;
-	size_t header_len = m_seg->l2_len + m_seg->l3_len + m_seg->l4_len;
+	size_t iph_off = ((m_seg->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			  m_seg->outer_l2_len + m_seg->outer_l3_len : 0) +
+			 m_seg->l2_len;
+	size_t tcph_off = iph_off + m_seg->l3_len;
+	size_t header_len = tcph_off + m_seg->l4_len;
 	/* Offset of the payload in the last segment that contains the header */
 	size_t in_off = 0;
 	const struct tcp_hdr *th;
 	uint16_t packet_id = 0;
+	uint16_t outer_packet_id = 0;
 	uint32_t sent_seq;
 	uint8_t *hdr_addr;
 	rte_iova_t hdr_iova;
@@ -482,12 +485,16 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	if (first_m_seg->ol_flags & PKT_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
+	if (first_m_seg->ol_flags & PKT_TX_OUTER_IPV4)
+		outer_packet_id = sfc_tso_ip4_get_ipid(hdr_addr,
+						first_m_seg->outer_l2_len);
+
 	th = (const struct tcp_hdr *)(hdr_addr + tcph_off);
 	rte_memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
 	sent_seq = rte_be_to_cpu_32(sent_seq);
 
-	sfc_ef10_tx_qdesc_tso2_create(txq, *added, packet_id, 0, sent_seq,
-			first_m_seg->tso_segsz);
+	sfc_ef10_tx_qdesc_tso2_create(txq, *added, packet_id, outer_packet_id,
+			sent_seq, first_m_seg->tso_segsz);
 	(*added) += SFC_EF10_TSO_OPT_DESCS_NUM;
 
 	sfc_ef10_tx_qdesc_dma_create(hdr_iova, header_len, false,
@@ -927,7 +934,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
+			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1090,6 +1099,7 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_TSO |
+				  SFC_DP_TX_FEAT_TSO_ENCAP |
 				  SFC_DP_TX_FEAT_MULTI_SEG |
 				  SFC_DP_TX_FEAT_MULTI_POOL |
 				  SFC_DP_TX_FEAT_REFCNT |
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 4037802e6..e1ef00cc7 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -70,6 +70,10 @@ sfc_tx_get_queue_offload_caps(struct sfc_adapter *sa)
 	if (sa->tso)
 		caps |= DEV_TX_OFFLOAD_TCP_TSO;
 
+	if (sa->tso_encap)
+		caps |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+			 DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+
 	return caps;
 }
 
@@ -469,7 +473,9 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
+				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -588,18 +594,25 @@ int
 sfc_tx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	unsigned int sw_index;
 	int rc = 0;
 
 	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
 
 	if (sa->tso) {
-		if (!efx_nic_cfg_get(sa->nic)->enc_fw_assisted_tso_v2_enabled) {
+		if (!encp->enc_fw_assisted_tso_v2_enabled) {
 			sfc_warn(sa, "TSO support was unable to be restored");
 			sa->tso = B_FALSE;
+			sa->tso_encap = B_FALSE;
 		}
 	}
 
+	if (sa->tso_encap && !encp->enc_fw_assisted_tso_v2_encap_enabled) {
+		sfc_warn(sa, "Encapsulated TSO support was unable to be restored");
+		sa->tso_encap = B_FALSE;
+	}
+
 	rc = efx_tx_init(sa->nic);
 	if (rc != 0)
 		goto fail_efx_tx_init;
-- 
2.17.1


* [dpdk-dev] [PATCH 12/12] net/sfc: support tunnel TSO on EF10 native Tx datapath
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 12/12] net/sfc: support tunnel TSO on EF10 native Tx datapath Andrew Rybchenko
@ 2019-04-02  9:28   ` Andrew Rybchenko
  0 siblings, 0 replies; 28+ messages in thread
From: Andrew Rybchenko @ 2019-04-02  9:28 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

Handle VXLAN and GENEVE TSO on EF10 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst            |  2 +-
 doc/guides/rel_notes/release_19_05.rst |  1 +
 drivers/net/sfc/sfc.c                  |  6 ++++++
 drivers/net/sfc/sfc.h                  |  1 +
 drivers/net/sfc/sfc_dp_tx.h            | 18 +++++++++++++++++-
 drivers/net/sfc/sfc_ef10_tx.c          | 22 ++++++++++++++++------
 drivers/net/sfc/sfc_tx.c               | 17 +++++++++++++++--
 7 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 028c92cc3..eb47f25e3 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -66,7 +66,7 @@ SFC EFX PMD has support for:
 
 - Allmulticast mode
 
-- TCP segmentation offload (TSO)
+- TCP segmentation offload (TSO) including VXLAN and GENEVE encapsulated
 
 - Multicast MAC filter
 
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 173c852c8..f434c4823 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -74,6 +74,7 @@ New Features
     process.
   * Added support for Rx packet types list in a secondary process.
   * Added Tx prepare to do Tx offloads checks.
+  * Added support for VXLAN and GENEVE encapsulated TSO.
 
 * **Updated Mellanox drivers.**
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index dee468f89..406386a8c 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -750,6 +750,12 @@ sfc_attach(struct sfc_adapter *sa)
 			sfc_info(sa, "TSO support isn't available on this adapter");
 	}
 
+	if (sa->tso && sa->priv.dp_tx->features & SFC_DP_TX_FEAT_TSO_ENCAP) {
+		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled;
+		if (!sa->tso_encap)
+			sfc_info(sa, "Encapsulated TSO support isn't available on this adapter");
+	}
+
 	sfc_log_init(sa, "estimate resource limits");
 	rc = sfc_estimate_resource_limits(sa);
 	if (rc != 0)
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index a4b9a3f33..ecd20e546 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -286,6 +286,7 @@ struct sfc_adapter {
 	struct sfc_txq			*txq_ctrl;
 
 	boolean_t			tso;
+	boolean_t			tso_encap;
 
 	uint32_t			rxd_wait_timeout_ns;
 };
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index ae5524f24..72a69149b 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -163,6 +163,7 @@ struct sfc_dp_tx {
 #define SFC_DP_TX_FEAT_MULTI_PROCESS	0x8
 #define SFC_DP_TX_FEAT_MULTI_POOL	0x10
 #define SFC_DP_TX_FEAT_REFCNT		0x20
+#define SFC_DP_TX_FEAT_TSO_ENCAP	0x40
 	sfc_dp_tx_get_dev_info_t	*get_dev_info;
 	sfc_dp_tx_qsize_up_rings_t	*qsize_up_rings;
 	sfc_dp_tx_qcreate_t		*qcreate;
@@ -220,7 +221,22 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 
 	if (m->ol_flags & PKT_TX_TCP_SEG) {
 		unsigned int tcph_off = m->l2_len + m->l3_len;
-		unsigned int header_len = tcph_off + m->l4_len;
+		unsigned int header_len;
+
+		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+		case 0:
+			break;
+		case PKT_TX_TUNNEL_VXLAN:
+			/* FALLTHROUGH */
+		case PKT_TX_TUNNEL_GENEVE:
+			if (!(m->ol_flags &
+			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+				return EINVAL;
+
+			tcph_off += m->outer_l2_len + m->outer_l3_len;
+		}
+
+		header_len = tcph_off + m->l4_len;
 
 		if (unlikely(tcph_off > tso_tcp_header_offset_limit))
 			return EINVAL;
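The tunnel-aware offset arithmetic in the sfc_dp_tx_prepare_pkt() hunk above can be sketched standalone. The snippet below mirrors the hunk's logic with a simplified mbuf-like struct; the struct, the flag values, and the helper names are illustrative stand-ins, not the real rte_mbuf definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the PKT_TX_TUNNEL_* flag bits. */
#define TX_TUNNEL_VXLAN   0x1u
#define TX_TUNNEL_GENEVE  0x2u
#define TX_TUNNEL_MASK    0x3u

struct tso_meta {
	uint64_t ol_flags;    /* tunnel type bits, if any */
	size_t outer_l2_len;  /* outer Ethernet header */
	size_t outer_l3_len;  /* outer IP header */
	size_t l2_len;        /* inner L2 (incl. tunnel headers, per DPDK convention) */
	size_t l3_len;        /* inner IP header */
	size_t l4_len;        /* TCP header */
};

/*
 * Offset of the TCP header from the start of the frame: for an
 * encapsulated packet, the outer L2/L3 headers precede the inner
 * ones, exactly as in the hunk above.
 */
static size_t
tso_tcph_off(const struct tso_meta *m)
{
	size_t off = m->l2_len + m->l3_len;

	if (m->ol_flags & TX_TUNNEL_MASK)
		off += m->outer_l2_len + m->outer_l3_len;
	return off;
}

/* Total header length to be replicated in front of every TSO segment. */
static size_t
tso_header_len(const struct tso_meta *m)
{
	return tso_tcph_off(m) + m->l4_len;
}
```

Both the TCP header offset limit check and the header length check in the real code operate on these two derived values.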
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index bcbd15d55..055389efe 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -366,13 +366,16 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
 		      bool *reap_done)
 {
-	size_t iph_off = m_seg->l2_len;
-	size_t tcph_off = m_seg->l2_len + m_seg->l3_len;
-	size_t header_len = m_seg->l2_len + m_seg->l3_len + m_seg->l4_len;
+	size_t iph_off = ((m_seg->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			  m_seg->outer_l2_len + m_seg->outer_l3_len : 0) +
+			 m_seg->l2_len;
+	size_t tcph_off = iph_off + m_seg->l3_len;
+	size_t header_len = tcph_off + m_seg->l4_len;
 	/* Offset of the payload in the last segment that contains the header */
 	size_t in_off = 0;
 	const struct tcp_hdr *th;
 	uint16_t packet_id = 0;
+	uint16_t outer_packet_id = 0;
 	uint32_t sent_seq;
 	uint8_t *hdr_addr;
 	rte_iova_t hdr_iova;
@@ -482,12 +485,16 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	if (first_m_seg->ol_flags & PKT_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
+	if (first_m_seg->ol_flags & PKT_TX_OUTER_IPV4)
+		outer_packet_id = sfc_tso_ip4_get_ipid(hdr_addr,
+						first_m_seg->outer_l2_len);
+
 	th = (const struct tcp_hdr *)(hdr_addr + tcph_off);
 	rte_memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
 	sent_seq = rte_be_to_cpu_32(sent_seq);
 
-	sfc_ef10_tx_qdesc_tso2_create(txq, *added, packet_id, 0, sent_seq,
-			first_m_seg->tso_segsz);
+	sfc_ef10_tx_qdesc_tso2_create(txq, *added, packet_id, outer_packet_id,
+			sent_seq, first_m_seg->tso_segsz);
 	(*added) += SFC_EF10_TSO_OPT_DESCS_NUM;
 
 	sfc_ef10_tx_qdesc_dma_create(hdr_iova, header_len, false,
@@ -927,7 +934,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
+			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1090,6 +1099,7 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_TSO |
+				  SFC_DP_TX_FEAT_TSO_ENCAP |
 				  SFC_DP_TX_FEAT_MULTI_SEG |
 				  SFC_DP_TX_FEAT_MULTI_POOL |
 				  SFC_DP_TX_FEAT_REFCNT |
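The hunk above fetches an outer IPv4 packet ID for the TSO option descriptor via sfc_tso_ip4_get_ipid(). A hedged sketch of what such a helper is expected to do follows; this is an illustration of reading the IPv4 Identification field at its fixed offset, and the real driver helper may differ in details:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Read the 16-bit Identification field of an IPv4 header that starts
 * iph_off bytes into the frame, returning it in host byte order.
 * The Identification field sits at byte offset 4 within the IPv4
 * header (RFC 791). Byte-by-byte access avoids unaligned 16-bit loads.
 */
static uint16_t
ip4_get_ipid(const uint8_t *hdr, size_t iph_off)
{
	const uint8_t *id = hdr + iph_off + 4;

	return (uint16_t)(((uint16_t)id[0] << 8) | id[1]);
}
```

In the patch, the same extraction is done twice for tunneled packets: once at the inner IP offset (packet_id) and once at the outer IP offset just past the outer Ethernet header (outer_packet_id).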
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 4037802e6..e1ef00cc7 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -70,6 +70,10 @@ sfc_tx_get_queue_offload_caps(struct sfc_adapter *sa)
 	if (sa->tso)
 		caps |= DEV_TX_OFFLOAD_TCP_TSO;
 
+	if (sa->tso_encap)
+		caps |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+			 DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+
 	return caps;
 }
 
@@ -469,7 +473,9 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
+				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -588,18 +594,25 @@ int
 sfc_tx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	unsigned int sw_index;
 	int rc = 0;
 
 	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
 
 	if (sa->tso) {
-		if (!efx_nic_cfg_get(sa->nic)->enc_fw_assisted_tso_v2_enabled) {
+		if (!encp->enc_fw_assisted_tso_v2_enabled) {
 			sfc_warn(sa, "TSO support was unable to be restored");
 			sa->tso = B_FALSE;
+			sa->tso_encap = B_FALSE;
 		}
 	}
 
+	if (sa->tso_encap && !encp->enc_fw_assisted_tso_v2_encap_enabled) {
+		sfc_warn(sa, "Encapsulated TSO support was unable to be restored");
+		sa->tso_encap = B_FALSE;
+	}
+
 	rc = efx_tx_init(sa->nic);
 	if (rc != 0)
 		goto fail_efx_tx_init;
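The sfc_tx_qstart() change above gates the FATSOv2 queue flag on any of three offload bits at once. The pattern can be shown in isolation; the flag values below are illustrative, not the real DEV_TX_OFFLOAD_* values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the DEV_TX_OFFLOAD_* bits. */
#define OFFLOAD_TCP_TSO        0x1u
#define OFFLOAD_VXLAN_TNL_TSO  0x2u
#define OFFLOAD_GENEVE_TNL_TSO 0x4u

/*
 * Mirrors the sfc_tx_qstart() logic: FATSOv2 descriptors are needed
 * if the queue requests any flavour of TSO, plain or tunneled.
 */
static int
txq_needs_fatso(uint64_t offloads)
{
	return (offloads & (OFFLOAD_TCP_TSO |
			    OFFLOAD_VXLAN_TNL_TSO |
			    OFFLOAD_GENEVE_TNL_TSO)) != 0;
}
```

Testing the whole mask in one expression keeps plain and encapsulated TSO on the same queue-creation path, which matches the sfc_tx_start() hunk where losing base TSO support also drops tso_encap.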
-- 
2.17.1


* Re: [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO
  2019-04-02  9:28 [dpdk-dev] [PATCH 00/12] net/sfc: add Tx prepare and encapsulated TSO Andrew Rybchenko
                   ` (12 preceding siblings ...)
  2019-04-02  9:28 ` [dpdk-dev] [PATCH 12/12] net/sfc: support tunnel TSO on EF10 native Tx datapath Andrew Rybchenko
@ 2019-04-03 18:03 ` Ferruh Yigit
  2019-04-03 18:03   ` Ferruh Yigit
  13 siblings, 1 reply; 28+ messages in thread
From: Ferruh Yigit @ 2019-04-03 18:03 UTC (permalink / raw)
  To: Andrew Rybchenko, dev

On 4/2/2019 10:28 AM, Andrew Rybchenko wrote:
> Move and add missing Tx offload checks to the Tx prepare stage.
> Keep only the absolutely required checks in Tx burst to avoid memory
> corruption and segmentation faults.
> 
> There are a few checkpatches.sh warnings since positive errno values
> are used inside the driver.
> 
> The patch series depends on [1] and should be applied only after it.
> [1] is acked by Olivier and was also acked by Konstantin Ananyev at the
> RFC stage, with the note that more testing is required.
> 
> [1] https://patches.dpdk.org/patch/51908/
> 
> Igor Romanov (9):
>   net/sfc: improve TSO header length check in EFX datapath
>   net/sfc: improve TSO header length check in EF10 datapath
>   net/sfc: make TSO descriptor numbers EF10-specific
>   net/sfc: support Tx preparation in EFX datapath
>   net/sfc: support Tx preparation in EF10 datapath
>   net/sfc: support Tx preparation in EF10 simple datapath
>   net/sfc: move TSO header checks from Tx burst to Tx prepare
>   net/sfc: introduce descriptor space check in Tx prepare
>   net/sfc: add TSO header length check to Tx prepare
> 
> Ivan Malov (3):
>   net/sfc: factor out function to get IPv4 packet ID for TSO
>   net/sfc: improve log message about missing HW TSO support
>   net/sfc: support tunnel TSO on EF10 native Tx datapath

Series applied to dpdk-next-net/master, thanks.
