DPDK patches and discussions
* Re: [PATCH 2/2] net/iavf: support inner and outer checksum offload
@ 2022-08-30  2:22 Xu, Ke1
  0 siblings, 0 replies; 4+ messages in thread
From: Xu, Ke1 @ 2022-08-30  2:22 UTC (permalink / raw)
  To: Zhang, Peng1X; +Cc: Xing, Beilei, dev, Wu, Jingjing


> Subject: [PATCH 2/2] net/iavf: support inner and outer checksum offload
> Date: Sat, 13 Aug 2022 00:52:23 +0800
> Message-ID: <20220812165223.470777-2-peng1x.zhang@intel.com> (raw)
> In-Reply-To: <20220812165223.470777-1-peng1x.zhang@intel.com>
>
> From: Peng Zhang <peng1x.zhang@intel.com>
>
> Add support for inner and outer Tx checksum offload for tunneled
> packets by configuring the tunneling parameters in the Tx descriptors,
> including outer L3/L4 checksum offload.
>
> Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>

Tested and passed.

Regards,
Tested-by: Ke Xu <ke1.xu@intel.com>

> ---
>  drivers/net/iavf/iavf_ethdev.c |  3 +-
>  drivers/net/iavf/iavf_rxtx.c   | 51 +++++++++++++++++++++++++++++++---
>  drivers/net/iavf/iavf_rxtx.h   |  8 +++++-
>  3 files changed, 56 insertions(+), 6 deletions(-)

* [PATCH 2/2] net/iavf: support inner and outer checksum offload
@ 2022-08-26 14:46 Buckley, Daniel M
  0 siblings, 0 replies; 4+ messages in thread
From: Buckley, Daniel M @ 2022-08-26 14:46 UTC (permalink / raw)
  To: dev, Jiang, YuX


From patchwork Fri Aug 12 16:52:23 2022
From: peng1x.zhang@intel.com
To: dev@dpdk.org
Cc: beilei.xing@intel.com, jingjing.wu@intel.com,
 Peng Zhang <peng1x.zhang@intel.com>
Subject: [PATCH 2/2] net/iavf: support inner and outer checksum offload
Date: Sat, 13 Aug 2022 00:52:23 +0800
Message-Id: <20220812165223.470777-2-peng1x.zhang@intel.com>
In-Reply-To: <20220812165223.470777-1-peng1x.zhang@intel.com>
References: <20220812165223.470777-1-peng1x.zhang@intel.com>

From: Peng Zhang <peng1x.zhang@intel.com>

Add support for inner and outer Tx checksum offload for tunneled
packets by configuring the tunneling parameters in the Tx descriptors,
including outer L3/L4 checksum offload.

Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
Tested-by: Daniel M Buckley <daniel.m.buckley@intel.com>
---
 drivers/net/iavf/iavf_ethdev.c |  3 +-
 drivers/net/iavf/iavf_rxtx.c   | 51 +++++++++++++++++++++++++++++++---
 drivers/net/iavf/iavf_rxtx.h   |  8 +++++-
 3 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 506fcff6e3..59238ecceb 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1140,7 +1140,8 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
            RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
            RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
            RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
-           RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+           RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+           RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;

      if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
            dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index adec58e90a..7cbebafc09 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2334,7 +2334,7 @@ static inline uint16_t
 iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
      if (flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG |
-                 RTE_MBUF_F_TX_TUNNEL_MASK))
+         RTE_MBUF_F_TX_TUNNEL_MASK | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
            return 1;
      if (flags & RTE_MBUF_F_TX_VLAN &&
          vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
@@ -2399,6 +2399,44 @@ iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t *qw0,
      break;
      }

+     /* L4TUNT: L4 Tunneling Type */
+     switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+     case RTE_MBUF_F_TX_TUNNEL_IPIP:
+           /* for non UDP / GRE tunneling, set to 00b */
+           break;
+     case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+     case RTE_MBUF_F_TX_TUNNEL_GTP:
+     case RTE_MBUF_F_TX_TUNNEL_GENEVE:
+           eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING;
+           break;
+     case RTE_MBUF_F_TX_TUNNEL_GRE:
+           eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING;
+           break;
+     default:
+           PMD_TX_LOG(ERR, "Tunnel type not supported");
+           return;
+     }
+
+     /* L4TUNLEN: L4 Tunneling Length, in Words
+      *
+      * We depend on app to set rte_mbuf.l2_len correctly.
+      * For IP in GRE it should be set to the length of the GRE
+      * header;
+      * For MAC in GRE or MAC in UDP it should be set to the length
+      * of the GRE or UDP headers plus the inner MAC up to including
+      * its last Ethertype.
+      * If MPLS labels exists, it should include them as well.
+      */
+     eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT;
+
+     /**
+      * Calculate the tunneling UDP checksum.
+      * Shall be set only if L4TUNT = 01b and EIPT is not zero
+      */
+     if (!(eip_typ & IAVF_TX_CTX_EXT_IP_NONE) &&
+         (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING))
+           eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK;
+
      *qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT |
            eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT |
            eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT;
@@ -2417,7 +2455,7 @@ iavf_fill_ctx_desc_segmentation_field(volatile uint64_t *field,
            total_length = m->pkt_len - (m->l2_len + m->l3_len + m->l4_len);

            if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
-                 total_length -= m->outer_l3_len;
+                 total_length -= m->outer_l3_len +  m->outer_l2_len;
      }

 #ifdef RTE_LIBRTE_IAVF_DEBUG_TX
@@ -2535,8 +2573,13 @@ iavf_build_data_desc_cmd_offset_fields(volatile uint64_t *qw1,
      }

      /* Set MACLEN */
-     offset |= (m->l2_len >> 1) << IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
-
+     if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
+           offset |= (m->outer_l2_len >> 1)
+                 << IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+     else
+           offset |= (m->l2_len >> 1)
+                 << IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
      /* Enable L3 checksum offloading inner */
      if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
            if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 1695e43cd5..4b40ad3615 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -26,6 +26,8 @@
 #define IAVF_TX_NO_VECTOR_FLAGS (                     \
            RTE_ETH_TX_OFFLOAD_MULTI_SEGS |            \
            RTE_ETH_TX_OFFLOAD_TCP_TSO |         \
+           RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |    \
+           RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |      \
            RTE_ETH_TX_OFFLOAD_SECURITY)

 #define IAVF_TX_VECTOR_OFFLOAD (                      \
@@ -56,7 +58,8 @@
 #define IAVF_TX_CKSUM_OFFLOAD_MASK (            \
            RTE_MBUF_F_TX_IP_CKSUM |             \
            RTE_MBUF_F_TX_L4_MASK |        \
-           RTE_MBUF_F_TX_TCP_SEG)
+           RTE_MBUF_F_TX_TCP_SEG |          \
+           RTE_MBUF_F_TX_OUTER_IP_CKSUM)

 #define IAVF_TX_OFFLOAD_MASK (  \
            RTE_MBUF_F_TX_OUTER_IPV6 |           \
@@ -67,6 +70,9 @@
            RTE_MBUF_F_TX_IP_CKSUM |             \
            RTE_MBUF_F_TX_L4_MASK |        \
            RTE_MBUF_F_TX_TCP_SEG |        \
+           RTE_MBUF_F_TX_TUNNEL_MASK |   \
+           RTE_MBUF_F_TX_OUTER_IP_CKSUM |  \
+           RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
            RTE_ETH_TX_OFFLOAD_SECURITY)

 #define IAVF_TX_OFFLOAD_NOTSUP_MASK \


* [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
@ 2022-08-12 16:52 peng1x.zhang
  2022-08-12 16:52 ` [PATCH 2/2] net/iavf: support inner and outer checksum offload peng1x.zhang
  0 siblings, 1 reply; 4+ messages in thread
From: peng1x.zhang @ 2022-08-12 16:52 UTC (permalink / raw)
  To: dev; +Cc: beilei.xing, jingjing.wu, Peng Zhang

From: Peng Zhang <peng1x.zhang@intel.com>

The hardware limits the maximum buffer size per Tx descriptor to (16K-1) bytes.
When TSO is enabled in the unencrypted (non-IPsec) case, an mbuf segment may
exceed this limit, which the NIC treats as a malicious request.

This patch splits such large buffers across multiple Tx descriptors.

Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
 drivers/net/iavf/iavf_rxtx.c | 66 ++++++++++++++++++++++++++++++++----
 1 file changed, 60 insertions(+), 6 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index dfd021889e..adec58e90a 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2642,6 +2642,47 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
 	return NULL;
 }
 
+/* HW requires that TX buffer size ranges from 1B up to (16K-1)B. */
+#define IAVF_MAX_DATA_PER_TXD \
+	(IAVF_TXD_QW1_TX_BUF_SZ_MASK >> IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+static inline void
+iavf_fill_unencrypt_desc(volatile struct iavf_tx_desc *txd, struct rte_mbuf *m,
+			 volatile uint64_t desc_template, struct iavf_tx_entry *txe,
+			 volatile struct iavf_tx_desc *txr, struct iavf_tx_entry *txe_ring,
+			 int desc_idx_last)
+{
+		/* Setup TX Descriptor */
+		int desc_idx;
+		uint16_t slen = m->data_len;
+		uint64_t buf_dma_addr = rte_mbuf_data_iova(m);
+		struct iavf_tx_entry *txn = &txe_ring[txe->next_id];
+
+		while ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
+			unlikely(slen > IAVF_MAX_DATA_PER_TXD)) {
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+
+			txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DATA |
+			(uint64_t)IAVF_MAX_DATA_PER_TXD <<
+			IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) | desc_template;
+
+			buf_dma_addr += IAVF_MAX_DATA_PER_TXD;
+			slen -= IAVF_MAX_DATA_PER_TXD;
+
+			txe->last_id = desc_idx_last;
+			desc_idx = txe->next_id;
+			txe = txn;
+			txd = &txr[desc_idx];
+			txn = &txe_ring[txe->next_id];
+		}
+
+		txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64((uint64_t)slen << IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) |
+				desc_template;
+}
+
 /* TX function */
 uint16_t
 iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2650,6 +2691,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	volatile struct iavf_tx_desc *txr = txq->tx_ring;
 	struct iavf_tx_entry *txe_ring = txq->sw_ring;
 	struct iavf_tx_entry *txe, *txn;
+	volatile struct iavf_tx_desc *txd;
 	struct rte_mbuf *mb, *mb_seg;
 	uint16_t desc_idx, desc_idx_last;
 	uint16_t idx;
@@ -2781,6 +2823,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			ddesc = (volatile struct iavf_tx_desc *)
 					&txr[desc_idx];
 
+			txd = &txr[desc_idx];
 			txn = &txe_ring[txe->next_id];
 			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
 
@@ -2788,10 +2831,16 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				rte_pktmbuf_free_seg(txe->mbuf);
 
 			txe->mbuf = mb_seg;
-			iavf_fill_data_desc(ddesc, mb_seg,
-					ddesc_template, tlen, ipseclen);
 
-			IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
+			if (nb_desc_ipsec) {
+				iavf_fill_data_desc(ddesc, mb_seg,
+					ddesc_template, tlen, ipseclen);
+				IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
+			} else {
+				iavf_fill_unencrypt_desc(txd, mb_seg,
+					ddesc_template, txe, txr, txe_ring, desc_idx_last);
+				IAVF_DUMP_TX_DESC(txq, txd, desc_idx);
+			}
 
 			txe->last_id = desc_idx_last;
 			desc_idx = txe->next_id;
@@ -2816,10 +2865,15 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			txq->nb_used = 0;
 		}
 
-		ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
+		if (nb_desc_ipsec) {
+			ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
 				IAVF_TXD_DATA_QW1_CMD_SHIFT);
-
-		IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
+			IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
+		} else {
+			txd->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
+				IAVF_TXD_DATA_QW1_CMD_SHIFT);
+			IAVF_DUMP_TX_DESC(txq, txd, desc_idx - 1);
+		}
 	}
 
 end_of_tx:
-- 
2.25.1



end of thread, other threads:[~2022-08-30  8:12 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-30  2:22 [PATCH 2/2] net/iavf: support inner and outer checksum offload Xu, Ke1
  -- strict thread matches above, loose matches on Subject: below --
2022-08-26 14:46 Buckley, Daniel M
2022-08-12 16:52 [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases peng1x.zhang
2022-08-12 16:52 ` [PATCH 2/2] net/iavf: support inner and outer checksum offload peng1x.zhang
2022-08-30  8:12   ` Yang, Qiming
