From: Kevin Liu
To: dev@dpdk.org
Cc: Qi Zhang, stevex.yang@intel.com, Kevin Liu, stable@dpdk.org
Subject: [PATCH] net/ice: fix Tx offload path choice
Date: Fri, 24 Dec 2021 15:09:25 +0000
Message-Id: <20211224150925.3296471-1-kevinx.liu@intel.com>
X-Mailer: git-send-email 2.33.1

In checksum forwarding mode, testpmd has to calculate the checksum of
every protocol layer of the packets it forwards. When the outer UDP
checksum is configured for hardware calculation and the outer IP
checksum for software calculation, ice_set_tx_function() sets
dev->tx_pkt_burst to ice_xmit_pkts_vec_avx2, and both the inner and
outer UDP checksums of forwarded tunnel packets come out wrong;
dev->tx_pkt_burst should be ice_xmit_pkts instead. This patch adds
RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM to ICE_TX_NO_VECTOR_FLAGS so that
dev->tx_pkt_burst falls back to ice_xmit_pkts, after which the inner
and outer UDP checksums of forwarded tunnel packets are correct.

In addition, the earlier patch "net/ice: fix Tx Checksum offload"
causes interrupt errors in the special case where only the inner IP
and inner UDP checksums are set for hardware calculation. Since
updating ICE_TX_NO_VECTOR_FLAGS already solves that problem, this
patch also reverts the code changes introduced by that patch.
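For context, below is a minimal standalone sketch (not the driver's
actual code) of the path-selection rule described above: a queue whose
configured Tx offloads intersect ICE_TX_NO_VECTOR_FLAGS must be served
by the scalar ice_xmit_pkts rather than a vector burst function. The
flag values and the helper tx_path_needs_scalar() are illustrative
stand-ins, not DPDK definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the real RTE_ETH_TX_OFFLOAD_* flag bits. */
#define TX_OFFLOAD_MULTI_SEGS        (1ULL << 0)
#define TX_OFFLOAD_OUTER_IPV4_CKSUM  (1ULL << 1)
#define TX_OFFLOAD_TCP_TSO           (1ULL << 2)
#define TX_OFFLOAD_OUTER_UDP_CKSUM   (1ULL << 3) /* the flag this patch adds */

/* Offloads the vector Tx paths cannot handle: they force the scalar path. */
#define TX_NO_VECTOR_FLAGS (TX_OFFLOAD_MULTI_SEGS |       \
			    TX_OFFLOAD_OUTER_IPV4_CKSUM | \
			    TX_OFFLOAD_TCP_TSO |          \
			    TX_OFFLOAD_OUTER_UDP_CKSUM)

/* True when the queue's offloads require the scalar ice_xmit_pkts routine. */
static bool
tx_path_needs_scalar(uint64_t tx_offloads)
{
	return (tx_offloads & TX_NO_VECTOR_FLAGS) != 0;
}

int
main(void)
{
	/* Outer UDP checksum offloaded to hardware, as in the bug report. */
	uint64_t offloads = TX_OFFLOAD_OUTER_UDP_CKSUM;

	printf("use %s Tx path\n",
	       tx_path_needs_scalar(offloads) ? "scalar" : "vector");
	return 0;
}

With RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM included in the mask, enabling
the outer UDP checksum offload alone is enough to select ice_xmit_pkts,
which is the behaviour this patch relies on.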
Fixes: e6b9d6411e91 ("app/testpmd: add SW L4 checksum in multi-segments")
Fixes: 28f9002ab67f ("net/ice: add Tx AVX512 offload path")
Fixes: 295968d17407 ("ethdev: add namespace")
Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Kevin Liu
---
 app/test-pmd/csumonly.c               |  6 +--
 drivers/net/ice/ice_rxtx.c            | 41 ++++++-------------
 drivers/net/ice/ice_rxtx_vec_common.h | 59 +++++++++------------------
 3 files changed, 34 insertions(+), 72 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 02bc3929c7..c235456e58 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -513,7 +513,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 		} else {
 			if (info->is_tunnel)
-				l4_off = info->l2_len +
+				l4_off = info->outer_l2_len +
 					info->outer_l3_len +
 					info->l2_len + info->l3_len;
 			else
@@ -536,7 +536,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 	} else {
 		if (info->is_tunnel)
-			l4_off = info->l2_len + info->outer_l3_len +
+			l4_off = info->outer_l2_len + info->outer_l3_len +
 				info->l2_len + info->l3_len;
 		else
 			l4_off = info->l2_len + info->l3_len;
@@ -625,7 +625,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 	if (udp_hdr->dgram_cksum != 0) {
 		udp_hdr->dgram_cksum = 0;
 		udp_hdr->dgram_cksum = get_udptcp_checksum(m, outer_l3_hdr,
-					info->l2_len + info->outer_l3_len,
+					info->outer_l2_len + info->outer_l3_len,
 					info->outer_ethertype);
 	}
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 4f218bcd0d..041f4bc91f 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2501,35 +2501,18 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offloads */
-	/*Tunnel package usage outer len enable L3 checksum offload*/
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
-			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
-			*td_offset |= (tx_offload.outer_l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
-			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
-			*td_offset |= (tx_offload.outer_l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
-			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
-			*td_offset |= (tx_offload.outer_l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		}
-	} else {
-		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
-			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
-			*td_offset |= (tx_offload.l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
-			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
-			*td_offset |= (tx_offload.l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
-			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
-			*td_offset |= (tx_offload.l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		}
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
 	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
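To make the csumonly.c change above concrete, the following standalone
sketch (illustrative types and values, not testpmd's) computes the
inner L4 offset of a tunnelled frame from the outer L2/L3 and inner
L2/L3 lengths, as the patched code does. The old code in the hunks
above started from the inner l2_len instead of outer_l2_len, so the
inner checksum was computed at the wrong offset whenever those two
lengths differ.

#include <stdint.h>
#include <stdio.h>

struct offload_info {          /* stand-in for testpmd_offload_info */
	uint16_t outer_l2_len; /* outer Ethernet (+ VLAN) */
	uint16_t outer_l3_len; /* outer IPv4/IPv6 */
	uint16_t l2_len;       /* tunnel header + inner Ethernet */
	uint16_t l3_len;       /* inner IPv4/IPv6 */
};

/* Offset of the inner L4 header from the start of the frame,
 * as computed by the patched testpmd code. */
static unsigned int
inner_l4_offset(const struct offload_info *info)
{
	return (unsigned int)info->outer_l2_len + info->outer_l3_len +
	       info->l2_len + info->l3_len;
}

int
main(void)
{
	/* Outer Ethernet + VLAN (18), outer IPv4 (20),
	 * UDP + VXLAN + inner Ethernet (8 + 8 + 14 = 30), inner IPv4 (20). */
	struct offload_info info = { 18, 20, 30, 20 };

	/* Correct offset: 18 + 20 + 30 + 20 = 88.
	 * The old code started from l2_len (30) instead of outer_l2_len (18)
	 * and would compute 100, pointing past the real inner L4 header. */
	printf("inner L4 offset = %u\n", inner_l4_offset(&info));
	return 0;
}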
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 8ff01046e1..2dd2d83650 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -250,7 +250,8 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 #define ICE_TX_NO_VECTOR_FLAGS (			\
 		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
 		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		RTE_ETH_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
 		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
@@ -364,45 +365,23 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 	uint32_t td_offset = 0;
 
 	/* Tx Checksum Offload */
-	/*Tunnel package usage outer len enable L2/L3 checksum offload*/
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-		/* SET MACLEN */
-		td_offset |= (tx_pkt->outer_l2_len >> 1) <<
-			ICE_TX_DESC_LEN_MACLEN_S;
-
-		/* Enable L3 checksum offload */
-		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
-			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
-			td_offset |= (tx_pkt->outer_l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
-			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
-			td_offset |= (tx_pkt->outer_l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
-			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
-			td_offset |= (tx_pkt->outer_l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		}
-	} else {
-		/* SET MACLEN */
-		td_offset |= (tx_pkt->l2_len >> 1) <<
-			ICE_TX_DESC_LEN_MACLEN_S;
-
-		/* Enable L3 checksum offload */
-		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
-			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
-			td_offset |= (tx_pkt->l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
-			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
-			td_offset |= (tx_pkt->l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
-			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
-			td_offset |= (tx_pkt->l3_len >> 2) <<
-				ICE_TX_DESC_LEN_IPLEN_S;
-		}
+	/* SET MACLEN */
+	td_offset |= (tx_pkt->l2_len >> 1) <<
+		ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offload */
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		td_offset |= (tx_pkt->l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		td_offset |= (tx_pkt->l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
+		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		td_offset |= (tx_pkt->l3_len >> 2) <<
+			ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
 	/* Enable L4 checksum offloads */
-- 
2.33.1