From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhichao Zeng <zhichaox.zeng@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, ke1.xu@intel.com, Zhichao Zeng, Qiming Yang
Subject: [PATCH 3/4] net/ice: enable UDP fragmentation offload
Date: Thu, 13 Apr 2023 13:34:45 +0800
Message-Id:
<20230413053445.4191148-1-zhichaox.zeng@intel.com>
X-Mailer: git-send-email 2.25.1

This patch enables transmit segmentation offload for UDP, known as UDP
fragmentation offload (UFO), for both non-tunneled and tunneled packets.

In testpmd, the command "tso set <tso_segsz> <port_id>" or
"tunnel_tso set <tso_segsz> <port_id>" is used to enable UFO.

Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
---
 drivers/net/ice/ice_rxtx.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0ea0045836..ed4d27389a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -12,6 +12,7 @@
 #define ICE_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |	\
 		RTE_MBUF_F_TX_L4_MASK |	\
 		RTE_MBUF_F_TX_TCP_SEG |	\
+		RTE_MBUF_F_TX_UDP_SEG |	\
 		RTE_MBUF_F_TX_OUTER_IP_CKSUM)

 /**
@@ -2767,6 +2768,13 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 		return;
 	}

+	if (ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
 	/* Enable L4 checksum offloads */
 	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 	case RTE_MBUF_F_TX_TCP_CKSUM:
@@ -2858,6 +2866,7 @@ static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
 {
 	static uint64_t mask = RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_UDP_SEG |
 		RTE_MBUF_F_TX_QINQ |
 		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
 		RTE_MBUF_F_TX_TUNNEL_MASK |
@@ -2966,7 +2975,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
		 * the mbuf data size exceeds max data size that hw allows
		 * per tx desc.
		 */
-		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		if (ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))
 			nb_used = (uint16_t)(ice_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -3026,7 +3035,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			txe->mbuf = NULL;
 		}

-		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		if (ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))
 			cd_type_cmd_tso_mss |=
 				ice_set_tso_ctx(tx_pkt, tx_offload);
 		else if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
@@ -3066,7 +3075,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);

-			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
+			while ((ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) &&
 				unlikely(slen > ICE_MAX_DATA_PER_TXD)) {
 				txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
 				txd->cmd_type_offset_bsz =
-- 
2.25.1