From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
To: Jiawen Wu
Cc: dpdk stable
Subject: patch 'net/ngbe: fix packet type to parse from offload flags' has been
 queued to stable release 21.11.4
Date: Thu, 23 Feb 2023 15:05:45 +0000
Message-Id: <20230223150631.723699-54-ktraynor@redhat.com>
In-Reply-To: <20230223150631.723699-1-ktraynor@redhat.com>
References: <20230223150631.723699-1-ktraynor@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"; x-default=true
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 21.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/28/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/f8e27fb215b6abd12293b6cf16ceeeb2d3c709b1

Thanks.

Kevin

---
>From f8e27fb215b6abd12293b6cf16ceeeb2d3c709b1 Mon Sep 17 00:00:00 2001
From: Jiawen Wu
Date: Thu, 2 Feb 2023 17:21:27 +0800
Subject: [PATCH] net/ngbe: fix packet type to parse from offload flags

[ upstream commit fa402b1e02b71a20aabef27733cbb75431363620 ]

Context descriptors, which contain the length of each packet layer and
the packet type, are needed when Tx checksum offload or TSO is on. If
the packet type and lengths do not strictly match, the Tx ring will hang.
In some external applications, developers may fill in the wrong
packet_type in rte_mbuf on the Tx path. For example, they encap/decap
the packets but do not refill the packet_type. To prevent this, parse
the packet type from ol_flags instead.

Also remove the redundant tunnel type handling, since the NIC does not
support it.

Fixes: 9f3206140274 ("net/ngbe: support TSO")

Signed-off-by: Jiawen Wu
---
 drivers/net/ngbe/ngbe_rxtx.c | 90 +++++++++---------------------------
 1 file changed, 22 insertions(+), 68 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 86a5ef5486..b4f7e6c9d5 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -25,6 +25,4 @@
 /* Bit Mask to indicate what bits required for building Tx context */
 static const u64 NGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
-		RTE_MBUF_F_TX_OUTER_IPV6 |
-		RTE_MBUF_F_TX_OUTER_IPV4 |
 		RTE_MBUF_F_TX_IPV6 |
 		RTE_MBUF_F_TX_IPV4 |
@@ -32,6 +30,4 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
 		RTE_MBUF_F_TX_L4_MASK |
 		RTE_MBUF_F_TX_TCP_SEG |
-		RTE_MBUF_F_TX_TUNNEL_MASK |
-		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
 		NGBE_TX_IEEE1588_TMST);
@@ -334,26 +330,5 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
 	vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
-
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-		tx_offload_mask.outer_tun_len |= ~0;
-		tx_offload_mask.outer_l2_len |= ~0;
-		tx_offload_mask.outer_l3_len |= ~0;
-		tx_offload_mask.l2_len |= ~0;
-		tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
-		tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
-
-		switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-		case RTE_MBUF_F_TX_TUNNEL_IPIP:
-			/* for non UDP / GRE tunneling, set to 0b */
-			break;
-		default:
-			PMD_TX_LOG(ERR, "Tunnel type not supported");
-			return;
-		}
-		vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
-	} else {
-		tunnel_seed = 0;
-		vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
-	}
+	vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
 
 	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
@@ -362,4 +337,6 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
 	}
 
+	tunnel_seed = 0;
+
 	txq->ctx_cache[ctx_idx].flags = ol_flags;
 	txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -450,14 +427,8 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 }
 
-static inline uint8_t
-tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+static inline uint32_t
+tx_desc_ol_flags_to_ptype(uint64_t oflags)
 {
-	bool tun;
-
-	if (ptype)
-		return ngbe_encode_ptype(ptype);
-
-	/* Only support flags in NGBE_TX_OFFLOAD_MASK */
-	tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
+	uint32_t ptype;
 
 	/* L2 level */
@@ -467,39 +438,34 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
 
 	/* L3 level */
-	if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
-		ptype |= RTE_PTYPE_L3_IPV4;
-	else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
-		ptype |= RTE_PTYPE_L3_IPV6;
 	if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
-		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
+		ptype |= RTE_PTYPE_L3_IPV4;
 	else if (oflags & (RTE_MBUF_F_TX_IPV6))
-		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
+		ptype |= RTE_PTYPE_L3_IPV6;
 
 	/* L4 level */
 	switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
 	case RTE_MBUF_F_TX_TCP_CKSUM:
-		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+		ptype |= RTE_PTYPE_L4_TCP;
 		break;
 	case RTE_MBUF_F_TX_UDP_CKSUM:
-		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
+		ptype |= RTE_PTYPE_L4_UDP;
 		break;
 	case RTE_MBUF_F_TX_SCTP_CKSUM:
-		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
+		ptype |= RTE_PTYPE_L4_SCTP;
 		break;
 	}
 
 	if (oflags & RTE_MBUF_F_TX_TCP_SEG)
-		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+		ptype |= RTE_PTYPE_L4_TCP;
 
-	/* Tunnel */
-	switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-	case RTE_MBUF_F_TX_TUNNEL_IPIP:
-	case RTE_MBUF_F_TX_TUNNEL_IP:
-		ptype |= RTE_PTYPE_L2_ETHER |
-			 RTE_PTYPE_L3_IPV4 |
-			 RTE_PTYPE_TUNNEL_IP;
-		break;
-	}
+	return ptype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags)
+{
+	uint32_t ptype;
+
+	ptype = tx_desc_ol_flags_to_ptype(oflags);
 
 	return ngbe_encode_ptype(ptype);
@@ -623,6 +589,5 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
 		if (tx_ol_req) {
-			tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
-					tx_pkt->packet_type);
+			tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req);
 			tx_offload.l2_len = tx_pkt->l2_len;
 			tx_offload.l3_len = tx_pkt->l3_len;
@@ -630,7 +595,4 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			tx_offload.vlan_tci = tx_pkt->vlan_tci;
 			tx_offload.tso_segsz = tx_pkt->tso_segsz;
-			tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
-			tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-			tx_offload.outer_tun_len = 0;
 
 			/* If new context need be built or reuse the exist ctx*/
@@ -753,8 +715,4 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				pkt_len -= (tx_offload.l2_len +
 					tx_offload.l3_len + tx_offload.l4_len);
-				pkt_len -=
-					(tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
-					? tx_offload.outer_l2_len +
-					  tx_offload.outer_l3_len : 0;
 			}
@@ -1940,10 +1898,6 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM  |
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
-		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
 		RTE_ETH_TX_OFFLOAD_TCP_TSO |
 		RTE_ETH_TX_OFFLOAD_UDP_TSO |
-		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
-		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
-		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
 		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
-- 
2.39.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2023-02-23 14:46:25.115031246 +0000
+++ 0054-net-ngbe-fix-packet-type-to-parse-from-offload-flags.patch	2023-02-23 14:46:23.786236022 +0000
@@ -1 +1 @@
-From fa402b1e02b71a20aabef27733cbb75431363620 Mon Sep 17 00:00:00 2001
+From f8e27fb215b6abd12293b6cf16ceeeb2d3c709b1 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit fa402b1e02b71a20aabef27733cbb75431363620 ]
+
@@ -18 +19,0 @@
-Cc: stable@dpdk.org
@@ -26 +27 @@
-index 9fd24fa444..0c678e1557 100644
+index 86a5ef5486..b4f7e6c9d5 100644