From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
To: Jiawen Wu
Cc: dpdk stable
Subject: patch 'net/txgbe: fix to create FDIR filter for tunnel packet' has been queued to stable release 24.11.3
Date: Fri, 18 Jul 2025 20:31:16 +0100
Message-ID: <20250718193247.1008129-142-ktraynor@redhat.com>
In-Reply-To: <20250718193247.1008129-1-ktraynor@redhat.com>
References: <20250718193247.1008129-1-ktraynor@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 24.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 07/23/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch.
If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/kevintraynor/dpdk-stable This queued commit can be viewed at: https://github.com/kevintraynor/dpdk-stable/commit/07eee8a0d85b840aff239360a2fcb393dcbcb826 Thanks. Kevin --- >From 07eee8a0d85b840aff239360a2fcb393dcbcb826 Mon Sep 17 00:00:00 2001 From: Jiawen Wu Date: Fri, 13 Jun 2025 16:41:49 +0800 Subject: [PATCH] net/txgbe: fix to create FDIR filter for tunnel packet [ upstream commit a1851465f8252ee75a26d05b9b2d3dca7023e8f2 ] Fix to create FDIR rules for VXLAN/GRE/NVGRE/GENEVE packets, they will match the rules in the inner layers. Fixes: b973ee26747a ("net/txgbe: parse flow director filter") Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe.ini | 2 + drivers/net/txgbe/txgbe_ethdev.h | 1 - drivers/net/txgbe/txgbe_flow.c | 591 +++++++++++++++++++++++------ 3 files changed, 481 insertions(+), 113 deletions(-) diff --git a/doc/guides/nics/features/txgbe.ini b/doc/guides/nics/features/txgbe.ini index be0af3dfad..ef9f0cfa0a 100644 --- a/doc/guides/nics/features/txgbe.ini +++ b/doc/guides/nics/features/txgbe.ini @@ -59,4 +59,6 @@ eth = P e_tag = Y fuzzy = Y +geneve = Y +gre = Y ipv4 = Y ipv6 = Y diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h index 5134c3d99e..288d9a43da 100644 --- a/drivers/net/txgbe/txgbe_ethdev.h +++ b/drivers/net/txgbe/txgbe_ethdev.h @@ -91,5 +91,4 @@ struct txgbe_hw_fdir_mask { uint16_t dst_port_mask; uint16_t flex_bytes_mask; - uint8_t mac_addr_byte_mask; uint8_t pkt_type_mask; /* reversed mask for hw */ }; diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c index b2a2e35351..7482cbbf63 100644 --- a/drivers/net/txgbe/txgbe_flow.c +++ b/drivers/net/txgbe/txgbe_flow.c @@ -2180,39 +2180,27 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused, /** - * Parse the rule to see if it is a VxLAN or NVGRE flow director rule. + * Parse the rule to see if it is a IP tunnel flow director rule. * And get the flow director filter info BTW. - * VxLAN PATTERN: - * The first not void item must be ETH. - * The second not void item must be IPV4/ IPV6. - * The third not void item must be NVGRE. - * The next not void item must be END. - * NVGRE PATTERN: - * The first not void item must be ETH. - * The second not void item must be IPV4/ IPV6. - * The third not void item must be NVGRE. + * PATTERN: + * The first not void item can be ETH or IPV4 or IPV6 or UDP or tunnel type. + * The second not void item must be IPV4 or IPV6 if the first one is ETH. + * The next not void item could be UDP or tunnel type. + * The next not void item could be a certain inner layer. * The next not void item must be END. * ACTION: - * The first not void action should be QUEUE or DROP. - * The second not void optional action should be MARK, - * mark_id is a uint32_t number. + * The first not void action should be QUEUE. * The next not void action should be END. 
- * VxLAN pattern example: + * pattern example: * ITEM Spec Mask * ETH NULL NULL - * IPV4/IPV6 NULL NULL + * IPV4 NULL NULL * UDP NULL NULL - * VxLAN vni{0x00, 0x32, 0x54} {0xFF, 0xFF, 0xFF} - * MAC VLAN tci 0x2016 0xEFFF - * END - * NEGRV pattern example: - * ITEM Spec Mask + * VXLAN NULL NULL * ETH NULL NULL - * IPV4/IPV6 NULL NULL - * NVGRE protocol 0x6558 0xFFFF - * tni{0x00, 0x32, 0x54} {0xFF, 0xFF, 0xFF} - * MAC VLAN tci 0x2016 0xEFFF + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * UDP/TCP/SCTP src_port 80 0xFFFF + * dst_port 80 0xFFFF * END - * other members in mask and spec should set to 0x00. - * item->last should be NULL. */ static int @@ -2225,4 +2213,15 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, const struct rte_flow_item *item; const struct rte_flow_item_eth *eth_mask; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_ipv6 *ipv6_spec; + const struct rte_flow_item_ipv6 *ipv6_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + u8 ptid = 0; uint32_t j; @@ -2253,10 +2252,12 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, */ memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - memset(&rule->mask, 0xFF, sizeof(struct txgbe_hw_fdir_mask)); - rule->mask.vlan_tci_mask = 0; + memset(&rule->mask, 0, sizeof(struct txgbe_hw_fdir_mask)); + rule->mask.pkt_type_mask = TXGBE_ATR_TYPE_MASK_TUN_OUTIP | + TXGBE_ATR_TYPE_MASK_L3P | + TXGBE_ATR_TYPE_MASK_L4P; /** * The first not void item should be - * MAC or IPv4 or IPv6 or UDP or VxLAN. + * MAC or IPv4 or IPv6 or UDP or tunnel. */ item = next_no_void_pattern(pattern, NULL); @@ -2266,5 +2267,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, item->type != RTE_FLOW_ITEM_TYPE_UDP && item->type != RTE_FLOW_ITEM_TYPE_VXLAN && - item->type != RTE_FLOW_ITEM_TYPE_NVGRE) { + item->type != RTE_FLOW_ITEM_TYPE_NVGRE && + item->type != RTE_FLOW_ITEM_TYPE_GRE && + item->type != RTE_FLOW_ITEM_TYPE_GENEVE) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -2274,5 +2277,6 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, } - rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL; + rule->mode = RTE_FDIR_MODE_PERFECT; + ptid = TXGBE_PTID_PKT_TUN; /* Skip MAC. */ @@ -2296,4 +2300,6 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* Check if the next not void item is IPv4 or IPv6. */ item = next_no_void_pattern(pattern, item); + if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) + item = next_no_fuzzy_pattern(pattern, item); if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 && item->type != RTE_FLOW_ITEM_TYPE_IPV6) { @@ -2309,4 +2315,6 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 || item->type == RTE_FLOW_ITEM_TYPE_IPV6) { + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_TUN_OUTIP; + /* Only used to describe the protocol stack. */ if (item->spec || item->mask) { @@ -2325,8 +2333,15 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, } - /* Check if the next not void item is UDP or NVGRE. 
*/ + if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) + ptid |= TXGBE_PTID_TUN_IPV6; + item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_UDP && - item->type != RTE_FLOW_ITEM_TYPE_NVGRE) { + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 && + item->type != RTE_FLOW_ITEM_TYPE_IPV6 && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_VXLAN && + item->type != RTE_FLOW_ITEM_TYPE_GRE && + item->type != RTE_FLOW_ITEM_TYPE_NVGRE && + item->type != RTE_FLOW_ITEM_TYPE_GENEVE) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -2339,4 +2354,29 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* Skip UDP. */ if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + + /* Check if the next not void item is VxLAN or GENEVE. */ + item = next_no_void_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN && + item->type != RTE_FLOW_ITEM_TYPE_GENEVE) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + + /* Skip tunnel. */ + if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN || + item->type == RTE_FLOW_ITEM_TYPE_GRE || + item->type == RTE_FLOW_ITEM_TYPE_NVGRE || + item->type == RTE_FLOW_ITEM_TYPE_GENEVE) { /* Only used to describe the protocol stack. */ if (item->spec || item->mask) { @@ -2355,7 +2395,13 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, } - /* Check if the next not void item is VxLAN. */ + if (item->type == RTE_FLOW_ITEM_TYPE_GRE) + ptid |= TXGBE_PTID_TUN_EIG; + else + ptid |= TXGBE_PTID_TUN_EIGM; + item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) { + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4 && + item->type != RTE_FLOW_ITEM_TYPE_IPV6) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -2366,98 +2412,419 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, } - /* check if the next not void item is MAC */ - item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_ETH) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; - } + /* Get the MAC info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /** + * Only support vlan and dst MAC address, + * others should be masked. + */ + if (item->spec && !item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + + if (item->mask) { + rule->b_mask = TRUE; + eth_mask = item->mask; - /** - * Only support vlan and dst MAC address, - * others should be masked. - */ - - if (!item->mask) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; + /* Ether type should be masked. 
*/ + if (eth_mask->hdr.ether_type) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + + /** + * src MAC address must be masked, + * and don't support dst MAC address mask. + */ + for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { + if (eth_mask->hdr.src_addr.addr_bytes[j] || + eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) { + memset(rule, 0, + sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + + /* When no VLAN, considered as full mask. */ + rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF); + } + + item = next_no_fuzzy_pattern(pattern, item); + if (rule->mask.vlan_tci_mask) { + if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } else { + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 && + item->type != RTE_FLOW_ITEM_TYPE_IPV6 && + item->type != RTE_FLOW_ITEM_TYPE_VLAN) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) { + ptid |= TXGBE_PTID_TUN_EIGMV; + item = next_no_fuzzy_pattern(pattern, item); + } } - /*Not supported last point for range*/ - if (item->last) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - item, "Not supported last point for range"); - return -rte_errno; + + /* Get the IPV4 info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV4; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P; + + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + /** + * Only care about src & dst addresses, + * others should be masked. + */ + if (item->spec && !item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + if (item->mask) { + rule->b_mask = TRUE; + ipv4_mask = item->mask; + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.next_proto_id || + ipv4_mask->hdr.hdr_checksum) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr; + rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr; + } + if (item->spec) { + rule->b_spec = TRUE; + ipv4_spec = item->spec; + rule->input.dst_ip[0] = + ipv4_spec->hdr.dst_addr; + rule->input.src_ip[0] = + ipv4_spec->hdr.src_addr; + } + + /** + * Check if the next not void item is + * TCP or UDP or SCTP or END. 
+ */ + item = next_no_fuzzy_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP && + item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - rule->b_mask = TRUE; - eth_mask = item->mask; - - /* Ether type should be masked. */ - if (eth_mask->hdr.ether_type) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; + + /* Get the IPV6 info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P; + + if (item->last) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + if (item->spec && !item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + if (item->mask) { + rule->b_mask = TRUE; + ipv6_mask = item->mask; + if (ipv6_mask->hdr.vtc_flow || + ipv6_mask->hdr.payload_len || + ipv6_mask->hdr.proto || + ipv6_mask->hdr.hop_limits) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + + /* check src addr mask */ + for (j = 0; j < 16; j++) { + if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) { + rule->mask.src_ipv6_mask |= 1 << j; + } else if (ipv6_mask->hdr.src_addr.a[j] != 0) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + + /* check dst addr mask */ + for (j = 0; j < 16; j++) { + if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) { + rule->mask.dst_ipv6_mask |= 1 << j; + } else if (ipv6_mask->hdr.dst_addr.a[j] != 0) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + } + if (item->spec) { + rule->b_spec = TRUE; + ipv6_spec = item->spec; + rte_memcpy(rule->input.src_ip, + &ipv6_spec->hdr.src_addr, 16); + rte_memcpy(rule->input.dst_ip, + &ipv6_spec->hdr.dst_addr, 16); + } + + /** + * Check if the next not void item is + * TCP or UDP or SCTP or END. + */ + item = next_no_fuzzy_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP && + item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - /* src MAC address should be masked. */ - for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { - if (eth_mask->hdr.src_addr.addr_bytes[j]) { - memset(rule, 0, - sizeof(struct txgbe_fdir_rule)); + /* Get the TCP info. 
*/ + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type |= TXGBE_ATR_L4TYPE_TCP; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P; + + /*Not supported last point for range*/ + if (item->last) { rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + /** + * Only care about src & dst ports, + * others should be masked. + */ + if (!item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); return -rte_errno; } + rule->b_mask = TRUE; + tcp_mask = item->mask; + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.tcp_flags || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.src_port_mask = tcp_mask->hdr.src_port; + rule->mask.dst_port_mask = tcp_mask->hdr.dst_port; + + if (item->spec) { + rule->b_spec = TRUE; + tcp_spec = item->spec; + rule->input.src_port = + tcp_spec->hdr.src_port; + rule->input.dst_port = + tcp_spec->hdr.dst_port; + } } - rule->mask.mac_addr_byte_mask = 0; - for (j = 0; j < ETH_ADDR_LEN; j++) { - /* It's a per byte mask. */ - if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) { - rule->mask.mac_addr_byte_mask |= 0x1 << j; - } else if (eth_mask->hdr.dst_addr.addr_bytes[j]) { + + /* Get the UDP info */ + if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type |= TXGBE_ATR_L4TYPE_UDP; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P; + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + /** + * Only care about src & dst ports, + * others should be masked. + */ + if (!item->mask) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); return -rte_errno; } + rule->b_mask = TRUE; + udp_mask = item->mask; + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.src_port_mask = udp_mask->hdr.src_port; + rule->mask.dst_port_mask = udp_mask->hdr.dst_port; + + if (item->spec) { + rule->b_spec = TRUE; + udp_spec = item->spec; + rule->input.src_port = + udp_spec->hdr.src_port; + rule->input.dst_port = + udp_spec->hdr.dst_port; + } } - /* When no vlan, considered as full mask. */ - rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF); - - /** - * Check if the next not void item is vlan or ipv4. - * IPv6 is not supported. 
- */ - item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_VLAN && - item->type != RTE_FLOW_ITEM_TYPE_IPV4) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; + /* Get the SCTP info */ + if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type |= TXGBE_ATR_L4TYPE_SCTP; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P; + + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + + /** + * Only care about src & dst ports, + * others should be masked. + */ + if (!item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->b_mask = TRUE; + sctp_mask = item->mask; + if (sctp_mask->hdr.tag || sctp_mask->hdr.cksum) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.src_port_mask = sctp_mask->hdr.src_port; + rule->mask.dst_port_mask = sctp_mask->hdr.dst_port; + + if (item->spec) { + rule->b_spec = TRUE; + sctp_spec = item->spec; + rule->input.src_port = + sctp_spec->hdr.src_port; + rule->input.dst_port = + sctp_spec->hdr.dst_port; + } + /* others even sctp port is not supported */ + sctp_mask = item->mask; + if (sctp_mask && + (sctp_mask->hdr.src_port || + sctp_mask->hdr.dst_port || + sctp_mask->hdr.tag || + sctp_mask->hdr.cksum)) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - /*Not supported last point for range*/ - if (item->last) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - item, "Not supported last point for range"); - return -rte_errno; + + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + /* check if the next not void item is END */ + item = next_no_fuzzy_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - /** - * If the tags is 0, it means don't care about the VLAN. - * Do nothing. - */ + txgbe_fdir_parse_flow_type(&rule->input, ptid, true); return txgbe_parse_fdir_act_attr(attr, actions, rule, error); -- 2.50.0 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2025-07-18 20:29:16.009429124 +0100 +++ 0142-net-txgbe-fix-to-create-FDIR-filter-for-tunnel-packe.patch 2025-07-18 20:29:11.111907886 +0100 @@ -1 +1 @@ -From a1851465f8252ee75a26d05b9b2d3dca7023e8f2 Mon Sep 17 00:00:00 2001 +From 07eee8a0d85b840aff239360a2fcb393dcbcb826 Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit a1851465f8252ee75a26d05b9b2d3dca7023e8f2 ] + @@ -10 +11,0 @@ -Cc: stable@dpdk.org @@ -31 +32 @@ -index 01e8a9fc05..c2d0950d2c 100644 +index 5134c3d99e..288d9a43da 100644 @@ -41 +42 @@ -index 145ee8a452..99a76daca0 100644 +index b2a2e35351..7482cbbf63 100644
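---

Usage illustration (not part of the queued patch): with this fix, an application
can install an FDIR rule that matches on the inner headers of a
VXLAN-encapsulated packet, following the pattern documented in the updated
comment (outer ETH / IPV4 / UDP / VXLAN, then inner ETH / IPV4 / UDP, QUEUE
action). The sketch below uses the public rte_flow API; port 0, RX queue 1 and
the helper name create_inner_match_rule() are illustrative assumptions, not
taken from the patch.

/*
 * Illustrative only -- not part of the patch.  Minimal sketch of a rule the
 * fixed tunnel parser accepts: outer headers and the VXLAN item carry no
 * spec/mask (they only describe the protocol stack), matching is done on the
 * inner IPv4 addresses and UDP ports from the pattern example in the commit.
 */
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_ip.h>

static struct rte_flow *
create_inner_match_rule(uint16_t port_id, struct rte_flow_error *err)
{
        /* Inner IPv4: exact match on 192.168.1.20 -> 192.167.3.50. */
        struct rte_flow_item_ipv4 ip_spec = {
                .hdr.src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 20)),
                .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 167, 3, 50)),
        };
        struct rte_flow_item_ipv4 ip_mask = {
                .hdr.src_addr = RTE_BE32(0xffffffff),
                .hdr.dst_addr = RTE_BE32(0xffffffff),
        };
        /* Inner UDP: exact match on both ports. */
        struct rte_flow_item_udp udp_spec = {
                .hdr.src_port = rte_cpu_to_be_16(80),
                .hdr.dst_port = rte_cpu_to_be_16(80),
        };
        struct rte_flow_item_udp udp_mask = {
                .hdr.src_port = RTE_BE16(0xffff),
                .hdr.dst_port = RTE_BE16(0xffff),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },      /* outer L2 */
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },     /* outer L3 */
                { .type = RTE_FLOW_ITEM_TYPE_UDP },      /* outer L4 */
                { .type = RTE_FLOW_ITEM_TYPE_VXLAN },    /* tunnel */
                { .type = RTE_FLOW_ITEM_TYPE_ETH },      /* inner L2 */
                { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                  .spec = &ip_spec, .mask = &ip_mask },
                { .type = RTE_FLOW_ITEM_TYPE_UDP,
                  .spec = &udp_spec, .mask = &udp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* The tunnel FDIR path only accepts a QUEUE action. */
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_attr attr = { .ingress = 1 };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
}

An application would call it as, e.g., create_inner_match_rule(0, &err) after
the port is configured. The point of the fix is that such rules now match on
the inner layers for VXLAN/GRE/NVGRE/GENEVE traffic, as described in the
commit message.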