From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xueming Li
To: Jiawen Wu
CC: Xueming Li, dpdk stable
Subject: patch 'net/txgbe: fix packet type for FDIR filter' has been queued to stable release 23.11.5
Date: Wed, 30 Jul 2025 22:56:10 +0800
Message-ID: <20250730145633.245984-2-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250730145633.245984-1-xuemingl@nvidia.com>
References: <20250626120145.27369-1-xuemingl@nvidia.com> <20250730145633.245984-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 23.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 08/10/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4c41cedc0a0efc93f060e4bec38337b6eb850690

Thanks.

Xueming Li

---
>From 4c41cedc0a0efc93f060e4bec38337b6eb850690 Mon Sep 17 00:00:00 2001
From: Jiawen Wu
Date: Fri, 13 Jun 2025 16:41:46 +0800
Subject: [PATCH] net/txgbe: fix packet type for FDIR filter
Cc: Xueming Li

[ upstream commit 8d10841e5acd381c7831e421103872d12e806780 ]

To match the packet type more flexibly when the pattern is default,
add packet type mask for FDIR filters.
Fixes: b973ee26747a ("net/txgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/base/txgbe_type.h |  20 +--
 drivers/net/txgbe/txgbe_ethdev.h    |   3 +-
 drivers/net/txgbe/txgbe_fdir.c      |  16 +--
 drivers/net/txgbe/txgbe_flow.c      | 188 +++++++++++++++-------------
 4 files changed, 116 insertions(+), 111 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 89f6017937..3479639ec4 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -88,8 +88,11 @@ enum {
 #define TXGBE_ATR_L4TYPE_UDP 0x1
 #define TXGBE_ATR_L4TYPE_TCP 0x2
 #define TXGBE_ATR_L4TYPE_SCTP 0x3
-#define TXGBE_ATR_TUNNEL_MASK 0x10
-#define TXGBE_ATR_TUNNEL_ANY 0x10
+#define TXGBE_ATR_TYPE_MASK_TUN 0x80
+#define TXGBE_ATR_TYPE_MASK_TUN_OUTIP 0x40
+#define TXGBE_ATR_TYPE_MASK_TUN_TYPE 0x20
+#define TXGBE_ATR_TYPE_MASK_L3P 0x10
+#define TXGBE_ATR_TYPE_MASK_L4P 0x08
 enum txgbe_atr_flow_type {
     TXGBE_ATR_FLOW_TYPE_IPV4 = 0x0,
     TXGBE_ATR_FLOW_TYPE_UDPV4 = 0x1,
@@ -99,14 +102,6 @@ enum txgbe_atr_flow_type {
     TXGBE_ATR_FLOW_TYPE_UDPV6 = 0x5,
     TXGBE_ATR_FLOW_TYPE_TCPV6 = 0x6,
     TXGBE_ATR_FLOW_TYPE_SCTPV6 = 0x7,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4 = 0x10,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4 = 0x11,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4 = 0x12,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4 = 0x13,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6 = 0x14,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6 = 0x15,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6 = 0x16,
-    TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6 = 0x17,
 };

 /* Flow Director ATR input struct. */
@@ -116,11 +111,8 @@ struct txgbe_atr_input {
  *
  * vm_pool - 1 byte
  * flow_type - 1 byte
- * vlan_id - 2 bytes
+ * pkt_type - 2 bytes
  * src_ip - 16 bytes
- * inner_mac - 6 bytes
- * cloud_mode - 2 bytes
- * tni_vni - 4 bytes
  * dst_ip - 16 bytes
  * src_port - 2 bytes
  * dst_port - 2 bytes
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3d94ac7b2d..34280a146b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -90,8 +90,7 @@ struct txgbe_hw_fdir_mask {
     uint16_t dst_port_mask;
     uint16_t flex_bytes_mask;
     uint8_t mac_addr_byte_mask;
-    uint32_t tunnel_id_mask;
-    uint8_t tunnel_type_mask;
+    uint8_t pkt_type_mask; /* reversed mask for hw */
 };

 struct txgbe_fdir_filter {
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 75bf30c00c..0d12fb9a11 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -187,18 +187,12 @@ txgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
         return -ENOTSUP;
     }

-    /*
-     * Program the relevant mask registers. If src/dst_port or src/dst_addr
-     * are zero, then assume a full mask for that field. Also assume that
-     * a VLAN of 0 is unspecified, so mask that out as well. L4type
-     * cannot be masked out in this implementation.
-     */
-    if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0) {
-        /* use the L4 protocol mask for raw IPv4/IPv6 traffic */
-        fdirm |= TXGBE_FDIRMSK_L4P;
-    }
+    /* use the L4 protocol mask for raw IPv4/IPv6 traffic */
+    if (info->mask.pkt_type_mask == 0 && info->mask.dst_port_mask == 0 &&
+        info->mask.src_port_mask == 0)
+        info->mask.pkt_type_mask |= TXGBE_FDIRMSK_L4P;

-    /* TBD: don't support encapsulation yet */
+    fdirm |= info->mask.pkt_type_mask;

     wr32(hw, TXGBE_FDIRMSK, fdirm);

     /* store the TCP/UDP port masks */
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index c9f732e038..3c02f0e891 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1487,8 +1487,41 @@ static inline uint8_t signature_match(const struct rte_flow_item pattern[])
     return 0;
 }

+static void
+txgbe_fdir_parse_flow_type(struct txgbe_atr_input *input, u8 ptid, bool tun)
+{
+    if (!tun)
+        ptid = TXGBE_PTID_PKT_IP;
+
+    switch (input->flow_type & TXGBE_ATR_L4TYPE_MASK) {
+    case TXGBE_ATR_L4TYPE_UDP:
+        ptid |= TXGBE_PTID_TYP_UDP;
+        break;
+    case TXGBE_ATR_L4TYPE_TCP:
+        ptid |= TXGBE_PTID_TYP_TCP;
+        break;
+    case TXGBE_ATR_L4TYPE_SCTP:
+        ptid |= TXGBE_PTID_TYP_SCTP;
+        break;
+    default:
+        break;
+    }
+
+    switch (input->flow_type & TXGBE_ATR_L3TYPE_MASK) {
+    case TXGBE_ATR_L3TYPE_IPV4:
+        break;
+    case TXGBE_ATR_L3TYPE_IPV6:
+        ptid |= TXGBE_PTID_PKT_IPV6;
+        break;
+    default:
+        break;
+    }
+
+    input->pkt_type = cpu_to_be16(ptid);
+}
+
 /**
- * Parse the rule to see if it is a IP or MAC VLAN flow director rule.
+ * Parse the rule to see if it is a IP flow director rule.
  * And get the flow director filter info BTW.
  * UDP/TCP/SCTP PATTERN:
  * The first not void item can be ETH or IPV4 or IPV6
@@ -1555,7 +1588,6 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
     const struct rte_flow_item_sctp *sctp_mask;
     const struct rte_flow_item_raw *raw_mask;
     const struct rte_flow_item_raw *raw_spec;
-    u32 ptype = 0;
     uint8_t j;

     if (!pattern) {
@@ -1585,6 +1617,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      */
     memset(rule, 0, sizeof(struct txgbe_fdir_rule));
     memset(&rule->mask, 0, sizeof(struct txgbe_hw_fdir_mask));
+    rule->mask.pkt_type_mask = TXGBE_ATR_TYPE_MASK_L3P |
+                               TXGBE_ATR_TYPE_MASK_L4P;
+
     memset(&rule->input, 0, sizeof(struct txgbe_atr_input));
     /**
      * The first not void item should be
@@ -1687,7 +1722,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
         }
     } else {
         if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
-            item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+            item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
+            item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+            item->type != RTE_FLOW_ITEM_TYPE_RAW) {
             memset(rule, 0, sizeof(struct txgbe_fdir_rule));
             rte_flow_error_set(error, EINVAL,
                 RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1695,6 +1732,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
             return -rte_errno;
         }
     }
+    if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+        item = next_no_fuzzy_pattern(pattern, item);
 }

 /* Get the IPV4 info. */
@@ -1704,7 +1743,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      * as we must have a flow type.
      */
     rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV4;
-    ptype = txgbe_ptype_table[TXGBE_PT_IPV4];
+    rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
     /*Not supported last point for range*/
     if (item->last) {
         rte_flow_error_set(error, EINVAL,
@@ -1716,31 +1755,26 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      * Only care about src & dst addresses,
      * others should be masked.
      */
-    if (!item->mask) {
-        memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-        rte_flow_error_set(error, EINVAL,
-            RTE_FLOW_ERROR_TYPE_ITEM,
-            item, "Not supported by fdir filter");
-        return -rte_errno;
-    }
-    rule->b_mask = TRUE;
-    ipv4_mask = item->mask;
-    if (ipv4_mask->hdr.version_ihl ||
-        ipv4_mask->hdr.type_of_service ||
-        ipv4_mask->hdr.total_length ||
-        ipv4_mask->hdr.packet_id ||
-        ipv4_mask->hdr.fragment_offset ||
-        ipv4_mask->hdr.time_to_live ||
-        ipv4_mask->hdr.next_proto_id ||
-        ipv4_mask->hdr.hdr_checksum) {
-        memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-        rte_flow_error_set(error, EINVAL,
-            RTE_FLOW_ERROR_TYPE_ITEM,
-            item, "Not supported by fdir filter");
-        return -rte_errno;
+    if (item->mask) {
+        rule->b_mask = TRUE;
+        ipv4_mask = item->mask;
+        if (ipv4_mask->hdr.version_ihl ||
+            ipv4_mask->hdr.type_of_service ||
+            ipv4_mask->hdr.total_length ||
+            ipv4_mask->hdr.packet_id ||
+            ipv4_mask->hdr.fragment_offset ||
+            ipv4_mask->hdr.time_to_live ||
+            ipv4_mask->hdr.next_proto_id ||
+            ipv4_mask->hdr.hdr_checksum) {
+            memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+            rte_flow_error_set(error, EINVAL,
+                RTE_FLOW_ERROR_TYPE_ITEM,
+                item, "Not supported by fdir filter");
+            return -rte_errno;
+        }
+        rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+        rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
     }
-    rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
-    rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;

     if (item->spec) {
         rule->b_spec = TRUE;
@@ -1776,16 +1810,14 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      * as we must have a flow type.
      */
     rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6;
-    ptype = txgbe_ptype_table[TXGBE_PT_IPV6];
+    rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;

     /**
      * 1. must signature match
      * 2. not support last
-     * 3. mask must not null
      */
     if (rule->mode != RTE_FDIR_MODE_SIGNATURE ||
-        item->last ||
-        !item->mask) {
+        item->last) {
         memset(rule, 0, sizeof(struct txgbe_fdir_rule));
         rte_flow_error_set(error, EINVAL,
             RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -1793,42 +1825,44 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
         return -rte_errno;
     }

-    rule->b_mask = TRUE;
-    ipv6_mask = item->mask;
-    if (ipv6_mask->hdr.vtc_flow ||
-        ipv6_mask->hdr.payload_len ||
-        ipv6_mask->hdr.proto ||
-        ipv6_mask->hdr.hop_limits) {
-        memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-        rte_flow_error_set(error, EINVAL,
-            RTE_FLOW_ERROR_TYPE_ITEM,
-            item, "Not supported by fdir filter");
-        return -rte_errno;
-    }
-
-    /* check src addr mask */
-    for (j = 0; j < 16; j++) {
-        if (ipv6_mask->hdr.src_addr[j] == UINT8_MAX) {
-            rule->mask.src_ipv6_mask |= 1 << j;
-        } else if (ipv6_mask->hdr.src_addr[j] != 0) {
+    if (item->mask) {
+        rule->b_mask = TRUE;
+        ipv6_mask = item->mask;
+        if (ipv6_mask->hdr.vtc_flow ||
+            ipv6_mask->hdr.payload_len ||
+            ipv6_mask->hdr.proto ||
+            ipv6_mask->hdr.hop_limits) {
             memset(rule, 0, sizeof(struct txgbe_fdir_rule));
             rte_flow_error_set(error, EINVAL,
                 RTE_FLOW_ERROR_TYPE_ITEM,
                 item, "Not supported by fdir filter");
             return -rte_errno;
         }
-    }
-    /* check dst addr mask */
-    for (j = 0; j < 16; j++) {
-        if (ipv6_mask->hdr.dst_addr[j] == UINT8_MAX) {
-            rule->mask.dst_ipv6_mask |= 1 << j;
-        } else if (ipv6_mask->hdr.dst_addr[j] != 0) {
-            memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-            rte_flow_error_set(error, EINVAL,
-                RTE_FLOW_ERROR_TYPE_ITEM,
-                item, "Not supported by fdir filter");
-            return -rte_errno;
+        /* check src addr mask */
+        for (j = 0; j < 16; j++) {
+            if (ipv6_mask->hdr.src_addr[j] == UINT8_MAX) {
+                rule->mask.src_ipv6_mask |= 1 << j;
+            } else if (ipv6_mask->hdr.src_addr[j] != 0) {
+                memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+                rte_flow_error_set(error, EINVAL,
+                    RTE_FLOW_ERROR_TYPE_ITEM,
+                    item, "Not supported by fdir filter");
+                return -rte_errno;
+            }
+        }
+
+        /* check dst addr mask */
+        for (j = 0; j < 16; j++) {
+            if (ipv6_mask->hdr.dst_addr[j] == UINT8_MAX) {
+                rule->mask.dst_ipv6_mask |= 1 << j;
+            } else if (ipv6_mask->hdr.dst_addr[j] != 0) {
+                memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+                rte_flow_error_set(error, EINVAL,
+                    RTE_FLOW_ERROR_TYPE_ITEM,
+                    item, "Not supported by fdir filter");
+                return -rte_errno;
+            }
         }
     }
@@ -1866,10 +1900,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      * as we must have a flow type.
      */
     rule->input.flow_type |= TXGBE_ATR_L4TYPE_TCP;
-    if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-        ptype = txgbe_ptype_table[TXGBE_PT_IPV6_TCP];
-    else
-        ptype = txgbe_ptype_table[TXGBE_PT_IPV4_TCP];
+    rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
     /*Not supported last point for range*/
     if (item->last) {
         rte_flow_error_set(error, EINVAL,
@@ -1933,10 +1965,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      * as we must have a flow type.
      */
     rule->input.flow_type |= TXGBE_ATR_L4TYPE_UDP;
-    if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-        ptype = txgbe_ptype_table[TXGBE_PT_IPV6_UDP];
-    else
-        ptype = txgbe_ptype_table[TXGBE_PT_IPV4_UDP];
+    rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
     /*Not supported last point for range*/
     if (item->last) {
         rte_flow_error_set(error, EINVAL,
@@ -1995,10 +2025,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
      * as we must have a flow type.
      */
     rule->input.flow_type |= TXGBE_ATR_L4TYPE_SCTP;
-    if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-        ptype = txgbe_ptype_table[TXGBE_PT_IPV6_SCTP];
-    else
-        ptype = txgbe_ptype_table[TXGBE_PT_IPV4_SCTP];
+    rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
     /*Not supported last point for range*/
     if (item->last) {
         rte_flow_error_set(error, EINVAL,
@@ -2163,17 +2191,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
         }
     }

-    rule->input.pkt_type = cpu_to_be16(txgbe_encode_ptype(ptype));
-
-    if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6) {
-        if (rule->input.flow_type & TXGBE_ATR_L4TYPE_MASK)
-            rule->input.pkt_type &= 0xFFFF;
-        else
-            rule->input.pkt_type &= 0xF8FF;
-
-        rule->input.flow_type &= TXGBE_ATR_L3TYPE_MASK |
-            TXGBE_ATR_L4TYPE_MASK;
-    }
+    txgbe_fdir_parse_flow_type(&rule->input, 0, false);

     return txgbe_parse_fdir_act_attr(attr, actions, rule, error);
 }
@@ -2860,6 +2878,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,
             flex_base);
     }

+    fdir_info->mask.pkt_type_mask =
+        fdir_rule.mask.pkt_type_mask;
     ret = txgbe_fdir_set_input_mask(dev);
     if (ret)
         goto out;
--
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2025-07-30 22:50:03.227957362 +0800
+++ 0001-net-txgbe-fix-packet-type-for-FDIR-filter.patch	2025-07-30 22:50:02.908738341 +0800
@@ -1 +1 @@
-From 8d10841e5acd381c7831e421103872d12e806780 Mon Sep 17 00:00:00 2001
+From 4c41cedc0a0efc93f060e4bec38337b6eb850690 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 8d10841e5acd381c7831e421103872d12e806780 ]
@@ -21 +24 @@
-index 4371876649..383438ea3c 100644
+index 89f6017937..3479639ec4 100644
@@ -67 +70 @@
-index 0a3c634937..01e8a9fc05 100644
+index 3d94ac7b2d..34280a146b 100644
@@ -70 +73 @@
-@@ -91,8 +91,7 @@ struct txgbe_hw_fdir_mask {
+@@ -90,8 +90,7 @@ struct txgbe_hw_fdir_mask {
@@ -109 +112 @@
-index 8670c3e1d7..bce88aebd3 100644
+index 75bf30c00c..0d12fb9a11 100644
@@ -291 +294 @@
--        if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
+-        if (ipv6_mask->hdr.src_addr[j] == UINT8_MAX) {
@@ -293 +296 @@
--        } else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
+-        } else if (ipv6_mask->hdr.src_addr[j] != 0) {
@@ -311 +314 @@
--        if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
+-        if (ipv6_mask->hdr.dst_addr[j] == UINT8_MAX) {
@@ -313 +316 @@
--        } else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
+-        } else if (ipv6_mask->hdr.dst_addr[j] != 0) {
@@ -321 +324 @@
-+            if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
++            if (ipv6_mask->hdr.src_addr[j] == UINT8_MAX) {
@@ -323 +326 @@
-+            } else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
++            } else if (ipv6_mask->hdr.src_addr[j] != 0) {
@@ -334 +337 @@
-+            if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
++            if (ipv6_mask->hdr.dst_addr[j] == UINT8_MAX) {
@@ -336 +339 @@
-+            } else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
++            } else if (ipv6_mask->hdr.dst_addr[j] != 0) {
@@ -404 +407 @@
-@@ -2863,6 +2881,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,
+@@ -2860,6 +2878,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,