From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Gregory Etelson
Subject: [PATCH v4 15/18] net/mlx5: support flow integrity in HWS group 0
Date: Wed, 19 Oct 2022 19:25:25 +0300
Message-ID: <20221019162528.11045-16-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221019162528.11045-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
 <20221019162528.11045-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Gregory Etelson

- Reformat flow integrity item translation for HWS code.
- Support flow integrity bits in HWS group 0.
- Update integrity item translation to match positive semantics only.
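For context, the positive-only matching semantics adopted here can be sketched in plain C. This is an illustrative model only: the flag names, the `integrity_mask` struct, and `translate_l4()` are hypothetical stand-ins, not the driver's real matcher layout. A set `l4_ok` mask bit requires both hardware integrity bits; otherwise a set `l4_csum_ok` mask bit requires the checksum bit alone, mirroring the mask-driven logic below.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit flags standing in for the hardware matcher fields. */
#define HW_L4_OK       (1u << 0)
#define HW_L4_CSUM_OK  (1u << 1)

/* Hypothetical subset of the integrity item mask bits. */
struct integrity_mask {
	bool l4_ok;
	bool l4_csum_ok;
};

/*
 * Positive-semantics translation: only the mask is consulted, never the
 * spec value, because a positive match always sets the hardware bits to 1.
 */
static uint32_t
translate_l4(const struct integrity_mask *mask)
{
	uint32_t hw = 0;

	if (mask->l4_ok)
		hw = HW_L4_OK | HW_L4_CSUM_OK; /* l4_ok aggregates both bits */
	else if (mask->l4_csum_ok)
		hw = HW_L4_CSUM_OK;            /* checksum bit alone */
	return hw;
}
```

Because the spec value is no longer read, the same translation routine can serve both the SWS mask/value pair and the HWS single-pointer case where mask and value alias each other.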
Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow.h    |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 163 ++++++++++++++++----------------
 drivers/net/mlx5/mlx5_flow_hw.c |   8 ++
 3 files changed, 90 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 57cebb5ce6..ddc23aaf9c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1470,6 +1470,7 @@ struct mlx5_dv_matcher_workspace {
 	struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */
 	const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */
 	const struct rte_flow_item *gre_item; /* Flow GRE item. */
+	const struct rte_flow_item *integrity_items[2];
 };
 
 struct mlx5_flow_split_info {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3cc4b9bcd4..1497423891 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -12648,132 +12648,121 @@ flow_dv_aso_age_params_init(struct rte_eth_dev *dev,
 
 static void
 flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v)
+			       void *headers)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l4_ok) {
 		/* RTE l4_ok filter aggregates hardware l4_ok and
 		 * l4_checksum_ok filters.
 		 * Positive RTE l4_ok match requires hardware match on both L4
 		 * hardware integrity bits.
-		 * For negative match, check hardware l4_checksum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L4.
+		 * PMD supports positive integrity item semantics only.
 		 */
-		if (value->l4_ok) {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
-		}
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 !!value->l4_ok);
-	}
-	if (mask->l4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 value->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_ok, 1);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
 	}
 }
 
 static void
 flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v, bool is_ipv4)
+			       void *headers, bool is_ipv4)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l3_ok) {
 		/* RTE l3_ok filter aggregates for IPv4 hardware l3_ok and
 		 * ipv4_csum_ok filters.
 		 * Positive RTE l3_ok match requires hardware match on both L3
 		 * hardware integrity bits.
-		 * For negative match, check hardware l3_csum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L3.
+		 * PMD supports positive integrity item semantics only.
 		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l3_ok, 1);
 		if (is_ipv4) {
-			if (value->l3_ok) {
-				MLX5_SET(fte_match_set_lyr_2_4, headers_m,
-					 l3_ok, 1);
-				MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-					 l3_ok, 1);
-			}
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m,
+			MLX5_SET(fte_match_set_lyr_2_4, headers,
 				 ipv4_checksum_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 ipv4_checksum_ok, !!value->l3_ok);
-		} else {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok,
-				 value->l3_ok);
 		}
-	}
-	if (mask->ipv4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
-			 value->ipv4_csum_ok);
+	} else if (is_ipv4 && mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, ipv4_checksum_ok, 1);
 	}
 }
 
 static void
-set_integrity_bits(void *headers_m, void *headers_v,
-		   const struct rte_flow_item *integrity_item, bool is_l3_ip4)
+set_integrity_bits(void *headers, const struct rte_flow_item *integrity_item,
+		   bool is_l3_ip4, uint32_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = integrity_item->spec;
-	const struct rte_flow_item_integrity *mask = integrity_item->mask;
+	const struct rte_flow_item_integrity *spec;
+	const struct rte_flow_item_integrity *mask;
 
 	/* Integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (!mask)
-		mask = &rte_flow_item_integrity_mask;
-	flow_dv_translate_integrity_l3(mask, spec, headers_m, headers_v,
-				       is_l3_ip4);
-	flow_dv_translate_integrity_l4(mask, spec, headers_m, headers_v);
+	if (MLX5_ITEM_VALID(integrity_item, key_type))
+		return;
+	MLX5_ITEM_UPDATE(integrity_item, key_type, spec, mask,
+			 &rte_flow_item_integrity_mask);
+	flow_dv_translate_integrity_l3(mask, headers, is_l3_ip4);
+	flow_dv_translate_integrity_l4(mask, headers);
 }
 
 static void
-flow_dv_translate_item_integrity_post(void *matcher, void *key,
+flow_dv_translate_item_integrity_post(void *key,
 				      const
 				      struct rte_flow_item *integrity_items[2],
-				      uint64_t pattern_flags)
+				      uint64_t pattern_flags, uint32_t key_type)
 {
-	void *headers_m, *headers_v;
+	void *headers;
 	bool is_l3_ip4;
 
 	if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 inner_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[1], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[1], is_l3_ip4,
+				   key_type);
 	}
 	if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 outer_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[0], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[0], is_l3_ip4,
+				   key_type);
 	}
 }
 
-static void
+static uint64_t
 flow_dv_translate_item_integrity(const struct rte_flow_item *item,
-				 const struct rte_flow_item *integrity_items[2],
-				 uint64_t *last_item)
+				 struct mlx5_dv_matcher_workspace *wks,
+				 uint64_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = (typeof(spec))item->spec;
+	if ((key_type & MLX5_SET_MATCHER_SW) != 0) {
+		const struct rte_flow_item_integrity
+			*spec = (typeof(spec))item->spec;
 
-	/* integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (spec->level > 1) {
-		integrity_items[1] = item;
-		*last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		/* SWS integrity bits validation cleared spec pointer */
+		if (spec->level > 1) {
+			wks->integrity_items[1] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		} else {
+			wks->integrity_items[0] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		}
 	} else {
-		integrity_items[0] = item;
-		*last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		/* HWS supports outer integrity only */
+		wks->integrity_items[0] = item;
+		wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	}
+	return wks->last_item;
 }
 
 /**
@@ -13401,6 +13390,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 		flow_dv_translate_item_meter_color(dev, key, items, key_type);
 		last_item = MLX5_FLOW_ITEM_METER_COLOR;
 		break;
+	case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+		last_item = flow_dv_translate_item_integrity(items,
+							     wks, key_type);
+		break;
 	default:
 		break;
 	}
@@ -13464,6 +13457,12 @@ flow_dv_translate_items_hws(const struct rte_flow_item *items,
 		if (ret)
 			return ret;
 	}
+	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
+		flow_dv_translate_item_integrity_post(key,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      key_type);
+	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(key,
 						 wks.tunnel_item,
@@ -13544,7 +13543,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			mlx5_flow_get_thread_workspace())->rss_desc,
 	};
 	struct mlx5_dv_matcher_workspace wks_m = wks;
-	const struct rte_flow_item *integrity_items[2] = {NULL, NULL};
 	int ret = 0;
 	int tunnel;
 
@@ -13555,10 +13553,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 						  NULL, "item not supported");
 		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		switch (items->type) {
-		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			flow_dv_translate_item_integrity(items, integrity_items,
-							 &wks.last_item);
-			break;
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
@@ -13601,9 +13595,14 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
-		flow_dv_translate_item_integrity_post(match_mask, match_value,
-						      integrity_items,
-						      wks.item_flags);
+		flow_dv_translate_item_integrity_post(match_mask,
+						      wks_m.integrity_items,
+						      wks_m.item_flags,
+						      MLX5_SET_MATCHER_SW_M);
+		flow_dv_translate_item_integrity_post(match_value,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      MLX5_SET_MATCHER_SW_V);
 	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(match_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2792a0fc39..3cbe0305e9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4655,6 +4655,14 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 	case RTE_FLOW_ITEM_TYPE_ICMP6:
 	case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 		break;
+	case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+		/*
+		 * Integrity flow item validation requires access to
+		 * both item mask and spec.
+		 * Current HWS model allows item mask in pattern
+		 * template and item spec in flow rule.
+		 */
+		break;
 	case RTE_FLOW_ITEM_TYPE_END:
 		items_end = true;
 		break;
-- 
2.25.1