From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
Cc: Viacheslav Ovsiienko, Shahaf Shuler
Date: Thu, 29 Apr 2021 09:16:32 +0300
Message-ID: <20210429061634.3481-4-getelson@nvidia.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210429061634.3481-1-getelson@nvidia.com>
References: <20210428175906.21387-1-getelson@nvidia.com>
 <20210429061634.3481-1-getelson@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v2 3/4] net/mlx5: support integrity flow item

The MLX5 PMD supports the following integrity filters for outer and
inner network headers:

- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok

`level` values 0 and 1 reference outer headers;
`level` values above 1 reference inner headers.

Flow rule items supplied by the application must explicitly specify
the network headers referred to by the integrity item. For example:

 flow create 0 ingress
   pattern integrity level is 0 value mask l3_ok value spec l3_ok /
   eth / ipv6 / end …

or

 flow create 0 ingress
   pattern integrity level is 0 value mask l4_ok value spec 0 /
   eth / ipv4 proto is udp / end …

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c    |  25 ++++
 drivers/net/mlx5/mlx5_flow.h    |  26 ++++
 drivers/net/mlx5/mlx5_flow_dv.c | 258 ++++++++++++++++++++++++++++++++
 3 files changed, 309 insertions(+)
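Note (this text sits between the diffstat and the first diff header, so
git-am ignores it): a minimal sketch of how an application could request
the same match as the first testpmd example above through the rte_flow
C API. The DROP action and the port_id parameter are placeholders, since
the example elides the actions.

	#include <rte_flow.h>

	static struct rte_flow *
	match_outer_l3_ok(uint16_t port_id, struct rte_flow_error *err)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		/* level 0 -> outer headers; match packets whose outer L3
		 * header passed device validation (l3_ok == 1).
		 */
		struct rte_flow_item_integrity spec = {
			.level = 0,
			.l3_ok = 1,
		};
		struct rte_flow_item_integrity mask = { .l3_ok = 1 };
		struct rte_flow_item pattern[] = {
			{
				.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
				.spec = &spec,
				.mask = &mask,
			},
			/* The integrity item only names the checks; the
			 * headers it refers to must still be present.
			 */
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_DROP },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		return rte_flow_create(port_id, &attr, pattern, actions, err);
	}

The explicit eth / ipv6 items are required because, as the commit message
states, the rule must spell out the network headers the integrity item
refers to; validation below rejects rules where they are missing.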
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 15ed5ec7a2..db9a251c68 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8083,6 +8083,31 @@ mlx5_action_handle_flush(struct rte_eth_dev *dev)
 	return ret;
 }
 
+const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		case RTE_FLOW_ITEM_TYPE_GRE:
+		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
+			return item;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item[1].type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+			    item[1].type == RTE_FLOW_ITEM_TYPE_IPV6)
+				return item;
+			break;
+		}
+	}
+	return NULL;
+}
+
 #ifndef HAVE_MLX5DV_DR
 #define MLX5_DOMAIN_SYNC_FLOW ((1 << 0) | (1 << 1))
 #else
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 56908ae08b..eb7035d259 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -145,6 +145,9 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE_OPT (UINT64_C(1) << 32)
 #define MLX5_FLOW_LAYER_GTP_PSC (UINT64_C(1) << 33)
 
+/* INTEGRITY item bit */
+#define MLX5_FLOW_ITEM_INTEGRITY (UINT64_C(1) << 34)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1010,6 +1013,20 @@ struct rte_flow {
 	(MLX5_RSS_HASH_IPV6 | IBV_RX_HASH_DST_PORT_TCP)
 #define MLX5_RSS_HASH_NONE 0ULL
 
+/*
+ * Define integrity bits supported by the PMD
+ */
+#define MLX5_DV_PKT_INTEGRITY_MASK \
+	(RTE_FLOW_ITEM_INTEGRITY_L3_OK | RTE_FLOW_ITEM_INTEGRITY_L4_OK | \
+	 RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK | \
+	 RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK)
+
+#define MLX5_ETHER_TYPE_FROM_HEADER(_s, _m, _itm, _prt) do { \
+	(_prt) = ((const struct _s *)(_itm)->mask)->_m; \
+	(_prt) &= ((const struct _s *)(_itm)->spec)->_m; \
+	(_prt) = rte_be_to_cpu_16((_prt)); \
+} while (0)
+
 /* array of valid combinations of RX Hash fields for RSS */
 static const uint64_t mlx5_rss_hash_fields[] = {
 	MLX5_RSS_HASH_IPV4,
@@ -1282,6 +1299,13 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx)
 	return &pool->mtrs[idx % MLX5_ASO_MTRS_PER_POOL];
 }
 
+static __rte_always_inline const struct rte_flow_item *
+mlx5_find_end_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++);
+	return item;
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
@@ -1433,6 +1457,8 @@ struct mlx5_flow_meter_sub_policy *mlx5_flow_meter_sub_policy_rss_prepare
 		struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
+const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item);
 void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id);
 int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466242..2d4042e458 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6230,6 +6230,163 @@ flow_dv_validate_attributes(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static uint16_t
+mlx5_flow_locate_proto_l3(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint16_t l3_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			l3_protocol = RTE_ETHER_TYPE_IPV4;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			l3_protocol = RTE_ETHER_TYPE_IPV6;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_eth,
+							    type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_vlan,
+							    inner_type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		}
+	}
+
+	return 0;
+
+l3_ok:
+	*head = item;
+	return l3_protocol;
+}
+
+static uint8_t
+mlx5_flow_locate_proto_l4(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint8_t l4_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			l4_protocol = IPPROTO_TCP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			l4_protocol = IPPROTO_UDP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv4 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.next_proto_id &
+					      spec->hdr.next_proto_id;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv6 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.proto & spec->hdr.proto;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		}
+	}
+
+	return 0;
+
+l4_ok:
+	*head = item;
+	return l4_protocol;
+}
+
+static int
+flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
+				const struct rte_flow_item *rule_items,
+				const struct rte_flow_item *integrity_item,
+				struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *tunnel_item, *end_item, *item = rule_items;
+	const struct rte_flow_item_integrity *mask = (typeof(mask))
+						     integrity_item->mask;
+	const struct rte_flow_item_integrity *spec = (typeof(spec))
+						     integrity_item->spec;
+	uint32_t protocol;
+
+	if (!priv->config.hca_attr.pkt_integrity_match)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "packet integrity item not supported");
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (mask->value && ((mask->value & ~MLX5_DV_PKT_INTEGRITY_MASK) != 0))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "unsupported integrity filter");
+	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
+	if (spec->level > 1) {
+		if (!tunnel_item)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing tunnel item");
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	if (mask->l3_ok || mask->ipv4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l3(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L3 protocol");
+	}
+	if (mask->l4_ok || mask->l4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l4(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L4 protocol");
+	}
+
+	return 0;
+}
+
 /**
  * Internal validation function. For validating both actions and items.
 *
@@ -6321,6 +6478,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 	};
 	const struct rte_eth_hairpin_conf *conf;
+	const struct rte_flow_item *rule_items = items;
 	bool def_policy = false;
 
 	if (items == NULL)
@@ -6644,6 +6802,18 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			if (item_flags & MLX5_FLOW_ITEM_INTEGRITY)
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 NULL, "multiple integrity items not supported");
+			ret = flow_dv_validate_item_integrity(dev, rule_items,
+							      items, error);
+			if (ret < 0)
+				return ret;
+			last_item = MLX5_FLOW_ITEM_INTEGRITY;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -11119,6 +11289,90 @@ flow_dv_translate_create_aso_age(struct rte_eth_dev *dev,
 	return age_idx;
 }
 
+static void
+flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l4_ok) {
+		/* the application l4_ok filter aggregates all hardware l4
+		 * filters, therefore hw l4_checksum_ok must be implicitly
+		 * added here.
+		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
+		if (value->l4_ok) {
+			/* an application l4_ok = 1 match sets both hw flags
+			 * l4_ok and l4_checksum_ok to 1.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
+		} else {
+			/* an application l4_ok = 0 match is translated to
+			 * hw flag l4_checksum_ok = 0 only.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 0);
+		}
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 mask->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
+			 value->l4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l3_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		if (value->l3_ok) {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok, 1);
+		} else {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 0);
+		}
+	} else if (mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_item_integrity(void *matcher, void *key,
+				 const struct rte_flow_item *item)
+{
+	const struct rte_flow_item_integrity *mask = item->mask;
+	const struct rte_flow_item_integrity *value = item->spec;
+	void *headers_m;
+	void *headers_v;
+
+	if (!value)
+		return;
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (value->level > 1) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	flow_dv_translate_integrity_l3(mask, value, headers_m, headers_v);
+	flow_dv_translate_integrity_l4(mask, value, headers_m, headers_v);
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -12027,6 +12281,10 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			/* No other protocol should follow eCPRI layer. */
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			flow_dv_translate_item_integrity(match_mask,
+							 match_value, items);
+			break;
 		default:
 			break;
 		}
-- 
2.31.1
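
Reviewer's appendix (not part of the patch): the locate helpers above only
trust a protocol that the rule both masks and specifies, aggregating spec
and mask with a bitwise AND before comparing against TCP/UDP. A minimal
self-contained sketch of that rule; effective_l4_proto is a hypothetical
name used only for illustration:

	#include <netinet/in.h>	/* IPPROTO_TCP, IPPROTO_UDP */
	#include <stdint.h>

	/* Mirrors the aggregation in mlx5_flow_locate_proto_l4(): the
	 * masked spec must itself resolve to TCP or UDP, otherwise no L4
	 * protocol is considered located and validation fails with
	 * "missing L4 protocol".
	 */
	static uint8_t
	effective_l4_proto(uint8_t spec_proto, uint8_t mask_proto)
	{
		uint8_t proto = spec_proto & mask_proto;

		return (proto == IPPROTO_TCP || proto == IPPROTO_UDP) ?
		       proto : 0;
	}

For example, an ipv4 item with spec next_proto_id = IPPROTO_UDP and a full
0xff mask locates UDP, while the same spec with a zero mask locates nothing,
so a rule masking l4_ok over it would be rejected.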