From: Gregory Etelson
Cc: Viacheslav Ovsiienko, Shahaf Shuler
Date: Thu, 29 Apr 2021 21:36:58 +0300
Message-ID: <20210429183659.14765-4-getelson@nvidia.com>
In-Reply-To: <20210429183659.14765-1-getelson@nvidia.com>
References: <20210428175906.21387-1-getelson@nvidia.com> <20210429183659.14765-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 3/4] net/mlx5: support integrity flow item

The mlx5 PMD supports the following integrity filters for outer and inner
network headers:

- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok

`level` values 0 and 1 refer to the outer headers; `level` greater than 1
refers to the inner headers. Flow rule items supplied by the application
must explicitly specify the network headers referred to by the integrity
item.
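The four supported filters are bits of the `rte_flow_item_integrity`
bit-field union, and the PMD rejects any mask that enables other bits by
clearing the supported ones and checking whether the aggregate `value` is
zero. Below is a minimal, self-contained sketch of that check; the
`integrity_item` struct is a local stand-in modeled on the rte_flow
header, not the DPDK definition itself (bit-field layout is
implementation-defined, which is fine for this illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Local stand-in for struct rte_flow_item_integrity: the named bitfields
 * and the aggregate 64-bit `value` share storage, so clearing the
 * supported bits and testing `value == 0` detects unsupported flags.
 */
struct integrity_item {
	union {
		struct {
			uint64_t packet_ok:1;
			uint64_t l2_ok:1;
			uint64_t l3_ok:1;
			uint64_t l4_ok:1;
			uint64_t l2_crc_ok:1;
			uint64_t ipv4_csum_ok:1;
			uint64_t l4_csum_ok:1;
			uint64_t l3_len_ok:1;
			uint64_t reserved:56;
		};
		uint64_t value;
	};
};

/* Accept only the four filters the PMD implements. */
static bool
validate_integrity_mask(const struct integrity_item *item)
{
	struct integrity_item test = *item;

	test.l3_ok = 0;
	test.l4_ok = 0;
	test.ipv4_csum_ok = 0;
	test.l4_csum_ok = 0;
	return test.value == 0;
}
```

With this model, a mask enabling only `l3_ok` and `l4_csum_ok` passes
validation, while a mask that additionally sets `l2_ok` is rejected.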
For example:

  flow create 0 ingress
    pattern integrity level is 0 value mask l3_ok value spec l3_ok /
    eth / ipv6 / end …

or

  flow create 0 ingress
    pattern integrity level is 0 value mask l4_ok value spec 0 /
    eth / ipv4 proto is udp / end …

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.h    |  29 +++
 drivers/net/mlx5/mlx5_flow_dv.c | 311 ++++++++++++++++++++++++++++++++
 2 files changed, 340 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 56908ae08b..6b3bcf3f46 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -145,6 +145,9 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE_OPT (UINT64_C(1) << 32)
 #define MLX5_FLOW_LAYER_GTP_PSC (UINT64_C(1) << 33)
 
+/* INTEGRITY item bit */
+#define MLX5_FLOW_ITEM_INTEGRITY (UINT64_C(1) << 34)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1010,6 +1013,14 @@ struct rte_flow {
 	(MLX5_RSS_HASH_IPV6 | IBV_RX_HASH_DST_PORT_TCP)
 #define MLX5_RSS_HASH_NONE 0ULL
 
+
+/* extract next protocol type from Ethernet & VLAN headers */
+#define MLX5_ETHER_TYPE_FROM_HEADER(_s, _m, _itm, _prt) do { \
+	(_prt) = ((const struct _s *)(_itm)->mask)->_m; \
+	(_prt) &= ((const struct _s *)(_itm)->spec)->_m; \
+	(_prt) = rte_be_to_cpu_16((_prt)); \
+} while (0)
+
 /* array of valid combinations of RX Hash fields for RSS */
 static const uint64_t mlx5_rss_hash_fields[] = {
 	MLX5_RSS_HASH_IPV4,
@@ -1282,6 +1293,24 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx)
 	return &pool->mtrs[idx % MLX5_ASO_MTRS_PER_POOL];
 }
 
+static __rte_always_inline const struct rte_flow_item *
+mlx5_find_end_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++);
+	return item;
+}
+
+static __rte_always_inline bool
+mlx5_validate_integrity_item(const struct rte_flow_item_integrity *item)
+{
+	struct rte_flow_item_integrity test = *item;
+
+	test.l3_ok = 0;
+	test.l4_ok = 0;
+	test.ipv4_csum_ok = 0;
+	test.l4_csum_ok = 0;
+	return (test.value == 0);
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466242..6d094d7d0e 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -268,6 +268,31 @@ struct field_modify_info modify_tcp[] = {
 	{0, 0, 0},
 };
 
+static const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		case RTE_FLOW_ITEM_TYPE_GRE:
+		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
+			return item;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item[1].type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+			    item[1].type == RTE_FLOW_ITEM_TYPE_IPV6)
+				return item;
+			break;
+		}
+	}
+	return NULL;
+}
+
 static void
 mlx5_flow_tunnel_ip_check(const struct rte_flow_item *item __rte_unused,
 			  uint8_t next_protocol, uint64_t *item_flags,
@@ -6230,6 +6255,158 @@ flow_dv_validate_attributes(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static uint16_t
+mlx5_flow_locate_proto_l3(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint16_t l3_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			l3_protocol = RTE_ETHER_TYPE_IPV4;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			l3_protocol = RTE_ETHER_TYPE_IPV6;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_eth,
+							    type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_vlan,
+							    inner_type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		}
+	}
+	return 0;
+l3_ok:
+	*head = item;
+	return l3_protocol;
+}
+
+static uint8_t
+mlx5_flow_locate_proto_l4(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint8_t l4_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			l4_protocol = IPPROTO_TCP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			l4_protocol = IPPROTO_UDP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv4 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.next_proto_id &
+					      spec->hdr.next_proto_id;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv6 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.proto & spec->hdr.proto;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		}
+	}
+	return 0;
+l4_ok:
+	*head = item;
+	return l4_protocol;
+}
+
+static int
+flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
+				const struct rte_flow_item *rule_items,
+				const struct rte_flow_item *integrity_item,
+				struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *tunnel_item, *end_item, *item = rule_items;
+	const struct rte_flow_item_integrity *mask = (typeof(mask))
+						     integrity_item->mask;
+	const struct rte_flow_item_integrity *spec = (typeof(spec))
+						     integrity_item->spec;
+	uint32_t protocol;
+
+	if (!priv->config.hca_attr.pkt_integrity_match)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "packet integrity match is not supported");
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (!mlx5_validate_integrity_item(mask))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "unsupported integrity filter");
+	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
+	if (spec->level > 1) {
+		if (!tunnel_item)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing tunnel item");
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	if (mask->l3_ok || mask->ipv4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l3(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L3 protocol");
+	}
+	if (mask->l4_ok || mask->l4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l4(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L4 protocol");
+	}
+	return 0;
+}
+
 /**
  * Internal validation function. For validating both actions and items.
  *
@@ -6321,6 +6498,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 	};
 	const struct rte_eth_hairpin_conf *conf;
+	const struct rte_flow_item *rule_items = items;
 	bool def_policy = false;
 
 	if (items == NULL)
@@ -6644,6 +6822,18 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			if (item_flags & MLX5_FLOW_ITEM_INTEGRITY)
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 NULL, "multiple integrity items not supported");
+			ret = flow_dv_validate_item_integrity(dev, rule_items,
+							      items, error);
+			if (ret < 0)
+				return ret;
+			last_item = MLX5_FLOW_ITEM_INTEGRITY;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -11119,6 +11309,121 @@ flow_dv_translate_create_aso_age(struct rte_eth_dev *dev,
 	return age_idx;
 }
 
+static void
+flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l4_ok) {
+		/* application l4_ok filter aggregates all hardware l4 filters
+		 * therefore hw l4_checksum_ok must be implicitly added here.
+		 */
+		struct rte_flow_item_integrity local_item;
+
+		local_item.l4_csum_ok = 1;
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 local_item.l4_csum_ok);
+		if (value->l4_ok) {
+			/* application l4_ok = 1 sets both hw flags
+			 * l4_ok and l4_checksum_ok to 1.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, local_item.l4_csum_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok,
+				 mask->l4_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok,
+				 value->l4_ok);
+		} else {
+			/* application l4_ok = 0 matches on hw flag
+			 * l4_checksum_ok = 0 only.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 0);
+		}
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 mask->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
+			 value->l4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v,
+			       bool is_ipv4)
+{
+	if (mask->l3_ok) {
+		/* application l3_ok filter aggregates all hardware l3 filters
+		 * therefore hw ipv4_checksum_ok must be implicitly added here.
+		 */
+		struct rte_flow_item_integrity local_item;
+
+		local_item.ipv4_csum_ok = !!is_ipv4;
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 local_item.ipv4_csum_ok);
+		if (value->l3_ok) {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, local_item.ipv4_csum_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok,
+				 mask->l3_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok,
+				 value->l3_ok);
+		} else {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 0);
+		}
+	} else if (mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_item_integrity(void *matcher, void *key,
+				 const struct rte_flow_item *head_item,
+				 const struct rte_flow_item *integrity_item)
+{
+	const struct rte_flow_item_integrity *mask = integrity_item->mask;
+	const struct rte_flow_item_integrity *value = integrity_item->spec;
+	const struct rte_flow_item *tunnel_item, *end_item, *item;
+	void *headers_m;
+	void *headers_v;
+	uint32_t l3_protocol;
+
+	if (!value)
+		return;
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (value->level > 1) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	tunnel_item = mlx5_flow_find_tunnel_item(head_item);
+	if (value->level > 1) {
+		/* tunnel item was verified during the item validation */
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		item = head_item;
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	l3_protocol = mask->l3_ok ?
+		      mlx5_flow_locate_proto_l3(&item, end_item) : 0;
+	flow_dv_translate_integrity_l3(mask, value, headers_m, headers_v,
+				       l3_protocol == RTE_ETHER_TYPE_IPV4);
+	flow_dv_translate_integrity_l4(mask, value, headers_m, headers_v);
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -11199,6 +11504,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 		.skip_scale = dev_flow->skip_scale &
 			(1 << MLX5_SCALE_FLOW_GROUP_BIT),
 	};
+	const struct rte_flow_item *head_item = items;
 
 	if (!wks)
 		return rte_flow_error_set(error, ENOMEM,
@@ -12027,6 +12333,11 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			/* No other protocol should follow eCPRI layer. */
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			flow_dv_translate_item_integrity(match_mask,
+							 match_value,
+							 head_item, items);
+			break;
 		default:
 			break;
 		}
-- 
2.31.1
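The protocol-location helpers in the patch trust a protocol value only
where the rule's spec and mask agree, i.e. they compute `spec & mask` and
convert the result from network to host byte order, as the
`MLX5_ETHER_TYPE_FROM_HEADER` macro does. A minimal stand-alone model of
that computation (names here are illustrative, not DPDK API; the
byte-order conversion is written portably rather than with
`rte_be_to_cpu_16`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ETHER_TYPE_IPV4 0x0800
#define ETHER_TYPE_IPV6 0x86DD

/* Portable network-to-host conversion of a 16-bit field. */
static uint16_t be16_to_cpu(uint16_t be)
{
	uint8_t b[2];

	memcpy(b, &be, sizeof(b));
	return (uint16_t)((b[0] << 8) | b[1]);
}

/*
 * Model of MLX5_ETHER_TYPE_FROM_HEADER: the usable ether type is the
 * intersection of the item's spec and mask, in host byte order. A zero
 * mask yields zero, i.e. "no protocol could be located".
 */
static uint16_t ether_type_from_item(uint16_t spec_be, uint16_t mask_be)
{
	return be16_to_cpu(spec_be & mask_be);
}
```

An eth item whose spec carries 0x0800 under a full mask resolves to
`ETHER_TYPE_IPV4`, while the same spec under an all-zero mask resolves to
nothing, which is why validation reports "missing L3 protocol" for rules
that never pin down the ether type.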