From: Rongwei Liu <rongweil@nvidia.com>
Subject: [RFC 5/9] net/mlx5/hws: add IPv6 routing extension matching support
Date: Wed, 21 Dec 2022 10:43:00 +0200
Message-ID: <20221221084304.3680690-6-rongweil@nvidia.com>
In-Reply-To: <20221221084304.3680690-1-rongweil@nvidia.com>
References: <20221221084304.3680690-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions

Add mlx5 HWS logic to match the IPv6 routing extension header.

Once IPv6 routing extension items are detected in the pattern template
create callback, the PMD allocates a flex parser to sample the first
dword of the SRv6 header.

Only next_hdr/segments_left/type are supported for now.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini     |   1 +
 doc/guides/nics/mlx5.rst              |   1 +
 drivers/net/mlx5/hws/mlx5dr.h         |  21 ++++++
 drivers/net/mlx5/hws/mlx5dr_context.c |  81 +++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_context.h |   1 +
 drivers/net/mlx5/hws/mlx5dr_definer.c | 103 ++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h               |  10 +++
 drivers/net/mlx5/mlx5_flow.h          |   3 +
 drivers/net/mlx5/mlx5_flow_hw.c       |  39 ++++++++--
 9 files changed, 250 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 62fd330e2b..bd911a467b 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -87,6 +87,7 @@ vlan = Y
 vxlan = Y
 vxlan_gpe = Y
 represented_port = Y
+ipv6_routing_ext = Y
 
 [rte_flow actions]
 age = I
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 51f51259e3..98dcf9af16 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -106,6 +106,7 @@ Features
 - Sub-Function representors.
 - Sub-Function.
 - Matching on represented port.
+- Matching on IPv6 routing extension header.
 
 
 Limitations
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index f8de27c615..ba1566de9f 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -592,4 +592,25 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx,
  */
 int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f);
 
+/* Allocate an internal flex parser for srv6 option.
+ *
+ * @param[in] dr_ctx
+ *	The dr_context which the flex parser belongs to.
+ * @param[in] config
+ *	Devx configuration per port.
+ * @param[in] ctx
+ *	Device context.
+ * @return zero on success, non-zero otherwise.
+ */
+int mlx5dr_alloc_srh_flex_parser(struct mlx5dr_context *dr_ctx,
+				 struct mlx5_common_dev_config *config,
+				 void *ctx);
+
+/* Free srv6 flex parser.
+ *
+ * @param[in] dr_ctx
+ *	The dr_context which the flex parser belongs to.
+ * @return zero on success, non-zero otherwise.
+ */
+int mlx5dr_free_srh_flex_parser(struct mlx5dr_context *dr_ctx);
 #endif
diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c
index 76ada7bb7f..6329271ff6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.c
+++ b/drivers/net/mlx5/hws/mlx5dr_context.c
@@ -178,6 +178,76 @@ static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx)
 	mlx5dr_context_uninit_pd(ctx);
 }
 
+int mlx5dr_alloc_srh_flex_parser(struct mlx5dr_context *dr_ctx,
+				 struct mlx5_common_dev_config *config,
+				 void *ctx)
+{
+	struct mlx5_devx_graph_node_attr node = {
+		.modify_field_select = 0,
+	};
+	struct mlx5_ext_sample_id ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+	int ret;
+
+	memset(ids, 0xff, sizeof(ids));
+	if (!config->hca_attr.parse_graph_flex_node) {
+		DR_LOG(ERR, "Dynamic flex parser is not supported");
+		return -ENOTSUP;
+	}
+	if (__atomic_add_fetch(&dr_ctx->srh_flex_parser->refcnt, 1, __ATOMIC_RELAXED) > 1)
+		return 0;
+
+	node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+	/* Srv6 first two DW are not counted in. */
+	node.header_length_base_value = 0x8;
+	/* The unit is uint64_t. */
+	node.header_length_field_shift = 0x3;
+	/* Header length is the 2nd byte. */
+	node.header_length_field_offset = 0x8;
+	node.header_length_field_mask = 0xF;
+	/* One byte next header protocol. */
+	node.next_header_field_size = 0x8;
+	node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
+	node.in[0].compare_condition_value = IPPROTO_ROUTING;
+	node.sample[0].flow_match_sample_en = 1;
+	/* First come first serve, no matter inner or outer. */
+	node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+	node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
+	node.out[0].compare_condition_value = IPPROTO_TCP;
+	node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
+	node.out[1].compare_condition_value = IPPROTO_UDP;
+	node.out[2].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IPV6;
+	node.out[2].compare_condition_value = IPPROTO_IPV6;
+
+	dr_ctx->srh_flex_parser->fp = mlx5_devx_cmd_create_flex_parser(ctx, &node);
+	if (!dr_ctx->srh_flex_parser->fp) {
+		DR_LOG(ERR, "Failed to create flex parser node object.");
+		return (rte_errno == 0) ? -ENODEV : -rte_errno;
+	}
+	dr_ctx->srh_flex_parser->num = 1;
+	ret = mlx5_devx_cmd_query_parse_samples(dr_ctx->srh_flex_parser->fp, ids,
+						dr_ctx->srh_flex_parser->num,
+						&dr_ctx->srh_flex_parser->anchor_id);
+	if (ret) {
+		DR_LOG(ERR, "Failed to query sample IDs.");
+		return (rte_errno == 0) ? -ENODEV : -rte_errno;
+	}
+	dr_ctx->srh_flex_parser->offset[0] = 0x0;
+	dr_ctx->srh_flex_parser->ids[0].id = ids[0].id;
+	return 0;
+}
+
+int mlx5dr_free_srh_flex_parser(struct mlx5dr_context *dr_ctx)
+{
+	struct mlx5_internal_flex_parser_profile *fp = dr_ctx->srh_flex_parser;
+
+	if (__atomic_sub_fetch(&fp->refcnt, 1, __ATOMIC_RELAXED))
+		return 0;
+	if (fp->fp)
+		mlx5_devx_cmd_destroy(fp->fp);
+	fp->fp = NULL;
+	return 0;
+}
+
 struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx,
 					   struct mlx5dr_context_attr *attr)
 {
@@ -197,16 +267,22 @@ struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx,
 	if (!ctx->caps)
 		goto free_ctx;
 
+	ctx->srh_flex_parser = simple_calloc(1, sizeof(*ctx->srh_flex_parser));
+	if (!ctx->srh_flex_parser)
+		goto free_caps;
+
 	ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps);
 	if (ret)
-		goto free_caps;
+		goto free_flex;
 
 	ret = mlx5dr_context_init_hws(ctx, attr);
 	if (ret)
-		goto free_caps;
+		goto free_flex;
 
 	return ctx;
 
+free_flex:
+	simple_free(ctx->srh_flex_parser);
 free_caps:
 	simple_free(ctx->caps);
 free_ctx:
@@ -217,6 +293,7 @@ struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx,
 int mlx5dr_context_close(struct mlx5dr_context *ctx)
 {
 	mlx5dr_context_uninit_hws(ctx);
+	simple_free(ctx->srh_flex_parser);
 	simple_free(ctx->caps);
 	pthread_spin_destroy(&ctx->ctrl_lock);
 	simple_free(ctx);
diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h
index b0c7802daf..c1c627aced 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.h
+++ b/drivers/net/mlx5/hws/mlx5dr_context.h
@@ -35,6 +35,7 @@ struct mlx5dr_context {
 	struct mlx5dr_send_engine *send_queue;
 	size_t queues;
 	LIST_HEAD(table_head, mlx5dr_table) head;
+	struct mlx5_internal_flex_parser_profile *srh_flex_parser;
 };
 
 #endif /* MLX5DR_CONTEXT_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 10b1e43d6e..09acd5d719 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -125,6 +125,7 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	ipv4_frag,		v->fragment_offset,	rte_ipv4_hdr) \
 	X(SET_BE16,	ipv6_payload_len,	v->hdr.payload_len,	rte_flow_item_ipv6) \
 	X(SET,		ipv6_proto,		v->hdr.proto,		rte_flow_item_ipv6) \
+	X(SET,		ipv6_routing_hdr,	IPPROTO_ROUTING,	rte_flow_item_ipv6) \
 	X(SET,		ipv6_hop_limits,	v->hdr.hop_limits,	rte_flow_item_ipv6) \
 	X(SET_BE32P,	ipv6_src_addr_127_96,	&v->hdr.src_addr[0],	rte_flow_item_ipv6) \
 	X(SET_BE32P,	ipv6_src_addr_95_64,	&v->hdr.src_addr[4],	rte_flow_item_ipv6) \
@@ -293,6 +294,18 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
 	DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
 
+static void
+mlx5dr_definer_ipv6_routing_ext_set(struct mlx5dr_definer_fc *fc,
+				    const void *item,
+				    uint8_t *tag)
+{
+	const struct rte_flow_item_ipv6_routing_ext *v = item;
+	uint32_t val = 0;
+
+	val = v->hdr.nexthdr << 24 | v->hdr.type << 8 | v->hdr.segments_left;
+	DR_SET_BE32(tag, RTE_BE32(val), fc->byte_off, 0, fc->bit_mask);
+}
+
 static void
 mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc,
 			   const void *item_spec,
@@ -1468,6 +1481,91 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }
 
+static int
+mlx5dr_definer_conv_item_ipv6_routing_ext(struct mlx5dr_definer_conv_data *cd,
+					  struct rte_flow_item *item,
+					  int item_idx)
+{
+	struct mlx5_internal_flex_parser_profile *fp = cd->ctx->srh_flex_parser;
+	enum mlx5dr_definer_fname i = MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+	const struct rte_flow_item_ipv6_routing_ext *m = item->mask;
+	uint32_t byte_off = fp->ids[0].format_select_dw * 4;
+	struct mlx5dr_definer_fc *fc;
+	bool inner = cd->tunnel;
+
+	if (!m)
+		return 0;
+
+	if (!fp->num)
+		return -1;
+
+	if (!cd->relaxed) {
+		fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ipv6_version_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l3_type, inner);
+
+		/* Overwrite - Unset ethertype if present */
+		memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc));
+		fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+		if (!fc->tag_set) {
+			fc->item_idx = item_idx;
+			fc->tag_set = &mlx5dr_definer_ipv6_routing_hdr_set;
+			fc->tag_mask_set = &mlx5dr_definer_ones_set;
+			DR_CALC_SET(fc, eth_l3, protocol_next_header, inner);
+		}
+	}
+
+	if (m->hdr.nexthdr || m->hdr.type || m->hdr.segments_left) {
+		for (; i <= MLX5DR_DEFINER_FNAME_FLEX_PARSER_7; i++) {
+			switch (i) {
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_0:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_0];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_0);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_1:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_1];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_1);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_2:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_2];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_2);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_3:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_3];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_3);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_4:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_4];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_4);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_5:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_5];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_5);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_6:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_6];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_6);
+				break;
+			case MLX5DR_DEFINER_FNAME_FLEX_PARSER_7:
+			default:
+				fc = &cd->fc[MLX5DR_DEFINER_FNAME_FLEX_PARSER_7];
+				DR_CALC_SET_HDR(fc, flex_parser, flex_parser_7);
+				break;
+			}
+			if (fc->byte_off == byte_off)
+				break;
+		}
+		if (i > MLX5DR_DEFINER_FNAME_FLEX_PARSER_7)
+			return -ENOTSUP;
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ipv6_routing_ext_set;
+		fc->fname = i;
+	}
+	return 0;
+}
+
 static int
 mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 				struct mlx5dr_match_template *mt,
@@ -1584,6 +1682,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
 			item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+			ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
+			item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+						  MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+			break;
 		default:
 			DR_LOG(ERR, "Unsupported item type %d", items->type);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1c11b77ac3..6dbd5f9622 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -543,6 +543,16 @@ struct mlx5_counter_stats_raw {
 	volatile struct flow_counter_stats *data;
 };
 
+/* Mlx5 internal flex parser profile structure. */
+struct mlx5_internal_flex_parser_profile {
+	uint32_t num; /* Actual number of samples. */
+	struct mlx5_ext_sample_id ids[MLX5_FLEX_ITEM_MAPPING_NUM]; /* Sample IDs for this profile. */
+	uint32_t offset[MLX5_FLEX_ITEM_MAPPING_NUM]; /* Each ID sample offset. */
+	uint8_t anchor_id;
+	uint32_t refcnt;
+	void *fp; /* DevX flex parser object. */
+};
+
 TAILQ_HEAD(mlx5_counter_pools, mlx5_flow_counter_pool);
 
 /* Counter global management structure. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1f57ecd6e1..81e2bc47a0 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -218,6 +218,9 @@ enum mlx5_feature_name {
 /* Meter color item */
 #define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
 
+/* IPv6 routing extension item */
+#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
+#define MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT (UINT64_C(1) << 46)
 
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 20c71ff7f0..ff52eb28f0 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -213,23 +213,25 @@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
 }
 
 /**
- * Generate the pattern item flags.
+ * Generate the matching pattern item flags.
  * Will be used for shared RSS action.
  *
  * @param[in] items
  *   Pointer to the list of items.
+ * @param[out] flags
+ *   Flags superset including non-RSS items.
  *
  * @return
- *   Item flags.
+ *   RSS item flags.
  */
 static uint64_t
-flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
+flow_hw_matching_item_flags_get(const struct rte_flow_item items[], uint64_t *flags)
 {
-	uint64_t item_flags = 0;
 	uint64_t last_item = 0;
 
+	*flags = 0;
 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
-		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+		int tunnel = !!(*flags & MLX5_FLOW_LAYER_TUNNEL);
 		int item_type = items->type;
 
 		switch (item_type) {
@@ -249,6 +251,10 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
 					     MLX5_FLOW_LAYER_OUTER_L4_UDP;
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+			last_item = tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+					     MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
@@ -273,9 +279,10 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
 		default:
 			break;
 		}
-		item_flags |= last_item;
+		*flags |= last_item;
 	}
-	return item_flags;
+	return *flags & ~(MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT |
+			  MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT);
 }
 
 /**
@@ -4732,6 +4739,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 	case RTE_FLOW_ITEM_TYPE_ICMP:
 	case RTE_FLOW_ITEM_TYPE_ICMP6:
 	case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+	case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
 		break;
 	case RTE_FLOW_ITEM_TYPE_INTEGRITY:
 		/*
@@ -4809,6 +4817,8 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		.mask = &tag_m,
 		.last = NULL
 	};
+	struct mlx5dr_context *dr_ctx = priv->dr_ctx;
+	uint64_t flags;
 
 	if (flow_hw_pattern_validate(dev, attr, items, error))
 		return NULL;
@@ -4860,7 +4870,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 				   "cannot create match template");
 		return NULL;
 	}
-	it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
+	it->item_flags = flow_hw_matching_item_flags_get(tmpl_items, &flags);
 	if (copied_items) {
 		if (attr->ingress)
 			it->implicit_port = true;
@@ -4868,6 +4878,15 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 			it->implicit_tag = true;
 		mlx5_free(copied_items);
 	}
+	if (flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+		     MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) {
+		if (mlx5dr_alloc_srh_flex_parser(dr_ctx, &priv->sh->cdev->config,
+						 priv->sh->cdev->ctx)) {
+			claim_zero(mlx5dr_match_template_destroy(it->mt));
+			mlx5_free(it);
+			return NULL;
+		}
+	}
 	__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
 	return it;
@@ -4891,6 +4910,9 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
 			       struct rte_flow_pattern_template *template,
 			       struct rte_flow_error *error __rte_unused)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_context *dr_ctx = priv->dr_ctx;
+
 	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
 		DRV_LOG(WARNING, "Item template %p is still in use.",
 			(void *)template);
@@ -4899,6 +4921,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
 				   NULL,
 				   "item template in using");
 	}
+	mlx5dr_free_srh_flex_parser(dr_ctx);
 	LIST_REMOVE(template, next);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
 	mlx5_free(template);
-- 
2.27.0