From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
Cc: Rongwei Liu, Ori Kam, Suanming Mou, Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 29/30] net/mlx5: implement IPv6 routing push remove
Date: Sun, 29 Oct 2023 18:32:01 +0200
Message-ID: <20231029163202.216450-29-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231029163202.216450-1-getelson@nvidia.com>
References: <20231029163202.216450-1-getelson@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

From: Rongwei Liu

Reserve a push data buffer for each job; its maximum length is set to
128 bytes for now. Only the IPPROTO_ROUTING type is supported when
translating the rte_flow action. Remove actions must be shared globally
and support only TCP or UDP as the next layer.
Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
Acked-by: Suanming Mou
---
 doc/guides/nics/features/mlx5.ini |   2 +
 doc/guides/nics/mlx5.rst          |  11 +-
 drivers/net/mlx5/mlx5.h           |   1 +
 drivers/net/mlx5/mlx5_flow.h      |  21 ++-
 drivers/net/mlx5/mlx5_flow_hw.c   | 282 +++++++++++++++++++++++++++++-
 5 files changed, 307 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index a85d755734..9c943fe5da 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -107,6 +107,8 @@ flag                 = Y
 inc_tcp_ack          = Y
 inc_tcp_seq          = Y
 indirect_list        = Y
+ipv6_ext_push        = Y
+ipv6_ext_remove      = Y
 jump                 = Y
 mark                 = Y
 meter                = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 81cc193f34..7e0a3d4cb8 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -148,7 +148,9 @@ Features
 - Matching on GTP extension header with raw encap/decap action.
 - Matching on Geneve TLV option header with raw encap/decap action.
 - Matching on ESP header SPI field.
+- Matching on flex item with specific pattern.
 - Matching on InfiniBand BTH.
+- Modify flex item field.
 - Modify IPv4/IPv6 ECN field.
 - RSS support in sample action.
 - E-Switch mirroring and jump.
@@ -166,7 +168,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
-
+- Push or remove IPv6 routing extension.
 
 Limitations
 -----------
@@ -728,6 +730,13 @@ Limitations
   The flow engine of a process cannot move from active to standby mode
   if preceding active application rules are still present and vice versa.
 
+- IPv6 routing extension push or remove:
+
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - Supported in non-zero group (no limits on transfer domain if ``fdb_def_rule_en`` = 1, which is the default).
+  - Supports only TCP or UDP as the next layer.
+  - IPv6 routing header must be the only present extension.
+  - Not supported on guest port.
 
 Statistics
 ----------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ad82d8060e..c60886abff 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -373,6 +373,7 @@ struct mlx5_hw_q_job {
 	};
 	void *user_data; /* Job user data. */
 	uint8_t *encap_data; /* Encap data. */
+	uint8_t *push_data; /* IPv6 routing push data. */
 	struct mlx5_modification_cmd *mhdr_cmd;
 	struct rte_flow_item *items;
 	union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8174c03d50..6f4979a575 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -358,6 +358,8 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
 #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
 #define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 47)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 48)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1263,6 +1265,8 @@ typedef int
 			    const struct rte_flow_action *,
 			    struct mlx5dr_rule_action *);
 
+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;
@@ -1309,6 +1313,10 @@ struct mlx5_action_construct_data {
 		struct {
 			cnt_id_t id;
 		} shared_counter;
+		struct {
+			/* IPv6 extension push data len. */
+			uint16_t len;
+		} ipv6_ext;
 		struct {
 			uint32_t id;
 			uint32_t conf_masked:1;
@@ -1353,6 +1361,7 @@ struct rte_flow_actions_template {
 	uint16_t *src_off; /* RTE action displacement from app. template */
 	uint16_t reformat_off; /* Offset of DR reformat action. */
 	uint16_t mhdr_off; /* Offset of DR modify header action. */
+	uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */
 	uint32_t refcnt; /* Reference counter. */
 	uint8_t flex_item; /* flex item index. */
 };
@@ -1376,7 +1385,14 @@ struct mlx5_hw_encap_decap_action {
 	uint8_t data[]; /* Action data. */
 };
 
-#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+/* Push remove action struct. */
+struct mlx5_hw_push_remove_action {
+	struct mlx5dr_action *action; /* Action object. */
+	/* Is push_remove action shared across flows in table. */
+	uint8_t shared;
+	size_t data_size; /* Action metadata size. */
+	uint8_t data[]; /* Action data. */
+};
 
 /* Modify field action struct. */
 struct mlx5_hw_modify_header_action {
@@ -1407,6 +1423,9 @@ struct mlx5_hw_actions {
 	/* Encap/Decap action. */
 	struct mlx5_hw_encap_decap_action *encap_decap;
 	uint16_t encap_decap_pos; /* Encap/Decap action position. */
+	/* Push/remove action. */
+	struct mlx5_hw_push_remove_action *push_remove;
+	uint16_t push_remove_pos; /* Push/remove action position. */
 	uint32_t mark:1; /* Indicate the mark action. */
 	cnt_id_t cnt_id; /* Counter id. */
 	uint32_t mtr_id; /* Meter id. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 84c78ba19c..f19b548a25 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
 		mlx5_free(acts->encap_decap);
 		acts->encap_decap = NULL;
 	}
+	if (acts->push_remove) {
+		if (acts->push_remove->action)
+			mlx5dr_action_destroy(acts->push_remove->action);
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
 	if (acts->mhdr) {
 		flow_hw_template_destroy_mhdr_action(acts->mhdr);
 		mlx5_free(acts->mhdr);
@@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+/**
+ * Append dynamic push action to the dynamic action list.
+ *
+ * @param[in] dev
+ *   Pointer to the port.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ * @param[in] len
+ *   Length of the data to be updated.
+ *
+ * @return
+ *   Data pointer on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline void *
+__flow_hw_act_data_push_append(struct rte_eth_dev *dev,
+			       struct mlx5_hw_actions *acts,
+			       enum rte_flow_action_type type,
+			       uint16_t action_src,
+			       uint16_t action_dst,
+			       uint16_t len)
+{
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return NULL;
+	act_data->ipv6_ext.len = len;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return act_data;
+}
+
 static __rte_always_inline int
 __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     struct mlx5_hw_actions *acts,
@@ -1874,6 +1918,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static int
+mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
+			      const struct mlx5_flow_template_table_cfg *cfg,
+			      struct mlx5_hw_actions *acts,
+			      struct rte_flow_actions_template *at,
+			      uint8_t *push_data, uint8_t *push_data_m,
+			      size_t push_size, uint16_t recom_src,
+			      enum mlx5dr_action_type recom_type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5dr_action_reformat_header hdr = {0};
+	uint32_t flag, bulk = 0;
+
+	flag = mlx5_hw_act_flag[!!attr->group][type];
+	acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO,
+					sizeof(*acts->push_remove) + push_size,
+					0, SOCKET_ID_ANY);
+	if (!acts->push_remove)
+		return -ENOMEM;
+
+	switch (recom_type) {
+	case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+		if (!push_data || !push_size)
+			goto err1;
+		if (!push_data_m) {
+			bulk = rte_log2_u32(table_attr->nb_flows);
+		} else {
+			flag |= MLX5DR_ACTION_FLAG_SHARED;
+			acts->push_remove->shared = 1;
+		}
+		acts->push_remove->data_size = push_size;
+		memcpy(acts->push_remove->data, push_data, push_size);
+		hdr.data = push_data;
+		hdr.sz = push_size;
+		break;
+	case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+		flag |= MLX5DR_ACTION_FLAG_SHARED;
+		acts->push_remove->shared = 1;
+		break;
+	default:
+		break;
+	}
+
+	acts->push_remove->action =
+		mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx,
+				recom_type, &hdr, bulk, flag);
+	if (!acts->push_remove->action)
+		goto err1;
+	acts->rule_acts[at->recom_off].action = acts->push_remove->action;
+	acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data;
+	acts->rule_acts[at->recom_off].ipv6_ext.offset = 0;
+	acts->push_remove_pos = at->recom_off;
+	if (!acts->push_remove->shared) {
+		act_data = __flow_hw_act_data_push_append(dev, acts,
+				RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+				recom_src, at->recom_off, push_size);
+		if (!act_data)
+			goto err;
+	}
+	return 0;
+err:
+	if (acts->push_remove->action)
+		mlx5dr_action_destroy(acts->push_remove->action);
+err1:
+	if (acts->push_remove) {
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
+	return -EINVAL;
+}
+
 /**
  * Translate rte_flow actions to DR action.
  *
@@ -1907,19 +2027,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	struct rte_flow_action *actions = at->actions;
 	struct rte_flow_action *masks = at->masks;
 	enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
+	enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data;
 	const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
-	uint16_t reformat_src = 0;
+	uint16_t reformat_src = 0, recom_src = 0;
 	uint8_t *encap_data = NULL, *encap_data_m = NULL;
-	size_t data_size = 0;
+	uint8_t *push_data = NULL, *push_data_m = NULL;
+	size_t data_size = 0, push_size = 0;
 	struct mlx5_hw_modify_header_action mhdr = { 0 };
 	bool actions_end = false;
 	uint32_t type;
 	bool reformat_used = false;
+	bool recom_used = false;
 	unsigned int of_vlan_offset;
 	uint16_t jump_pos;
 	uint32_t ct_idx;
@@ -2118,6 +2243,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			reformat_used = true;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+			    !priv->sh->srh_flex_parser.flex.mapnum) {
+				DRV_LOG(ERR, "SRv6 anchor is not supported.");
+				goto err;
+			}
+			MLX5_ASSERT(!recom_used && !recom_type);
+			recom_used = true;
+			recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+			ipv6_ext_data =
+				(const struct rte_flow_action_ipv6_ext_push *)masks->conf;
+			if (ipv6_ext_data)
+				push_data_m = ipv6_ext_data->data;
+			ipv6_ext_data =
+				(const struct rte_flow_action_ipv6_ext_push *)actions->conf;
+			if (ipv6_ext_data) {
+				push_data = ipv6_ext_data->data;
+				push_size = ipv6_ext_data->size;
+			}
+			recom_src = src_pos;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+			    !priv->sh->srh_flex_parser.flex.mapnum) {
+				DRV_LOG(ERR, "SRv6 anchor is not supported.");
+				goto err;
+			}
+			recom_used = true;
+			recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+			break;
 		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 			flow_hw_translate_group(dev, cfg, attr->group,
 						&target_grp, error);
@@ -2265,6 +2420,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 		if (ret)
 			goto err;
 	}
+	if (recom_used) {
+		MLX5_ASSERT(at->recom_off != UINT16_MAX);
+		ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data,
+						    push_data_m, push_size, recom_src,
+						    recom_type);
+		if (ret)
+			goto err;
+	}
 	return 0;
 err:
 	err = rte_errno;
@@ -2662,11 +2825,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
 	const struct rte_flow_action *action;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_action_ipv6_ext_push *ipv6_push;
 	const struct rte_flow_item *enc_item = NULL;
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
 	uint8_t *buf = job->encap_data;
+	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
 		.ingress = 1,
 	};
@@ -2797,6 +2962,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(raw_encap_data->size ==
 				    act_data->encap.len);
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			ipv6_push =
+				(const struct rte_flow_action_ipv6_ext_push *)action->conf;
+			rte_memcpy((void *)push_buf, ipv6_push->data,
+				   act_data->ipv6_ext.len);
+			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
+			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
@@ -2953,6 +3125,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			job->flow->res_idx - 1;
 		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
 	}
+	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
+			job->flow->res_idx - 1;
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+	}
 	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
 		job->flow->cnt_id = hw_acts->cnt_id;
 	return 0;
@@ -4731,6 +4908,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Validate ipv6_ext_push action.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] action
+ *   Pointer to the action to validate.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused,
+				      const struct rte_flow_action *action,
+				      struct rte_flow_error *error)
+{
+	const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf;
+
+	if (!raw_push_data || !raw_push_data->size || !raw_push_data->data)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "invalid ipv6_ext_push data");
+	if (raw_push_data->type != IPPROTO_ROUTING ||
+	    raw_push_data->size > MLX5_PUSH_MAX_LEN)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Unsupported ipv6_ext_push type or length");
+	return 0;
+}
+
 /**
  * Validate raw_encap action.
  *
@@ -4957,6 +5166,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 #endif
 	uint16_t i;
 	int ret;
+	const struct rte_flow_action_ipv6_ext_remove *remove_data;
 
 	/* FDB actions are only valid to proxy port. */
 	if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master))
@@ -5053,6 +5263,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			/* TODO: Validation logic */
 			action_flags |= MLX5_FLOW_ACTION_DECAP;
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error);
+			if (ret < 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			remove_data = action->conf;
+			/* Remove action must be shared. */
+			if (remove_data->type != IPPROTO_ROUTING || !mask) {
+				DRV_LOG(ERR, "Only supports shared IPv6 routing remove");
+				return -EINVAL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE;
+			break;
 		case RTE_FLOW_ACTION_TYPE_METER:
 			/* TODO: Validation logic */
 			action_flags |= MLX5_FLOW_ACTION_METER;
@@ -5160,6 +5385,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN,
 	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN,
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
+	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 };
 
 static inline void
@@ -5251,6 +5478,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 /**
  * Create DR action template based on a provided sequence of flow actions.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
  * @param[in] at
  *   Pointer to flow actions template to be updated.
 *
@@ -5259,7 +5488,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 *   NULL otherwise.
 */
 static struct mlx5dr_action_template *
-flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
+flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
+				   struct rte_flow_actions_template *at)
 {
 	struct mlx5dr_action_template *dr_template;
 	enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST };
@@ -5268,8 +5498,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 	enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
 	uint16_t reformat_off = UINT16_MAX;
 	uint16_t mhdr_off = UINT16_MAX;
+	uint16_t recom_off = UINT16_MAX;
 	uint16_t cnt_off = UINT16_MAX;
+	enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
 	int ret;
+
 	for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
 		const struct rte_flow_action_raw_encap *raw_encap_data;
 		size_t data_size;
@@ -5301,6 +5534,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			reformat_off = curr_off++;
 			reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type];
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			MLX5_ASSERT(recom_off == UINT16_MAX);
+			recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+			recom_off = curr_off++;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			MLX5_ASSERT(recom_off == UINT16_MAX);
+			recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+			recom_off = curr_off++;
+			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data = at->actions[i].conf;
 			data_size = raw_encap_data->size;
@@ -5373,11 +5616,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 		at->reformat_off = reformat_off;
 		action_types[reformat_off] = reformat_act_type;
 	}
+	if (recom_off != UINT16_MAX) {
+		at->recom_off = recom_off;
+		action_types[recom_off] = recom_type;
+	}
 	dr_template = mlx5dr_action_template_create(action_types);
-	if (dr_template)
+	if (dr_template) {
 		at->dr_actions_num = curr_off;
-	else
+	} else {
 		DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno);
+		return NULL;
+	}
+	/* Create srh flex parser for remove anchor. */
+	if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT ||
+	     recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) &&
+	    mlx5_alloc_srh_flex_parser(dev)) {
+		DRV_LOG(ERR, "Failed to create srv6 flex parser");
+		claim_zero(mlx5dr_action_template_destroy(dr_template));
+		return NULL;
+	}
 	return dr_template;
 err_actions_num:
 	DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template",
@@ -5786,7 +6043,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 			break;
 		}
 	}
-	at->tmpl = flow_hw_dr_actions_template_create(at);
+	at->tmpl = flow_hw_dr_actions_template_create(dev, at);
 	if (!at->tmpl)
 		goto error;
 	at->action_flags = action_flags;
@@ -5823,6 +6080,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 				 struct rte_flow_actions_template *template,
 				 struct rte_flow_error *error __rte_unused)
 {
+	uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE |
+			MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+
 	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
 		DRV_LOG(WARNING, "Action template %p is still in use.",
 			(void *)template);
@@ -5831,6 +6091,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 				   NULL,
 				   "action template in using");
 	}
+	if (template->action_flags & flag)
+		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	if (template->tmpl)
@@ -8398,6 +8660,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
 			    sizeof(struct mlx5_hw_q_job) +
 			    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+			    sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
 			    sizeof(struct mlx5_modification_cmd) *
 			    MLX5_MHDR_MAX_CMD +
 			    sizeof(struct rte_flow_item) *
@@ -8413,7 +8676,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	for (i = 0; i < nb_q_updated; i++) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		uint8_t *encap = NULL;
+		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
 		struct rte_flow_hw *upd_flow = NULL;
@@ -8433,13 +8696,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			&job[_queue_attr[i]->size];
 		encap = (uint8_t *)
 			 &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
-		items = (struct rte_flow_item *)
+		push = (uint8_t *)
 			 &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+		items = (struct rte_flow_item *)
+			 &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
 		upd_flow = (struct rte_flow_hw *)
 			&items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
 			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
 			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
+			job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
-- 
2.39.2