From: Maayan Kashani
To: dev@dpdk.org
Cc: Yevgeny Kliteynik, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v3 3/3] net/mlx5/hws: add support for backward-compatible API
Date: Mon, 3 Jun 2024 13:43:56 +0300
Message-ID: <20240603104357.9437-3-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20240603104357.9437-1-mkashani@nvidia.com>
References: <20240602102601.196750-1-mkashani@nvidia.com>
 <20240603104357.9437-1-mkashani@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Yevgeny Kliteynik

HWS API is very different from SWS and has limitations:
 - Queue-based async insertion/deletion
 - Doesn't handle complex rules
 - Requires the user to specify a fixed matcher size - no rehash
 - Requires the user to specify the action combination on matcher creation
 - Matching fields are passed using items and not FW/PRM format

These problems need to be solved in order to support HWS with the
old rte_flow API.

This patch adds a backward-compatible (BWC) HWS API to allow the use of
HWS in rte_flow and to support the existing template-based API alongside
the rte_flow API.
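For illustration, a minimal usage sketch of the new entry points (summarized
in the next paragraph) is shown below. It is not part of the patch: the table
and the single action are assumed to have been created earlier with the
existing mlx5dr API on a context opened with the new 'bwc' attribute, and the
match pattern is an arbitrary example.

#include <rte_flow.h>
#include <rte_errno.h>
#include "mlx5dr.h"

/* Illustrative only: 'tbl' and 'action' come from the regular mlx5dr API. */
static int bwc_insert_example(struct mlx5dr_table *tbl,
			      struct mlx5dr_action *action)
{
	struct rte_flow_item_eth eth_m = { .hdr.ether_type = RTE_BE16(0xffff) };
	struct rte_flow_item_eth eth_v = {
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_v, .mask = &eth_m },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct mlx5dr_rule_action rule_actions[] = {
		{ .action = action },
		{ .action = NULL }, /* terminates the action list */
	};
	struct mlx5dr_bwc_matcher *bwc_matcher;
	struct mlx5dr_bwc_rule *bwc_rule;
	int ret;

	/* Creates the match template, a dummy action template and a
	 * minimal-size matcher; the matcher is resized internally as needed.
	 */
	bwc_matcher = mlx5dr_bwc_matcher_create(tbl, 0, items);
	if (!bwc_matcher)
		return -rte_errno;

	/* Blocking call: returns only after the completion for this rule
	 * was polled on the dedicated BWC queues.
	 */
	bwc_rule = mlx5dr_bwc_rule_create(bwc_matcher, items, rule_actions);
	if (!bwc_rule) {
		mlx5dr_bwc_matcher_destroy(bwc_matcher);
		return -rte_errno;
	}

	/* Teardown is synchronous as well */
	ret = mlx5dr_bwc_rule_destroy(bwc_rule);
	ret |= mlx5dr_bwc_matcher_destroy(bwc_matcher);
	return ret;
}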
The new BWC API mainly comprises the following four functions:
 - mlx5dr_bwc_matcher_create
 - mlx5dr_bwc_matcher_destroy
 - mlx5dr_bwc_rule_create
 - mlx5dr_bwc_rule_destroy

Notes:
 - To enable BWC API support, turn on the 'bwc' flag in the context
   attributes when calling mlx5dr_context_open()
 - BWC functions use their own queues
 - BWC functions provide a synchronous API - they add/remove a rule
   and poll for completion
 - BWC functions handle all the required matcher and rule allocations
 - For non-root tables, BWC functions use the matcher resize and
   matcher attach action template features

Signed-off-by: Yevgeny Kliteynik
---
 drivers/net/mlx5/hws/meson.build       |   1 +
 drivers/net/mlx5/hws/mlx5dr.h          |  75 +++
 drivers/net/mlx5/hws/mlx5dr_bwc.c      | 898 +++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_bwc.h      |  34 +
 drivers/net/mlx5/hws/mlx5dr_context.c  |   3 +
 drivers/net/mlx5/hws/mlx5dr_context.h  |   7 +
 drivers/net/mlx5/hws/mlx5dr_internal.h |   1 +
 drivers/net/mlx5/hws/mlx5dr_send.c     |  43 +-
 8 files changed, 1061 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_bwc.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_bwc.h

diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build
index bbcc628557..859a87fb85 100644
--- a/drivers/net/mlx5/hws/meson.build
+++ b/drivers/net/mlx5/hws/meson.build
@@ -20,4 +20,5 @@ sources += files(
         'mlx5dr_debug.c',
         'mlx5dr_pat_arg.c',
         'mlx5dr_crc32.c',
+        'mlx5dr_bwc.c',
 )
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 5ceb1a7b4b..0fe39e9c76 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -106,6 +106,7 @@ struct mlx5dr_context_attr {
 	struct ibv_pd *pd;
 	/* Optional other ctx for resources allocation, all objects will be created on it */
 	struct ibv_context *shared_ibv_ctx;
+	bool bwc; /* add support for backward compatible API*/
 };
 
 struct mlx5dr_table_attr {
@@ -992,4 +993,78 @@ int mlx5dr_crc_encap_entropy_hash_calc(struct mlx5dr_context *ctx,
 				       uint8_t entropy_res[],
 				       enum mlx5dr_crc_encap_entropy_hash_size res_size);
 
+struct mlx5dr_bwc_matcher;
+struct mlx5dr_bwc_rule;
+
+/* Create a new BWC direct rule matcher.
+ * This function does the following:
+ *  - creates match template based on flow items
+ *  - creates an empty action template
+ *  - creates a usual mlx5dr_matcher with these mt and at, setting
+ *    its size to minimal
+ * Notes:
+ *  - table->ctx must have BWC support
+ *  - complex rules are not supported
+ *
+ * @param[in] table
+ *	The table in which the new matcher will be opened
+ * @param[in] priority
+ *	Priority for this BWC matcher
+ * @param[in] flow_items
+ *	Array of flow items that serve as basis for match and action templates
+ * @return pointer to mlx5dr_bwc_matcher on success or NULL otherwise.
+ */
+struct mlx5dr_bwc_matcher *
+mlx5dr_bwc_matcher_create(struct mlx5dr_table *table,
+			  uint32_t priority,
+			  const struct rte_flow_item flow_items[]);
+
+/* Destroy BWC direct rule matcher.
+ *
+ * @param[in] bwc_matcher
+ *	Matcher to destroy
+ * @return zero on success, non zero otherwise
+ */
+int mlx5dr_bwc_matcher_destroy(struct mlx5dr_bwc_matcher *bwc_matcher);
+
+/* Create a new BWC rule.
+ * Unlike the usual rule creation function, this one is blocking: when the
+ * function returns, the rule is written to its place (no need to poll).
+ * This function does the following: + * - finds matching action template based on the provided rule_actions, or + * creates new action template if matching action template doesn't exist + * - updates corresponding BWC matcher stats + * - if needed, the function performs rehash: + * - creates a new matcher based on mt, at, new_sz + * - moves all the existing matcher rules to the new matcher + * - removes the old matcher + * - inserts new rule + * - polls till completion is received + * Notes: + * - matcher->tbl->ctx must have BWC support + * - separate BWC ctx queues are used + * + * @param[in] bwc_matcher + * The BWC matcher in which the new rule will be created. + * @param[in] flow_items + * Flow items to be used for the value matching + * @param[in] rule_actions + * Rule action to be executed on match + * @param[in, out] rule_handle + * A valid rule handle. The handle doesn't require any initialization + * @return valid BWC rule handle on success, NULL otherwise + */ +struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + struct mlx5dr_rule_action rule_actions[]); + +/* Destroy BWC direct rule. + * + * @param[in] bwc_rule + * Rule to destroy + * @return zero on success, non zero otherwise + */ +int mlx5dr_bwc_rule_destroy(struct mlx5dr_bwc_rule *bwc_rule); + #endif diff --git a/drivers/net/mlx5/hws/mlx5dr_bwc.c b/drivers/net/mlx5/hws/mlx5dr_bwc.c new file mode 100644 index 0000000000..eef3053ee0 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_bwc.c @@ -0,0 +1,898 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static uint16_t mlx5dr_bwc_queues(struct mlx5dr_context *ctx) +{ + return (ctx->queues - 1) / 2; +} + +static uint16_t +mlx5dr_bwc_gen_queue_idx(struct mlx5dr_context *ctx) +{ + /* assign random queue */ + return rand() % mlx5dr_bwc_queues(ctx); +} + +static uint16_t +mlx5dr_bwc_get_queue_id(struct mlx5dr_context *ctx, uint16_t idx) +{ + return idx + mlx5dr_bwc_queues(ctx); +} + +static rte_spinlock_t * +mlx5dr_bwc_get_queue_lock(struct mlx5dr_context *ctx, uint16_t idx) +{ + return &ctx->bwc_send_queue_locks[idx]; +} + +static void mlx5dr_bwc_lock_all_queues(struct mlx5dr_context *ctx) +{ + uint16_t bwc_queues = mlx5dr_bwc_queues(ctx); + rte_spinlock_t *queue_lock; + int i; + + for (i = 0; i < bwc_queues; i++) { + queue_lock = mlx5dr_bwc_get_queue_lock(ctx, i); + rte_spinlock_lock(queue_lock); + } +} + +static void mlx5dr_bwc_unlock_all_queues(struct mlx5dr_context *ctx) +{ + uint16_t bwc_queues = mlx5dr_bwc_queues(ctx); + rte_spinlock_t *queue_lock; + int i; + + for (i = 0; i < bwc_queues; i++) { + queue_lock = mlx5dr_bwc_get_queue_lock(ctx, i); + rte_spinlock_unlock(queue_lock); + } +} + +static void mlx5dr_bwc_matcher_init_attr(struct mlx5dr_matcher_attr *attr, + uint32_t priority, + uint8_t size_log, + bool is_root) +{ + memset(attr, 0, sizeof(*attr)); + + attr->priority = priority; + attr->optimize_using_rule_idx = 0; + attr->mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE; + attr->optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY; + attr->insert_mode = MLX5DR_MATCHER_INSERT_BY_HASH; + attr->distribute_mode = MLX5DR_MATCHER_DISTRIBUTE_BY_HASH; + attr->rule.num_log = size_log; + + if (!is_root) { + attr->resizable = true; + attr->max_num_of_at_attach = MLX5DR_BWC_MATCHER_ATTACH_AT_NUM; + } +} + +struct mlx5dr_bwc_matcher * +mlx5dr_bwc_matcher_create(struct mlx5dr_table *table, + uint32_t priority, + const struct 
rte_flow_item flow_items[]) +{ + enum mlx5dr_action_type init_action_types[1] = { MLX5DR_ACTION_TYP_LAST }; + uint16_t bwc_queues = mlx5dr_bwc_queues(table->ctx); + struct mlx5dr_bwc_matcher *bwc_matcher; + struct mlx5dr_matcher_attr attr = {0}; + int i; + + if (!mlx5dr_context_bwc_supported(table->ctx)) { + rte_errno = EINVAL; + DR_LOG(ERR, "BWC rule: Context created w/o BWC API compatibility"); + return NULL; + } + + bwc_matcher = simple_calloc(1, sizeof(*bwc_matcher)); + if (!bwc_matcher) { + rte_errno = ENOMEM; + return NULL; + } + + bwc_matcher->rules = simple_calloc(bwc_queues, sizeof(*bwc_matcher->rules)); + if (!bwc_matcher->rules) { + rte_errno = ENOMEM; + goto free_bwc_matcher; + } + + for (i = 0; i < bwc_queues; i++) + LIST_INIT(&bwc_matcher->rules[i]); + + mlx5dr_bwc_matcher_init_attr(&attr, + priority, + MLX5DR_BWC_MATCHER_INIT_SIZE_LOG, + mlx5dr_table_is_root(table)); + + bwc_matcher->mt = mlx5dr_match_template_create(flow_items, + MLX5DR_MATCH_TEMPLATE_FLAG_NONE); + if (!bwc_matcher->mt) { + rte_errno = EINVAL; + goto free_bwc_matcher_rules; + } + + bwc_matcher->priority = priority; + bwc_matcher->size_log = MLX5DR_BWC_MATCHER_INIT_SIZE_LOG; + + /* create dummy action template */ + bwc_matcher->at[0] = mlx5dr_action_template_create(init_action_types, 0); + bwc_matcher->num_of_at = 1; + + bwc_matcher->matcher = mlx5dr_matcher_create(table, + &bwc_matcher->mt, 1, + &bwc_matcher->at[0], + bwc_matcher->num_of_at, + &attr); + if (!bwc_matcher->matcher) { + rte_errno = EINVAL; + goto free_at; + } + + return bwc_matcher; + +free_at: + mlx5dr_action_template_destroy(bwc_matcher->at[0]); + mlx5dr_match_template_destroy(bwc_matcher->mt); +free_bwc_matcher_rules: + simple_free(bwc_matcher->rules); +free_bwc_matcher: + simple_free(bwc_matcher); + + return NULL; +} + +int mlx5dr_bwc_matcher_destroy(struct mlx5dr_bwc_matcher *bwc_matcher) +{ + int i; + + if (bwc_matcher->num_of_rules) + DR_LOG(ERR, "BWC matcher destroy: matcher still has %d rules", + bwc_matcher->num_of_rules); + + mlx5dr_matcher_destroy(bwc_matcher->matcher); + bwc_matcher->matcher = NULL; + + for (i = 0; i < bwc_matcher->num_of_at; i++) + mlx5dr_action_template_destroy(bwc_matcher->at[i]); + + mlx5dr_match_template_destroy(bwc_matcher->mt); + simple_free(bwc_matcher->rules); + simple_free(bwc_matcher); + + return 0; +} + +static int +mlx5dr_bwc_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t *pending_rules, + bool drain) +{ + bool queue_full = *pending_rules == MLX5DR_BWC_MATCHER_REHASH_QUEUE_SZ; + bool got_comp = *pending_rules >= MLX5DR_BWC_MATCHER_REHASH_BURST_TH; + struct rte_flow_op_result comp[MLX5DR_BWC_MATCHER_REHASH_BURST_TH]; + int ret; + int i; + + /* Check if there are any completions at all */ + if (!got_comp && !drain) + return 0; + + while (queue_full || ((got_comp || drain) && *pending_rules)) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, + MLX5DR_BWC_MATCHER_REHASH_BURST_TH); + if (unlikely(ret < 0)) { + DR_LOG(ERR, "Rehash error: polling queue %d returned %d\n", + queue_id, ret); + return -EINVAL; + } + + if (ret) { + (*pending_rules) -= ret; + for (i = 0; i < ret; i++) { + if (unlikely(comp[i].status != RTE_FLOW_OP_SUCCESS)) + DR_LOG(ERR, + "Rehash error: polling queue %d returned completion with error\n", + queue_id); + } + queue_full = false; + } + + got_comp = !!ret; + } + + return 0; +} + +static void +mlx5dr_bwc_rule_fill_attr(struct mlx5dr_bwc_matcher *bwc_matcher, + uint16_t bwc_queue_idx, + struct mlx5dr_rule_attr *rule_attr) +{ + struct mlx5dr_context *ctx = 
bwc_matcher->matcher->tbl->ctx; + + /* no use of INSERT_BY_INDEX in bwc rule */ + rule_attr->rule_idx = 0; + + /* notify HW at each rule insertion/deletion */ + rule_attr->burst = 0; + + /* We don't need user data, but the API requires it to exist */ + rule_attr->user_data = (void *)0xFACADE; + + rule_attr->queue_id = mlx5dr_bwc_get_queue_id(ctx, bwc_queue_idx); +} + +static struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_alloc(void) +{ + struct mlx5dr_bwc_rule *bwc_rule; + + bwc_rule = simple_calloc(1, sizeof(*bwc_rule)); + if (unlikely(!bwc_rule)) + goto out_err; + + bwc_rule->rule = simple_calloc(1, sizeof(*bwc_rule->rule)); + if (unlikely(!bwc_rule->rule)) + goto free_rule; + + return bwc_rule; + +free_rule: + simple_free(bwc_rule); +out_err: + rte_errno = ENOMEM; + return NULL; +} + +static void +mlx5dr_bwc_rule_free(struct mlx5dr_bwc_rule *bwc_rule) +{ + if (likely(bwc_rule->rule)) + simple_free(bwc_rule->rule); + simple_free(bwc_rule); +} + +static void +mlx5dr_bwc_rule_list_add(struct mlx5dr_bwc_rule *bwc_rule, uint16_t idx) +{ + struct mlx5dr_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher; + + rte_atomic_fetch_add_explicit(&bwc_matcher->num_of_rules, 1, rte_memory_order_relaxed); + bwc_rule->bwc_queue_idx = idx; + LIST_INSERT_HEAD(&bwc_matcher->rules[idx], bwc_rule, next); +} + +static void mlx5dr_bwc_rule_list_remove(struct mlx5dr_bwc_rule *bwc_rule) +{ + struct mlx5dr_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher; + + rte_atomic_fetch_sub_explicit(&bwc_matcher->num_of_rules, 1, rte_memory_order_relaxed); + LIST_REMOVE(bwc_rule, next); +} + +static int +mlx5dr_bwc_rule_destroy_hws_async(struct mlx5dr_bwc_rule *bwc_rule, + struct mlx5dr_rule_attr *attr) +{ + return mlx5dr_rule_destroy(bwc_rule->rule, attr); +} + +static int +mlx5dr_bwc_rule_destroy_hws_sync(struct mlx5dr_bwc_rule *bwc_rule, + struct mlx5dr_rule_attr *rule_attr) +{ + struct mlx5dr_context *ctx = bwc_rule->bwc_matcher->matcher->tbl->ctx; + struct rte_flow_op_result completion; + int ret; + + ret = mlx5dr_bwc_rule_destroy_hws_async(bwc_rule, rule_attr); + if (unlikely(ret)) + return ret; + + do { + ret = mlx5dr_send_queue_poll(ctx, rule_attr->queue_id, &completion, 1); + } while (ret != 1); + + if (unlikely(completion.status != RTE_FLOW_OP_SUCCESS || + (bwc_rule->rule->status != MLX5DR_RULE_STATUS_DELETED && + bwc_rule->rule->status != MLX5DR_RULE_STATUS_DELETING))) { + DR_LOG(ERR, "Failed destroying BWC rule: completion %d, rule status %d", + completion.status, bwc_rule->rule->status); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_bwc_rule_destroy_hws(struct mlx5dr_bwc_rule *bwc_rule) +{ + struct mlx5dr_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher; + struct mlx5dr_context *ctx = bwc_matcher->matcher->tbl->ctx; + uint16_t idx = bwc_rule->bwc_queue_idx; + struct mlx5dr_rule_attr attr; + rte_spinlock_t *queue_lock; + int ret; + + mlx5dr_bwc_rule_fill_attr(bwc_matcher, idx, &attr); + + queue_lock = mlx5dr_bwc_get_queue_lock(ctx, idx); + + rte_spinlock_lock(queue_lock); + + ret = mlx5dr_bwc_rule_destroy_hws_sync(bwc_rule, &attr); + mlx5dr_bwc_rule_list_remove(bwc_rule); + + rte_spinlock_unlock(queue_lock); + + mlx5dr_bwc_rule_free(bwc_rule); + + return ret; +} + +static int mlx5dr_bwc_rule_destroy_root(struct mlx5dr_bwc_rule *bwc_rule) +{ + int ret; + + ret = mlx5dr_rule_destroy_root_no_comp(bwc_rule->rule); + + mlx5dr_bwc_rule_free(bwc_rule); + + return ret; +} + +int mlx5dr_bwc_rule_destroy(struct mlx5dr_bwc_rule *bwc_rule) +{ + if 
(unlikely(mlx5dr_table_is_root(bwc_rule->bwc_matcher->matcher->tbl))) + return mlx5dr_bwc_rule_destroy_root(bwc_rule); + + return mlx5dr_bwc_rule_destroy_hws(bwc_rule); +} + +static struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create_hws_async(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *rule_attr) +{ + struct mlx5dr_bwc_rule *bwc_rule; + int ret; + + bwc_rule = mlx5dr_bwc_rule_alloc(); + if (unlikely(!bwc_rule)) + return NULL; + + bwc_rule->bwc_matcher = bwc_matcher; + + ret = mlx5dr_rule_create(bwc_matcher->matcher, + 0, /* only one match template supported */ + flow_items, + at_idx, + rule_actions, + rule_attr, + bwc_rule->rule); + + if (unlikely(ret)) { + mlx5dr_bwc_rule_free(bwc_rule); + rte_errno = EINVAL; + return NULL; + } + + return bwc_rule; +} + +static struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create_root_sync(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + uint8_t num_actions, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_bwc_rule *bwc_rule; + int ret; + + bwc_rule = mlx5dr_bwc_rule_alloc(); + if (unlikely(!bwc_rule)) { + rte_errno = ENOMEM; + return NULL; + } + + bwc_rule->bwc_matcher = bwc_matcher; + bwc_rule->rule->matcher = bwc_matcher->matcher; + + ret = mlx5dr_rule_create_root_no_comp(bwc_rule->rule, + flow_items, + num_actions, + rule_actions); + if (unlikely(ret)) { + mlx5dr_bwc_rule_free(bwc_rule); + rte_errno = EINVAL; + return NULL; + } + + return bwc_rule; +} + +static struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create_hws_sync(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *rule_attr) + +{ + struct mlx5dr_context *ctx = bwc_matcher->matcher->tbl->ctx; + struct rte_flow_op_result completion; + struct mlx5dr_bwc_rule *bwc_rule; + int ret; + + bwc_rule = mlx5dr_bwc_rule_create_hws_async(bwc_matcher, flow_items, + at_idx, rule_actions, + rule_attr); + if (unlikely(!bwc_rule)) + return NULL; + + do { + ret = mlx5dr_send_queue_poll(ctx, rule_attr->queue_id, &completion, 1); + } while (ret != 1); + + if (unlikely(completion.status != RTE_FLOW_OP_SUCCESS || + (bwc_rule->rule->status != MLX5DR_RULE_STATUS_CREATED && + bwc_rule->rule->status != MLX5DR_RULE_STATUS_CREATING))) { + DR_LOG(ERR, "Failed creating BWC rule: completion %d, rule status %d", + completion.status, bwc_rule->rule->status); + mlx5dr_bwc_rule_free(bwc_rule); + return NULL; + } + + return bwc_rule; +} + +static bool +mlx5dr_bwc_matcher_size_maxed_out(struct mlx5dr_bwc_matcher *bwc_matcher) +{ + struct mlx5dr_cmd_query_caps *caps = bwc_matcher->matcher->tbl->ctx->caps; + + return bwc_matcher->size_log + MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH >= + caps->ste_alloc_log_max - 1; +} + +static bool +mlx5dr_bwc_matcher_rehash_size_needed(struct mlx5dr_bwc_matcher *bwc_matcher, + uint32_t num_of_rules) +{ + /* size-based rehash for root table is kernel's responsibility */ + if (unlikely(mlx5dr_table_is_root(bwc_matcher->matcher->tbl))) + return false; + + if (unlikely(mlx5dr_bwc_matcher_size_maxed_out(bwc_matcher))) + return false; + + if (unlikely((num_of_rules * 100 / MLX5DR_BWC_MATCHER_REHASH_PERCENT_TH) >= + (1UL << bwc_matcher->size_log))) + return true; + + return false; +} + +static void +mlx5dr_bwc_rule_actions_to_action_types(struct mlx5dr_rule_action rule_actions[], + enum mlx5dr_action_type action_types[]) +{ + 
int i = 0; + + for (i = 0; + rule_actions[i].action && (rule_actions[i].action->type != MLX5DR_ACTION_TYP_LAST); + i++) { + action_types[i] = (enum mlx5dr_action_type)rule_actions[i].action->type; + } + + action_types[i] = MLX5DR_ACTION_TYP_LAST; +} + +static int +mlx5dr_bwc_rule_actions_num(struct mlx5dr_rule_action rule_actions[]) +{ + int i = 0; + + while (rule_actions[i].action && + (rule_actions[i].action->type != MLX5DR_ACTION_TYP_LAST)) + i++; + + return i; +} + +static int +mlx5dr_bwc_matcher_extend_at(struct mlx5dr_bwc_matcher *bwc_matcher, + struct mlx5dr_rule_action rule_actions[]) +{ + enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS]; + + mlx5dr_bwc_rule_actions_to_action_types(rule_actions, action_types); + + bwc_matcher->at[bwc_matcher->num_of_at] = + mlx5dr_action_template_create(action_types, 0); + + if (unlikely(!bwc_matcher->at[bwc_matcher->num_of_at])) { + rte_errno = ENOMEM; + return rte_errno; + } + + bwc_matcher->num_of_at++; + return 0; +} + +static int +mlx5dr_bwc_matcher_extend_size(struct mlx5dr_bwc_matcher *bwc_matcher) +{ + struct mlx5dr_cmd_query_caps *caps = bwc_matcher->matcher->tbl->ctx->caps; + + if (unlikely(mlx5dr_bwc_matcher_size_maxed_out(bwc_matcher))) { + DR_LOG(ERR, "Can't resize matcher: depth exceeds limit %d", + caps->rtc_log_depth_max); + return -ENOMEM; + } + + bwc_matcher->size_log = + RTE_MIN(bwc_matcher->size_log + MLX5DR_BWC_MATCHER_SIZE_LOG_STEP, + caps->ste_alloc_log_max - MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH); + + return 0; +} + +static int +mlx5dr_bwc_matcher_find_at(struct mlx5dr_bwc_matcher *bwc_matcher, + struct mlx5dr_rule_action rule_actions[]) +{ + enum mlx5dr_action_type *action_type_arr; + int i, j; + + /* start from index 1 - first action template is a dummy */ + for (i = 1; i < bwc_matcher->num_of_at; i++) { + j = 0; + action_type_arr = bwc_matcher->at[i]->action_type_arr; + + while (rule_actions[j].action && + rule_actions[j].action->type != MLX5DR_ACTION_TYP_LAST) { + if (action_type_arr[j] != rule_actions[j].action->type) + break; + j++; + } + + if (action_type_arr[j] == MLX5DR_ACTION_TYP_LAST && + (!rule_actions[j].action || + rule_actions[j].action->type == MLX5DR_ACTION_TYP_LAST)) + return i; + } + + return -1; +} + +static int +mlx5dr_bwc_matcher_move_all(struct mlx5dr_bwc_matcher *bwc_matcher) +{ + struct mlx5dr_context *ctx = bwc_matcher->matcher->tbl->ctx; + uint16_t bwc_queues = mlx5dr_bwc_queues(ctx); + struct mlx5dr_bwc_rule **bwc_rules; + struct mlx5dr_rule_attr rule_attr; + uint32_t *pending_rules; + bool all_done; + int i, j, ret; + + if (mlx5dr_table_is_root(bwc_matcher->matcher->tbl)) { + rte_errno = EINVAL; + return -rte_errno; + } + + mlx5dr_bwc_rule_fill_attr(bwc_matcher, 0, &rule_attr); + + pending_rules = simple_calloc(bwc_queues, sizeof(*pending_rules)); + if (!pending_rules) { + rte_errno = ENOMEM; + return -rte_errno; + } + + bwc_rules = simple_calloc(bwc_queues, sizeof(*bwc_rules)); + if (!bwc_rules) { + rte_errno = ENOMEM; + goto free_pending_rules; + } + + for (i = 0; i < bwc_queues; i++) { + if (LIST_EMPTY(&bwc_matcher->rules[i])) + bwc_rules[i] = NULL; + else + bwc_rules[i] = LIST_FIRST(&bwc_matcher->rules[i]); + } + + do { + all_done = true; + + for (i = 0; i < bwc_queues; i++) { + rule_attr.queue_id = mlx5dr_bwc_get_queue_id(ctx, i); + + for (j = 0; j < MLX5DR_BWC_MATCHER_REHASH_BURST_TH && bwc_rules[i]; j++) { + rule_attr.burst = !!((j + 1) % MLX5DR_BWC_MATCHER_REHASH_BURST_TH); + ret = mlx5dr_matcher_resize_rule_move(bwc_matcher->matcher, + bwc_rules[i]->rule, + &rule_attr); + if (ret) 
{ + DR_LOG(ERR, "Moving BWC rule failed during rehash - %d", + ret); + rte_errno = ENOMEM; + goto free_bwc_rules; + } + + all_done = false; + pending_rules[i]++; + bwc_rules[i] = LIST_NEXT(bwc_rules[i], next); + + mlx5dr_bwc_queue_poll(ctx, rule_attr.queue_id, + &pending_rules[i], false); + } + } + } while (!all_done); + + /* drain all the bwc queues */ + for (i = 0; i < bwc_queues; i++) { + if (pending_rules[i]) { + uint16_t queue_id = mlx5dr_bwc_get_queue_id(ctx, i); + mlx5dr_send_engine_flush_queue(&ctx->send_queue[queue_id]); + mlx5dr_bwc_queue_poll(ctx, queue_id, + &pending_rules[i], true); + } + } + + rte_errno = 0; + +free_bwc_rules: + simple_free(bwc_rules); +free_pending_rules: + simple_free(pending_rules); + + return -rte_errno; +} + +static int +mlx5dr_bwc_matcher_move(struct mlx5dr_bwc_matcher *bwc_matcher) +{ + struct mlx5dr_matcher_attr matcher_attr = {0}; + struct mlx5dr_matcher *old_matcher; + struct mlx5dr_matcher *new_matcher; + int ret; + + mlx5dr_bwc_matcher_init_attr(&matcher_attr, + bwc_matcher->priority, + bwc_matcher->size_log, + mlx5dr_table_is_root(bwc_matcher->matcher->tbl)); + + old_matcher = bwc_matcher->matcher; + new_matcher = mlx5dr_matcher_create(old_matcher->tbl, + &bwc_matcher->mt, 1, + bwc_matcher->at, + bwc_matcher->num_of_at, + &matcher_attr); + if (!new_matcher) { + DR_LOG(ERR, "Rehash error: matcher creation failed"); + return -ENOMEM; + } + + ret = mlx5dr_matcher_resize_set_target(old_matcher, new_matcher); + if (ret) { + DR_LOG(ERR, "Rehash error: failed setting resize target"); + return ret; + } + + ret = mlx5dr_bwc_matcher_move_all(bwc_matcher); + if (ret) { + DR_LOG(ERR, "Rehash error: moving rules failed"); + return -ENOMEM; + } + + bwc_matcher->matcher = new_matcher; + mlx5dr_matcher_destroy(old_matcher); + + return 0; +} + +static int +mlx5dr_bwc_matcher_rehash(struct mlx5dr_bwc_matcher *bwc_matcher, bool rehash_size) +{ + uint32_t num_of_rules; + int ret; + + /* If the current matcher size is already at its max size, we can't + * do the rehash. Skip it and try adding the rule again - perhaps + * there was some change. + */ + if (mlx5dr_bwc_matcher_size_maxed_out(bwc_matcher)) + return 0; + + /* It is possible that other rule has already performed rehash. + * Need to check again if we really need rehash. + * If the reason for rehash was size, but not any more - skip rehash. 
+ */ + num_of_rules = rte_atomic_load_explicit(&bwc_matcher->num_of_rules, + rte_memory_order_relaxed); + if (rehash_size && + !mlx5dr_bwc_matcher_rehash_size_needed(bwc_matcher, num_of_rules)) + return 0; + + /* Now we're done all the checking - do the rehash: + * - extend match RTC size + * - create new matcher + * - move all the rules to the new matcher + * - destroy the old matcher + */ + + ret = mlx5dr_bwc_matcher_extend_size(bwc_matcher); + if (ret) + return ret; + + return mlx5dr_bwc_matcher_move(bwc_matcher); +} + +static struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create_root(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_bwc_rule *bwc_rule; + + bwc_rule = mlx5dr_bwc_rule_create_root_sync(bwc_matcher, + flow_items, + mlx5dr_bwc_rule_actions_num(rule_actions), + rule_actions); + + if (unlikely(!bwc_rule)) + DR_LOG(ERR, "BWC rule: failed creating rule on root tbl"); + + return bwc_rule; +} + +static struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create_hws(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_context *ctx = bwc_matcher->matcher->tbl->ctx; + struct mlx5dr_bwc_rule *bwc_rule = NULL; + struct mlx5dr_rule_attr rule_attr; + rte_spinlock_t *queue_lock; + bool rehash_size = false; + uint32_t num_of_rules; + uint16_t idx; + int at_idx; + int ret; + + idx = mlx5dr_bwc_gen_queue_idx(ctx); + + mlx5dr_bwc_rule_fill_attr(bwc_matcher, idx, &rule_attr); + + queue_lock = mlx5dr_bwc_get_queue_lock(ctx, idx); + + rte_spinlock_lock(queue_lock); + + /* check if rehash needed due to missing action template */ + at_idx = mlx5dr_bwc_matcher_find_at(bwc_matcher, rule_actions); + if (at_idx < 0) { + /* we need to extend BWC matcher action templates array */ + rte_spinlock_unlock(queue_lock); + mlx5dr_bwc_lock_all_queues(ctx); + + ret = mlx5dr_bwc_matcher_extend_at(bwc_matcher, rule_actions); + if (unlikely(ret)) { + mlx5dr_bwc_unlock_all_queues(ctx); + rte_errno = EINVAL; + DR_LOG(ERR, "BWC rule: failed extending action template - %d", ret); + return NULL; + } + + /* action templates array was extended, we need the last idx */ + at_idx = bwc_matcher->num_of_at - 1; + + ret = mlx5dr_matcher_attach_at(bwc_matcher->matcher, + bwc_matcher->at[at_idx]); + if (unlikely(ret)) { + mlx5dr_action_template_destroy(bwc_matcher->at[at_idx]); + bwc_matcher->at[at_idx] = NULL; + bwc_matcher->num_of_at--; + + mlx5dr_bwc_unlock_all_queues(ctx); + DR_LOG(ERR, "BWC rule: failed attaching action template - %d", ret); + return NULL; + } + + mlx5dr_bwc_unlock_all_queues(ctx); + rte_spinlock_lock(queue_lock); + } + + /* check if number of rules require rehash */ + num_of_rules = rte_atomic_load_explicit(&bwc_matcher->num_of_rules, + rte_memory_order_relaxed); + if (mlx5dr_bwc_matcher_rehash_size_needed(bwc_matcher, num_of_rules)) { + rehash_size = true; + goto rehash; + } + + bwc_rule = mlx5dr_bwc_rule_create_hws_sync(bwc_matcher, + flow_items, + at_idx, + rule_actions, + &rule_attr); + + if (likely(bwc_rule)) { + mlx5dr_bwc_rule_list_add(bwc_rule, idx); + rte_spinlock_unlock(queue_lock); + return bwc_rule; /* rule inserted successfully */ + } + + /* At this point the rule wasn't added. + * It could be because there was collision, or some other problem. + * If we don't dive deeper than API, the only thing we know is that + * the status of completion is RTE_FLOW_OP_ERROR. + * Try rehash and insert rule again - last chance. 
+ */ + +rehash: + rte_spinlock_unlock(queue_lock); + + mlx5dr_bwc_lock_all_queues(ctx); + ret = mlx5dr_bwc_matcher_rehash(bwc_matcher, rehash_size); + mlx5dr_bwc_unlock_all_queues(ctx); + + if (ret) { + DR_LOG(ERR, "BWC rule insertion: rehash failed - %d", ret); + return NULL; + } + + /* Rehash done, but we still have that pesky rule to add */ + rte_spinlock_lock(queue_lock); + + bwc_rule = mlx5dr_bwc_rule_create_hws_sync(bwc_matcher, + flow_items, + at_idx, + rule_actions, + &rule_attr); + + if (unlikely(!bwc_rule)) { + rte_spinlock_unlock(queue_lock); + DR_LOG(ERR, "BWC rule insertion failed"); + return NULL; + } + + mlx5dr_bwc_rule_list_add(bwc_rule, idx); + rte_spinlock_unlock(queue_lock); + + return bwc_rule; +} + +struct mlx5dr_bwc_rule * +mlx5dr_bwc_rule_create(struct mlx5dr_bwc_matcher *bwc_matcher, + const struct rte_flow_item flow_items[], + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_context *ctx = bwc_matcher->matcher->tbl->ctx; + + if (unlikely(!mlx5dr_context_bwc_supported(ctx))) { + rte_errno = EINVAL; + DR_LOG(ERR, "BWC rule: Context created w/o BWC API compatibility"); + return NULL; + } + + if (unlikely(mlx5dr_table_is_root(bwc_matcher->matcher->tbl))) + return mlx5dr_bwc_rule_create_root(bwc_matcher, + flow_items, + rule_actions); + + return mlx5dr_bwc_rule_create_hws(bwc_matcher, + flow_items, + rule_actions); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_bwc.h b/drivers/net/mlx5/hws/mlx5dr_bwc.h new file mode 100644 index 0000000000..648443861b --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_bwc.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_BWC_H_ +#define MLX5DR_BWC_H_ + +#define MLX5DR_BWC_MATCHER_INIT_SIZE_LOG 1 +#define MLX5DR_BWC_MATCHER_SIZE_LOG_STEP 1 +#define MLX5DR_BWC_MATCHER_REHASH_PERCENT_TH 70 +#define MLX5DR_BWC_MATCHER_REHASH_BURST_TH 32 +#define MLX5DR_BWC_MATCHER_REHASH_QUEUE_SZ 256 +#define MLX5DR_BWC_MATCHER_ATTACH_AT_NUM 255 + +struct mlx5dr_bwc_matcher { + struct mlx5dr_matcher *matcher; + struct mlx5dr_match_template *mt; + struct mlx5dr_action_template *at[MLX5DR_BWC_MATCHER_ATTACH_AT_NUM]; + uint8_t num_of_at; + uint32_t priority; + uint8_t size_log; + RTE_ATOMIC(uint32_t)num_of_rules; /* atomically accessed */ + LIST_HEAD(rule_head, mlx5dr_bwc_rule) * rules; +}; + +struct mlx5dr_bwc_rule { + struct mlx5dr_bwc_matcher *bwc_matcher; + struct mlx5dr_rule *rule; + struct rte_flow_item *flow_items; + uint16_t bwc_queue_idx; + LIST_ENTRY(mlx5dr_bwc_rule) next; +}; + +#endif /* MLX5DR_BWC_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c index 7f120b3b1b..db5e72927a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_context.c +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -176,6 +176,9 @@ static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, if (ret) goto uninit_pd; + if (attr->bwc) + ctx->flags |= MLX5DR_CONTEXT_FLAG_BWC_SUPPORT; + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); if (ret) goto pools_uninit; diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h index f476c2308c..7678232ecc 100644 --- a/drivers/net/mlx5/hws/mlx5dr_context.h +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -8,6 +8,7 @@ enum mlx5dr_context_flags { MLX5DR_CONTEXT_FLAG_HWS_SUPPORT = 1 << 0, MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, + MLX5DR_CONTEXT_FLAG_BWC_SUPPORT = 1 << 2, }; enum mlx5dr_context_shared_stc_type { @@ -44,6 +45,7 @@ struct mlx5dr_context { enum 
mlx5dr_context_flags flags; struct mlx5dr_send_engine *send_queue; size_t queues; + rte_spinlock_t *bwc_send_queue_locks; LIST_HEAD(table_head, mlx5dr_table) head; }; @@ -52,6 +54,11 @@ static inline bool mlx5dr_context_shared_gvmi_used(struct mlx5dr_context *ctx) return ctx->local_ibv_ctx ? true : false; } +static inline bool mlx5dr_context_bwc_supported(struct mlx5dr_context *ctx) +{ + return ctx->flags & MLX5DR_CONTEXT_FLAG_BWC_SUPPORT; +} + static inline struct ibv_context * mlx5dr_context_get_local_ibv(struct mlx5dr_context *ctx) { diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h index b9efdc4a9a..2abc516b5e 100644 --- a/drivers/net/mlx5/hws/mlx5dr_internal.h +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -39,6 +39,7 @@ #include "mlx5dr_debug.h" #include "mlx5dr_pat_arg.h" #include "mlx5dr_crc32.h" +#include "mlx5dr_bwc.h" #define W_SIZE 2 #define DW_SIZE 4 diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c index 2942668e76..0120f03a48 100644 --- a/drivers/net/mlx5/hws/mlx5dr_send.c +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -971,10 +971,42 @@ static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queu mlx5dr_send_queue_close(&ctx->send_queue[queues]); } +static int mlx5dr_bwc_send_queues_init(struct mlx5dr_context *ctx) +{ + int bwc_queues = ctx->queues - 1; + int i; + + if (!mlx5dr_context_bwc_supported(ctx)) + return 0; + + ctx->queues += bwc_queues; + + ctx->bwc_send_queue_locks = simple_calloc(bwc_queues, + sizeof(*ctx->bwc_send_queue_locks)); + if (!ctx->bwc_send_queue_locks) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < bwc_queues; i++) + rte_spinlock_init(&ctx->bwc_send_queue_locks[i]); + + return 0; +} + +static void mlx5dr_send_queues_bwc_locks_destroy(struct mlx5dr_context *ctx) +{ + if (!mlx5dr_context_bwc_supported(ctx)) + return; + + simple_free(ctx->bwc_send_queue_locks); +} + void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) { __mlx5dr_send_queues_close(ctx, ctx->queues); simple_free(ctx->send_queue); + mlx5dr_send_queues_bwc_locks_destroy(ctx); } int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, @@ -987,10 +1019,16 @@ int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, /* Open one extra queue for control path */ ctx->queues = queues + 1; + /* open a separate set of queues and locks for bwc API */ + err = mlx5dr_bwc_send_queues_init(ctx); + if (err) + return err; + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); if (!ctx->send_queue) { rte_errno = ENOMEM; - return rte_errno; + err = rte_errno; + goto free_bwc_locks; } for (i = 0; i < ctx->queues; i++) { @@ -1006,6 +1044,9 @@ int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, simple_free(ctx->send_queue); +free_bwc_locks: + mlx5dr_send_queues_bwc_locks_destroy(ctx); + return err; } -- 2.25.1
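
As a closing illustration (not part of the patch): a minimal sketch of opening
an mlx5dr context with the new 'bwc' attribute so that the BWC entry points
above become usable. The queue counts are arbitrary example values and
obtaining the ibv_context is out of scope here.

#include "mlx5dr.h"

/* Illustrative only: with attr.bwc set, mlx5dr_send_queues_open() also
 * allocates the dedicated BWC queue set and the per-queue spinlocks.
 */
static struct mlx5dr_context *open_ctx_with_bwc(struct ibv_context *ibv_ctx)
{
	struct mlx5dr_context_attr attr = {
		.queues = 16,      /* application data-path queues (example value) */
		.queue_size = 256, /* example value */
		.bwc = true,       /* request backward-compatible API support */
	};

	return mlx5dr_context_open(ibv_ctx, &attr);
}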