From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Vesker 
To: , , , , Matan Azrad 
CC: , , Erez Shitrit 
Subject: [v4 15/18] net/mlx5/hws: Add HWS rule object
Date: Wed, 19 Oct 2022 17:42:55 +0300
Message-ID: <20221019144258.16282-16-valex@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221019144258.16282-1-valex@nvidia.com>
References: <20220922190345.394-1-valex@nvidia.com>
 <20221019144258.16282-1-valex@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

HWS rule objects reside under the matcher. Each rule holds the
configuration of the packet fields to match on and the set of actions
to execute on packets that match those fields. Rules can be created
asynchronously and in parallel over multiple queues, to different
matchers. Each rule is then configured in the HW.

Signed-off-by: Erez Shitrit 
Signed-off-by: Alex Vesker 
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_rule.h |  50 +++
 2 files changed, 578 insertions(+)
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
new file mode 100644
index 0000000000..b27318e6d4
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -0,0 +1,528 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#include "mlx5dr_internal.h"
+
+static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher,
+			     const struct rte_flow_item *items,
+			     bool *skip_rx, bool *skip_tx)
+{
+	struct mlx5dr_match_template *mt = matcher->mt[0];
+	const struct flow_hw_port_info *vport;
+	const struct rte_flow_item_ethdev *v;
+
+	/* Flow_src is the 1st priority */
+	if (matcher->attr.optimize_flow_src) {
+		*skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		*skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT;
+		return;
+	}
+
+	/* By default FDB rules are added to both RX and TX */
+	*skip_rx = false;
+	*skip_tx = false;
+
+	if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) {
+		v = items[mt->vport_item_id].spec;
+		vport = flow_hw_conv_port_id(v->port_id);
+		if (unlikely(!vport)) {
+			DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id);
+			return;
+		}
+
+		if (!vport->is_wire)
+			/* Match vport ID is not WIRE -> Skip RX */
+			*skip_rx = true;
+		else
+			/* Match vport ID is WIRE -> Skip TX */
+			*skip_tx = true;
+	}
+}
+
+static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
+				     struct mlx5dr_rule *rule,
+				     const struct rte_flow_item *items,
+				     void *user_data)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_table *tbl = matcher->tbl;
+	bool skip_rx, skip_tx;
+
+	dep_wqe->rule = rule;
+	dep_wqe->user_data = user_data;
+
+	switch (tbl->type) {
+	case MLX5DR_TABLE_TYPE_NIC_RX:
+	case MLX5DR_TABLE_TYPE_NIC_TX:
+		dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id;
+		dep_wqe->retry_rtc_0 = matcher->col_matcher ?
+				       matcher->col_matcher->match_ste.rtc_0->id : 0;
+		dep_wqe->rtc_1 = 0;
+		dep_wqe->retry_rtc_1 = 0;
+		break;
+
+	case MLX5DR_TABLE_TYPE_FDB:
+		mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx);
+
+		if (!skip_rx) {
+			dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id;
+			dep_wqe->retry_rtc_0 = matcher->col_matcher ?
+					       matcher->col_matcher->match_ste.rtc_0->id : 0;
+		} else {
+			dep_wqe->rtc_0 = 0;
+			dep_wqe->retry_rtc_0 = 0;
+		}
+
+		if (!skip_tx) {
+			dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id;
+			dep_wqe->retry_rtc_1 = matcher->col_matcher ?
+					       matcher->col_matcher->match_ste.rtc_1->id : 0;
+		} else {
+			dep_wqe->rtc_1 = 0;
+			dep_wqe->retry_rtc_1 = 0;
+		}
+
+		break;
+
+	default:
+		assert(false);
+		break;
+	}
+}
+
+static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
+				 struct mlx5dr_rule *rule,
+				 bool err,
+				 void *user_data,
+				 enum mlx5dr_rule_status rule_status_on_succ)
+{
+	enum rte_flow_op_status comp_status;
+
+	if (!err) {
+		comp_status = RTE_FLOW_OP_SUCCESS;
+		rule->status = rule_status_on_succ;
+	} else {
+		comp_status = RTE_FLOW_OP_ERROR;
+		rule->status = MLX5DR_RULE_STATUS_FAILED;
+	}
+
+	mlx5dr_send_engine_inc_rule(queue);
+	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
+}
+
+static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
+					struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	int ret;
+
+	/* Use rule_idx for locking optimization, otherwise allocate from pool */
+	if (matcher->attr.optimize_using_rule_idx) {
+		rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes;
+	} else {
+		struct mlx5dr_pool_chunk ste = {0};
+
+		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
+		ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste);
+		if (ret) {
+			DR_LOG(ERR, "Failed to allocate STE for rule actions");
+			return ret;
+		}
+		rule->action_ste_idx = ste.offset;
+	}
+	return 0;
+}
+
+void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+
+	if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) {
+		struct mlx5dr_pool_chunk ste = {0};
+
+		/* This release is safe only when the rule match part was deleted */
+		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
+		ste.offset = rule->action_ste_idx;
+		mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste);
+	}
+}
+
+static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
+				    struct mlx5dr_send_ste_attr *ste_attr,
+				    struct mlx5dr_actions_apply_data *apply)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_table *tbl = matcher->tbl;
+	struct mlx5dr_context *ctx = tbl->ctx;
+
+	/* Init rule before reuse */
+	rule->rtc_0 = 0;
+	rule->rtc_1 = 0;
+	rule->pending_wqes = 0;
+	rule->action_ste_idx = -1;
+	rule->status = MLX5DR_RULE_STATUS_CREATING;
+
+	/* Init default send STE attributes */
+	ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+
+	/* Init default action apply */
+	apply->tbl_type = tbl->type;
+	apply->common_res = &ctx->common_res[tbl->type];
+	apply->jump_to_action_stc = matcher->action_ste.stc.offset;
+	apply->require_dep = 0;
+}
+
+static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
+				  struct mlx5dr_rule_attr *attr,
+				  uint8_t mt_idx,
+				  const struct rte_flow_item items[],
+				  uint8_t at_idx,
+				  struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dr_action_template *at = rule->matcher->at[at_idx];
+	struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx];
+	bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer);
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_ring_dep_wqe *dep_wqe;
+	struct mlx5dr_actions_wqe_setter *setter;
+	struct mlx5dr_actions_apply_data apply;
+	struct mlx5dr_send_engine *queue;
+	uint8_t total_stes, action_stes;
+	int i, ret;
+
+	queue = &ctx->send_queue[attr->queue_id];
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+
+	/* Allocate dependent match WQE since rule might have dependent writes.
+	 * The queued dependent WQE can be later aborted or kept as a dependency.
+	 * dep_wqe buffers (ctrl, data) are also reused for all STE writes.
+	 */
+	dep_wqe = mlx5dr_send_add_new_dep_wqe(queue);
+	mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data);
+
+	ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
+	ste_attr.wqe_data = &dep_wqe->wqe_data;
+	apply.wqe_ctrl = &dep_wqe->wqe_ctrl;
+	apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data;
+	apply.rule_action = rule_actions;
+	apply.queue = queue;
+
+	setter = &at->setters[at->num_of_action_stes];
+	total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term);
+	action_stes = total_stes - 1;
+
+	if (action_stes) {
+		/* Allocate action STEs for complex rules */
+		ret = mlx5dr_rule_alloc_action_ste(rule, attr);
+		if (ret) {
+			DR_LOG(ERR, "Failed to allocate action memory %d", ret);
+			mlx5dr_send_abort_new_dep_wqe(queue);
+			return ret;
+		}
+		/* Skip RX/TX based on the dep_wqe init */
+		ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0;
+		ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0;
+		/* Action STEs are written to a specific index last to first */
+		ste_attr.direct_index = rule->action_ste_idx + action_stes;
+		apply.next_direct_idx = ste_attr.direct_index;
+	} else {
+		apply.next_direct_idx = 0;
+	}
+
+	for (i = total_stes; i-- > 0;) {
+		mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo);
+
+		if (i == 0) {
+			/* Handle last match STE */
+			mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz,
+						  (uint8_t *)dep_wqe->wqe_data.action);
+
+			/* Rule has dependent WQEs, match dep_wqe is queued */
+			if (action_stes || apply.require_dep)
+				break;
+
+			/* Rule has no dependencies, abort dep_wqe and send WQE now */
+			mlx5dr_send_abort_new_dep_wqe(queue);
+			ste_attr.wqe_tag_is_jumbo = is_jumbo;
+			ste_attr.send_attr.notify_hw = !attr->burst;
+			ste_attr.send_attr.user_data = dep_wqe->user_data;
+			ste_attr.send_attr.rule = dep_wqe->rule;
+			ste_attr.direct_index = 0;
+			ste_attr.rtc_0 = dep_wqe->rtc_0;
+			ste_attr.rtc_1 = dep_wqe->rtc_1;
+			ste_attr.used_id_rtc_0 = &rule->rtc_0;
+			ste_attr.used_id_rtc_1 = &rule->rtc_1;
+			ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0;
+			ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1;
+		} else {
+			apply.next_direct_idx = --ste_attr.direct_index;
+		}
+
+		mlx5dr_send_ste(queue, &ste_attr);
+	}
+
+	/* Backup TAG on the rule for deletion */
+	if (is_jumbo)
+		memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ);
+	else
+		memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ);
+
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
+static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
+					   struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	mlx5dr_rule_gen_comp(queue, rule, false,
+			     attr->user_data, MLX5DR_RULE_STATUS_DELETED);
+
+	/* Rule failed, now we can safely release action STEs */
+	mlx5dr_rule_free_action_ste_idx(rule);
+
+	/* If a rule that was indicated as burst (need to trigger HW) has failed
+	 * insertion we won't ring the HW as nothing is being written to the WQ.
+	 * In such case update the last WQE and ring the HW with that work
+	 */
+	if (attr->burst)
+		return;
+
+	mlx5dr_send_all_dep_wqe(queue);
+	mlx5dr_send_engine_flush_queue(queue);
+}
+
+static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
+				   struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0};
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	/* Rule is not completed yet */
+	if (rule->status == MLX5DR_RULE_STATUS_CREATING) {
+		rte_errno = EBUSY;
+		return rte_errno;
+	}
+
+	/* Rule failed and doesn't require cleanup */
+	if (rule->status == MLX5DR_RULE_STATUS_FAILED) {
+		mlx5dr_rule_destroy_failed_hws(rule, attr);
+		return 0;
+	}
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		mlx5dr_rule_destroy_failed_hws(rule, attr);
+		return 0;
+	}
+
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQE */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	rule->status = MLX5DR_RULE_STATUS_DELETING;
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.rtc_0 = rule->rtc_0;
+	ste_attr.rtc_1 = rule->rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = &wqe_ctrl;
+	ste_attr.wqe_tag = &rule->tag;
+	ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer);
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule,
+				   struct mlx5dr_rule_attr *rule_attr,
+				   const struct rte_flow_item items[],
+				   uint8_t at_idx,
+				   struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher;
+	uint8_t num_actions = rule->matcher->at[at_idx]->num_actions;
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dv_flow_match_parameters *value;
+	struct mlx5_flow_attr flow_attr = {0};
+	struct mlx5dv_flow_action_attr *attr;
+	struct rte_flow_error error;
+	uint8_t match_criteria;
+	int ret;
+
+	attr = simple_calloc(num_actions, sizeof(*attr));
+	if (!attr) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) +
+			      offsetof(struct mlx5dv_flow_match_parameters, match_buf));
+	if (!value) {
+		rte_errno = ENOMEM;
+		goto free_attr;
+	}
+
+	flow_attr.tbl_type = rule->matcher->tbl->type;
+
+	ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf,
+					  MLX5_SET_MATCHER_HS_V, NULL,
+					  &match_criteria,
+					  &error);
+	if (ret) {
+		DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message);
+		goto free_value;
+	}
+
+	/* Convert actions to verb action attr */
+	ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr);
+	if (ret)
+		goto free_value;
+
+	/* Create verb flow */
+	value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param);
+	rule->flow = mlx5_glue->dv_create_flow_root(dv_matcher,
+						    value,
+						    num_actions,
+						    attr);
+
+	mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow,
+			     rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED);
+
+	simple_free(value);
+	simple_free(attr);
+
+	return 0;
+
+free_value:
+	simple_free(value);
+free_attr:
+	simple_free(attr);
+
+	return -rte_errno;
+}
+
+static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	int err = 0;
+
+	if (rule->flow)
+		err = ibv_destroy_flow(rule->flow);
+
+	mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err,
+			     attr->user_data, MLX5DR_RULE_STATUS_DELETED);
+
+	return 0;
+}
+
+int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
+		       uint8_t mt_idx,
+		       const struct rte_flow_item items[],
+		       uint8_t at_idx,
+		       struct mlx5dr_rule_action rule_actions[],
+		       struct mlx5dr_rule_attr *attr,
+		       struct mlx5dr_rule *rule_handle)
+{
+	struct mlx5dr_context *ctx;
+	int ret;
+
+	rule_handle->matcher = matcher;
+	ctx = matcher->tbl->ctx;
+
+	if (unlikely(!attr->user_data)) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	/* Check if there is room in queue */
+	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+
+	assert(matcher->num_of_mt >= mt_idx);
+	assert(matcher->num_of_at >= at_idx);
+
+	if (unlikely(mlx5dr_table_is_root(matcher->tbl)))
+		ret = mlx5dr_rule_create_root(rule_handle,
+					      attr,
+					      items,
+					      at_idx,
+					      rule_actions);
+	else
+		ret = mlx5dr_rule_create_hws(rule_handle,
+					     attr,
+					     mt_idx,
+					     items,
+					     at_idx,
+					     rule_actions);
+	return -ret;
+}
+
+int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
+			struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	int ret;
+
+	if (unlikely(!attr->user_data)) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	/* Check if there is room in queue */
+	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+
+	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
+		ret = mlx5dr_rule_destroy_root(rule, attr);
+	else
+		ret = mlx5dr_rule_destroy_hws(rule, attr);
+
+	return -ret;
+}
+
+size_t mlx5dr_rule_get_handle_size(void)
+{
+	return sizeof(struct mlx5dr_rule);
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
new file mode 100644
index 0000000000..96c85674f2
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef MLX5DR_RULE_H_
+#define MLX5DR_RULE_H_
+
+enum {
+	MLX5DR_STE_CTRL_SZ = 20,
+	MLX5DR_ACTIONS_SZ = 12,
+	MLX5DR_MATCH_TAG_SZ = 32,
+	MLX5DR_JUMBO_TAG_SZ = 44,
+};
+
+enum mlx5dr_rule_status {
+	MLX5DR_RULE_STATUS_UNKNOWN,
+	MLX5DR_RULE_STATUS_CREATING,
+	MLX5DR_RULE_STATUS_CREATED,
+	MLX5DR_RULE_STATUS_DELETING,
+	MLX5DR_RULE_STATUS_DELETED,
+	MLX5DR_RULE_STATUS_FAILING,
+	MLX5DR_RULE_STATUS_FAILED,
+};
+
+struct mlx5dr_rule_match_tag {
+	union {
+		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
+		struct {
+			uint8_t reserved[MLX5DR_ACTIONS_SZ];
+			uint8_t match[MLX5DR_MATCH_TAG_SZ];
+		};
+	};
+};
+
+struct mlx5dr_rule {
+	struct mlx5dr_matcher *matcher;
+	union {
+		struct mlx5dr_rule_match_tag tag;
+		struct ibv_flow *flow;
+	};
+	uint32_t rtc_0; /* The RTC into which the STE was inserted */
+	uint32_t rtc_1; /* The RTC into which the STE was inserted */
+	int action_ste_idx; /* Action STE pool ID */
+	uint8_t status; /* enum mlx5dr_rule_status */
+	uint8_t pending_wqes;
+};
+
+void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
+
+#endif /* MLX5DR_RULE_H_ */
-- 
2.18.1
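
For context, a minimal usage sketch of the new rule API follows (not part of the patch). It assumes a mlx5dr_context, matcher, match items and rule actions created via the earlier patches in this series, and it assumes the send-queue completion helper mlx5dr_send_queue_poll() and the mlx5dr_rule_attr fields (queue_id, burst, user_data) exposed by mlx5dr.h; the helper name and attribute layout are taken from this series and may differ from the final API.

#include <errno.h>
#include <stdlib.h>
#include <rte_flow.h>
#include "mlx5dr.h"

/* Insert one rule on queue 0 and busy-poll until its completion arrives.
 * Hypothetical helper for illustration only.
 */
static int example_insert_rule(struct mlx5dr_context *ctx,
			       struct mlx5dr_matcher *matcher,
			       const struct rte_flow_item items[],
			       struct mlx5dr_rule_action rule_actions[])
{
	struct rte_flow_op_result res[1];
	struct mlx5dr_rule_attr attr = {
		.queue_id = 0,	/* send engine used for this rule */
		.burst = 0,	/* ring the HW doorbell immediately */
	};
	struct mlx5dr_rule *rule;
	int ret;

	/* The rule handle is opaque, the caller owns its storage */
	rule = calloc(1, mlx5dr_rule_get_handle_size());
	if (!rule)
		return -ENOMEM;

	attr.user_data = rule;	/* echoed back in the completion */

	/* Use match template 0 and action template 0 of the matcher */
	ret = mlx5dr_rule_create(matcher, 0, items, 0, rule_actions, &attr, rule);
	if (ret)
		goto err_free;

	/* Creation is asynchronous, wait for the completion on the same queue.
	 * mlx5dr_send_queue_poll() is assumed from the send engine patch.
	 */
	do {
		ret = mlx5dr_send_queue_poll(ctx, attr.queue_id, res, 1);
	} while (ret == 0);

	if (ret < 0 || res[0].status != RTE_FLOW_OP_SUCCESS) {
		ret = ret < 0 ? ret : -EIO;
		goto err_free;
	}

	return 0;

err_free:
	free(rule);
	return ret;
}

Deletion follows the same pattern: mlx5dr_rule_destroy() on the same (or another) queue, then poll until the MLX5DR_RULE_STATUS_DELETED completion arrives before freeing the handle.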