From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC:
Subject: [PATCH v3 4/4] net/mlx5: remove port from conntrack handle representation
Date: Tue, 27 Feb 2024 15:52:24 +0200
Message-ID: <20240227135224.20066-5-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240227135224.20066-1-dsosnowski@nvidia.com>
References: <20240223142320.49470-1-dsosnowski@nvidia.com> <20240227135224.20066-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

This patch removes the owner port index from the integer representation of the indirect action handle in the mlx5 PMD for conntrack flow actions.

This index is not needed when the HW Steering flow engine is enabled, because either:

- a port references its own indirect actions, or
- a port references the indirect actions of the host port when sharing indirect actions was configured.

In both cases it is explicitly known which port owns the action. The port index included in the action handle introduced an unnecessary limitation and caused undefined behavior when an application used more than the supported number of ports.

This patch removes the port index from the indirect conntrack action handle representation when the HW Steering flow engine is used. It does not affect the SW Steering flow engine.
Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 doc/guides/nics/mlx5.rst        |  2 +-
 drivers/net/mlx5/mlx5_flow.h    | 18 +++++++++++---
 drivers/net/mlx5/mlx5_flow_hw.c | 44 +++++++++++----------------------
 3 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index db47d70b70..329b98f68f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -815,7 +815,7 @@ Limitations
   - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
   - Flow rules insertion rate and memory consumption need more optimization.
-  - 16 ports maximum.
+  - 16 ports maximum (with ``dv_flow_en=1``).
   - 32M connections maximum.

 - Multi-thread flow insertion:
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b4bf96cd64..187f440893 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -80,7 +80,12 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)

-/* 29-31: type, 25-28: owner port, 0-24: index */
+/*
+ * When SW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:25 - port index of the action owner
+ * - bits 24:0 - action index
+ */
 #define MLX5_INDIRECT_ACT_CT_GEN_IDX(owner, index) \
 	((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | \
 	 (((owner) & MLX5_INDIRECT_ACT_CT_OWNER_MASK) << \
@@ -93,9 +98,14 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_GET_IDX(index) \
 	((index) & ((1 << MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) - 1))

-#define MLX5_ACTION_CTX_CT_GET_IDX MLX5_INDIRECT_ACT_CT_GET_IDX
-#define MLX5_ACTION_CTX_CT_GET_OWNER MLX5_INDIRECT_ACT_CT_GET_OWNER
-#define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX
+/*
+ * When HW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:0 - action index
+ */
+#define MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(index) \
+	((struct rte_flow_action_handle *)(uintptr_t) \
+	 ((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (index)))

 enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2550e0604f..e48a927bf0 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -563,7 +563,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_action *ct;

-	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
+	ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
@@ -2462,8 +2462,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 			if (masks->conf) {
-				ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-					 ((uint32_t)(uintptr_t)actions->conf);
+				ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(actions->conf);
 				if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE, ct_idx,
 						       &acts->rule_acts[dr_pos]))
 					goto err;
@@ -3180,8 +3179,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			job->flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-				 ((uint32_t)(uintptr_t)action->conf);
+			ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(action->conf);
 			if (flow_hw_ct_compile(dev, queue, ct_idx,
 					       &rule_acts[act_data->action_dst]))
 				return -1;
@@ -3796,16 +3794,14 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 			aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx);
 			aso_mtr->state = ASO_METER_READY;
 		} else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-				((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			aso_ct->state = ASO_CONNTRACK_READY;
 		}
 	} else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) {
 		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
 		if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-				((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			mlx5_aso_ct_obj_analyze(job->query.user,
 						job->query.hw);
@@ -10225,7 +10221,6 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
@@ -10235,7 +10230,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT destruction is not allowed to guest port");
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10244,7 +10239,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 	}
 	__atomic_store_n(&ct->state, ASO_CONNTRACK_FREE,
 			 __ATOMIC_RELAXED);
-	mlx5_ipool_free(pool->cts, ct_idx);
+	mlx5_ipool_free(pool->cts, idx);
 	return 0;
 }

@@ -10257,15 +10252,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint32_t ct_idx;

 	if (priv->shared_host)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT query is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10293,7 +10286,6 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint32_t ct_idx;
 	int ret = 0;

 	if (priv->shared_host)
@@ -10301,8 +10293,7 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT update is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10363,13 +10354,6 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
-	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "CT supports port indexes up to "
-				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
-		return 0;
-	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
@@ -10399,8 +10383,7 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 			return 0;
 		}
 	}
-	return (struct rte_flow_action_handle *)(uintptr_t)
-	       MLX5_ACTION_CTX_CT_GEN_IDX(PORT_ID(priv), ct_idx);
+	return MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(ct_idx);
 }

 /**
@@ -10741,7 +10724,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		if (ct_conf->state)
 			aso = true;
-		ret = flow_hw_conntrack_update(dev, queue, update, act_idx,
+		ret = flow_hw_conntrack_update(dev, queue, update, idx,
 					       job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
@@ -10830,7 +10813,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 		mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
-		ret = flow_hw_conntrack_destroy(dev, act_idx, error);
+		ret = flow_hw_conntrack_destroy(dev, idx, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
 		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
@@ -11116,6 +11099,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+	uint32_t idx = MLX5_INDIRECT_ACTION_IDX_GET(handle);
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
 	bool push = flow_hw_action_push(attr);
@@ -11139,7 +11123,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 		aso = true;
 		if (job)
 			job->query.user = data;
-		ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
+		ret = flow_hw_conntrack_query(dev, queue, idx, data,
 					      job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
-- 
2.25.1